diff --git "a/training/sample_data/search_small.jsonl" "b/training/sample_data/search_small.jsonl" deleted file mode 100644--- "a/training/sample_data/search_small.jsonl" +++ /dev/null @@ -1,100 +0,0 @@ -{"query": "Measuring utility by a single-response sequential method.", "session_id": 7278244041409283, "user_id": 6866635221095652, "candidates": [{"corpus_id": 26653830, "title": "Measuring utility by a single response sequential method.", "abstract": "The purpose of this paper is to describe a sequential experiment that provides, at each stage in the sequence, an estimate of the utility to the subject of some amount of a commodity (e.g. money) and to present a few experimental results obtained with the method. The procedure is based upon the following well known 'expected utility hypothesis' For each person there exist numerical constants, called utilities, associated with the various possible outcomes of his actions, given the external events not under his control. If, for a given subject, we could know the values of these constants and the 'personal' probabilities he assigns to the various external events we could, according to this model, predict his choice from among any available set of actions. He will choose an action with the highest expected utility; i.e. with the highest average of utilities of outcomes, weighted by the probabilities he assigns to the corresponding events. He will be indifferent between any two actions with equal expected utilities. Note that (by the nature of weighted averages) the comparison between expected utilities does not depend on which two particular outcomes are regarded as having zero utility and unit utility.", "venue": "Behavioral science", "year": 1964.0, "author_names": ["G M Becker", "Morris H Degroot", "Jacob Marschak"], "n_citations": 2365, "n_key_citations": 131, "score": 1}, {"corpus_id": 230625269, "title": "Studies on the public health importance of infestation of Ostracoda Vargula tsujii (Myodocopa: Cypridinidae) in some marine food fishes off Pamban, Southeast coast of India A case study", "abstract": "The present study was the first attempt to investigate the public health importance of infestation of Ostracoda in some marine food fishes in southeast region of Tamil Nadu India during June 2019 to May 2020 by the method of Becker's measuring utility by a single response sequential method. Total 540 fishermen belonging to 5 villages from Ramand District were interviewed to understand the public health issues related the infestation of Ostracoda V. tsujii in ten major marine food fishes i.e. Parupeneus indicus, Lutjanus fulviflamma, Priacanthus hamrur (Snapper) Carangoides gymnostethus, Carangoides malabaricus, Carangoides ferdau (Carangids) Cephalopholis sonnerati, Epinephelus coioides (grouper) Lethrinus ornatus and Plectorhinchus gibbosus (sea bream) Fishermen, local whole sale buyers, small fishstall owners and fish consumers were part of respondents. It was observed that there was no difference at statistically significant level (P 0.05) between infested and healthy fish samples in terms of nutritional profile like protein, fat, ash, carbohydrates and mineral nutrients level. 
Based on the feedback and information obtained from respondents in the present study found that no incidence of health issues or risk associated with food fishes infested with Ostracoda Vargula tsujii (local name Arattlai) or any other true parasites.", "venue": "", "year": 2020.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 145623987, "title": "The effects of endowment on the demand for probabilistic information", "abstract": "Research shows that individuals are ambiguity averse: they choose unambiguous over equivalent ambiguous prospects and price them higher (either as buyers or sellers) Moreover, it is often assumed that ambiguity averse individuals are willing to pay an ambiguity premium for information that reduces ambiguity [Camerer, C. F. Weber, M. (1992) Recent developments in modeling preferences: Uncertainty and ambiguity. Journal of Risk and Uncertainty, 5(4) 325 370] However, when people are asked to exchange an ambiguous alternative in their possession for an equivalent unambiguous one, they prefer to retain the former [Roca, M. Hogarth, R. M. Maule, A. J. (2006) Ambiguity seeking as a result of the status quo bias. Journal of Risk and Uncertainty, 32(3) 175 194] We present three experiments investigating the economic effects of endowment on attitudes towards ambiguity and the ambiguity premium. The experiments, based on a [Becker, G. DeGroot, M. Marschak, J. (1964) Measuring utility by a single response sequential method. Science, 9, 3] procedure, show that the value attributed to ambiguity reducing information is substantially affected by the status quo of the individual.", "venue": "", "year": 2009.0, "author_names": ["Merce Roca", "A John Maule"], "n_citations": 7, "n_key_citations": 0, "score": 0}, {"corpus_id": 197604874, "title": "Experiments in Environmental Economics", "abstract": "Volume I: Motives and methods Microeconomic systems as an experimental science, Vernon L. Smith (1982) Will economics become an experimental science? Charles R. Plott (1991) Economics and ecology A comparison of experimental methodologies and philosophies, Jason F. Shogren and Clifford Nowell (1992) Let's keep the con out of experimental econ A methodological note, Alvin E. Roth (1994) Progress in behavioral game theory, Colin F. Camerer (1997) Environmental risk: Measuring Utility by a Single Response Sequential Method, Gordon M. Becker, Morris H. DeGroot and Jacob Marschak (1964) Economic theory of choice and the preference reversal phenomenon, David M. Grether and Charles R. Plott (1979) Prospect theory An analysis of decision under risk, Daniel Kahneman and Amos Tversky (1979) The framing of decisions and the psychology of choice, Amos Tversky and Daniel Kahneman (1981) Do biases in probability judgment matter in markets? Colin F. Camerer (1987) Experimental Evidence: Risk, ambiguity, and insurance, Robin M. Hogarth and Howard Kunreuther (1989) The impact of self protection and self insurance on individual response to risk, Jason F. Shogren (1990) The endowment effect, loss aversion, and status quo bias, Daniel Kahneman, Jack L. Knetsch and Richard H. Thaler (1991) Insurance for low probability hazards A bimodal response to unlikely events, Gary H. McClelland, William D. Schulze and Don L. Coursey (1993) Investigating generalizations of expected utility theory using experimental data, John D. Hey and Chris Orme (1994) Environmental conflict: An empirical approach to the prisoners' dilemma game, Lester B. 
Lave (1962) Probabilistic destruction of common pool resources: Experimental evidence, James M. Walker and Roy Gardner (1992) Communication in coordination games, Russell Cooper, Douglas V. DeJong, Robert Forsythe and Thomas W. Ross (1992) An experimental study of the centipede game, Richard D. McKelvey and Thomas R. Palfrey (1992) Repeated play, cooperation and coordination: An experimental study, Thomas R. Palfrey and Howard Rosenthal (1994) The role of communication in resolving commons dilemmas Experimental evidence with heterogeneous appropriators, Steven Hackett, Edella Schlager and James Walker (1994) Mitigating the tragedy of the commons through cooperation An experimental evaluation, Charles F. Mason and Owen R. Phillips (1997) Endogenous timing in a gaming tournament, Kyung Hwan Baik, Todd L. Cherry, Stephan Kroll and Jason F. Shogren (1999) Volume II: Environmental cooperation: The coase theorem Some experimental tests, Elizabeth Hoffman and Matthew L. Spitzer (1982) Experimental evaluation of the coase theorem, Glenn W. Harrison and Michael McKee (1985) Coasian solutions to the externality problem in experimental markets, Glenn W. Harrison, Elizabeth Hoffman, E.E. Rutstrom and Matthew L. Spitzer (1987)", "venue": "", "year": 1999.0, "author_names": ["Jason F Shogren", "Terrence M Hurley"], "n_citations": 16, "n_key_citations": 0, "score": 0}, {"corpus_id": 105932450, "title": "Sequential chemical separations and multiple ion counting ICP MS for 241Pu 241Am 237Np dating of environmental collections on a single aliquot", "abstract": "We have developed a combined sequential chemical separation procedure and multiple ion counting ICP MS measurement method for isotopic measurements of Am in environmental samples. This, in conjunction with established resin bead TIMS measurements for Pu and Np, allows us to measure long lived Pu Np Am nuclides in environmental samples on a single solution aliquot. This single aliquot method reduces time lines and maximizes sample utility for the measurements, improving sensitivity, precision, and accuracy over prior methods. We have evaluated this new method with environmental reference materials and have obtained accurate results on samples with 3E6 atoms 241Am.", "venue": "Journal of Radioanalytical and Nuclear Chemistry", "year": 2018.0, "author_names": ["Steven J Goldstein", "Kimberly Ann Hinrichs", "Andrew J Nunn", "Daniel Wade Gurganus", "Ronald S Amato", "Warren J Oldham"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 204886661, "title": "Sequential pattern sampling with norm based utility", "abstract": "Sequential pattern mining has been introduced by Agrawal and Srikant (in: Proceedings of ICDE'95, pp 3 14, 1995) 2 decades ago, and its usefulness has been widely proved for different mining tasks and application fields such as web usage mining, text mining, bioinformatics, fraud detection and so on. Since 1995, despite numerous optimization proposals, sequential pattern mining remains a costly task that often generates too many patterns. This limit, also reached by itemset mining, was circumvented by pattern sampling. Pattern sampling is a non exhaustive method for instantly discovering relevant patterns that ensures a good interactivity while providing strong statistical guarantees due to its random nature. Curiously, such an approach investigated for different kinds of patterns including itemsets and subgraphs has not yet been applied to sequential patterns. 
In this paper, we propose the first method dedicated to sequential pattern sampling. In addition to address sequential data, the originality of our approach is to introduce a class of interestingness measures relying on the norm of the sequence, named norm based utilities In particular, it enables to add constraints on the norm of sampled patterns to control the length of the drawn patterns and to avoid the pitfall of the \"long tail\" where the rarest patterns flood the user. We propose a new two step random procedure integrating this class of measures, named \\textsc {NUSSampling} NUSS A M P L I N G that randomly draws sequential patterns according to frequency weighted by a norm based utility. We demonstrate that this method performs an exact sampling according to the underlying measure. Moreover, despite the use of rejection sampling, the experimental study shows that \\textsc {NUSSampling} NUSS A M P L I N G remains efficient. We especially focus on the interest of norm constraints and exponential decays that help to draw general patterns of the \"head\" We also illustrate how to benefit from these sampled patterns to instantly build an associative classifier dedicated to sequences. This classification approach rivals state of the art proposals showing the interest of sequential pattern sampling with norm based utility.", "venue": "Knowledge and Information Systems", "year": 2019.0, "author_names": ["Lamine Diop", "Cheikh Talibouya Diop", "Arnaud Giacometti", "Dominique H Li", "Arnaud Soulet"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 208226882, "title": "Prescription drug monitoring program use and utility by Washington State pharmacists: A mixed methods study.", "abstract": "OBJECTIVES To explore factors and situations that influence pharmacists to use the prescription drug monitoring program (PDMP) and to characterize actions taken by pharmacists after alarming scenarios from a PDMP query. DESIGN Explanatory sequential 2 phase mixed methods design: (1) cross sectional Web based survey of Washington State pharmacists followed by (2) interviews with purposefully selected respondents to explore statistically significant quantitative findings. SETTING AND PARTICIPANTS The study was conducted in Washington State from September 2018 to February 2019. A total of 967 Washington State pharmacists from various practice settings, including inpatient and outpatient pharmacies, participated. Ten outpatient pharmacists were interviewed in the second phase. OUTCOME MEASURES The pharmacists reported the frequency of PDMP use, opinion on the usefulness of PDMP, and action(s) taken after a concerning PDMP report. RESULTS The usable response rate for pharmacists with a PDMP account was 17.6% (818/4659) and usable response rate for all pharmacists was 10.4% (967/9263) PDMP use varied by race, practice setting, and employer policy on PDMP use. Among the 818 PDMP users, 396 (48% used the database at least once during a shift. Frequent PDMP users were more likely to recommend naloxone compared with less frequent users (adjusted odds ratio 1.70 [95% CI 1.09 2.65] P 0.02) The following 3 interview themes were identified: time, company policy, and red flags. CONCLUSION PDMP has value to pharmacists of all practice settings studied. 
Frequent PDMP use may facilitate more pharmacist interventions, such as a naloxone prescription.", "venue": "Journal of the American Pharmacists Association JAPhA", "year": 2019.0, "author_names": ["Ryan G Pett", "Lloyd A Mancl", "Debra Revere", "Andy S Stergachis"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 149115376, "title": "The Utility of Different Measures in Adolescence to Assess Emotional Flow Patterns in Single and Mixed High and Low Arousal Emotion Experiences", "abstract": "This research investigated the utility of different measures in adolescence to assess emotional flow patterns in single and mixed high and low arousal emotion experiences. Seventy adolescents (30 males, 40 females) aged between 12 years, 5 months 18 years, 8 months (M 15 years, 11 months) participated in this research. Participants were allocated by age (younger, n=35, 12 years, 5 months 16 years, 9 months and older, n=35, 16 years, 9 months 18 years, 8 months) into two conditions. There were 34 participants in the first condition: happiness and sadness and 36 participants in the second condition: relaxed and tired. They read a vignette about a protagonist eliciting the condition emotions, then completing a 5 point Likert rating scale, questionnaire and the AES task. Chi Square analysis showed significant results within the questionnaire measure, with older participants reporting more single emotions responses, younger participants identifying AES pattern unknown and only females identifying highly simultaneous patterns of mixed emotions. Chi Square MeNemar found a significant difference between the questionnaire and AES responses in the highly simultaneous mixed emotion patterns. Findings revealed that the questionnaire produced more mixed emotion experience patterns. With some observable differences in the AES responses that were not significant, there is a suggestion in the suitable utility of the AES within the adolescent population. It is suggested that future research investigates adolescents own emotions not just the emotions of a protagonist as well as using an interview measure rather than a questionnaire measure. Emotion understanding and experiencing has been extensively investigated in the last 40 years to understand if conflicting emotions can be experienced simultaneously or sequentially (Bender, Somhovd, Pons, Reinholdt Dunne Esbjorn, 2015; Williams Aaker, 2002) On a daily basis, people find themselves in situations that elicit two different emotions (Watson Tellegen, 1985) with an example being happiness of seeing a sister who lives far away, but also sadness that it is not possible to see her often. With emotions often lacking stability and changing from time to time (Diener Larsen, 1984) it is essential to continue to understand mixed emotion experiences, with the aim to help individuals manage within society and live a fulfilled life. Research investigating children's understanding of emotions has found that emotion understanding is related to the quality of psychological well being, social relationships with peers, adults, and children's ability to resolve cognitive problems alone or in a group (Bender et al. 
2015) Mixed emotion research has investigated the understanding in children and adults, but there is a gap in research regarding adolescents and their understanding of emotions in themselves and other people.", "venue": "", "year": 2017.0, "author_names": ["A Student"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 230802633, "title": "An international mixed methods study to develop a new preference based measure for women with breast cancer: the BREAST Q Utility module", "abstract": "Background Generic preference based measures (PBM) though commonly used, may not be optimal for use in economic evaluations of breast cancer interventions. No breast cancer specific PBM currently exists, and the generic PBMs fail to capture the unique concerns of women with breast cancer (e.g. body image, appearance, treatment specific adverse effects) Hence, the objective of this study was to develop a breast cancer specific PBM, the BREAST Q Utility module. Methods Women diagnosed with breast cancer (stage 0 4, any treatment) were recruited from two tertiary hospitals in Canada and one in the US. The study followed an exploratory sequential mixed methods approach, whereby semi structured interviews were conducted and at the end of the interview, participants were asked to list their top five health related quality of life (HRQOL) concerns and to rate the importance of each item on the BREAST Q. Interviews were audio recorded, transcribed verbatim, and coded. Constant comparison was used to refine the codes and develop a conceptual framework. Qualitative and quantitative data were triangulated to develop the content of the Utility module that was refined through 2 rounds of cognitive debriefing interviews with women diagnosed with breast cancer and feedback from experts. Results Interviews were conducted with 57 women aged 55 10 years. A conceptual framework was developed from 3948 unique codes specific to breasts, arms, abdomen, and cancer experience. Five top level domains were HRQOL (i.e. physical, psychological, social, and sexual well being) and appearance. Data from the interviews, top 5 HRQOL concerns, and BREAST Q item ratings were used to inform dimensions for inclusion in the Utility module. Feedback from women with breast cancer (N 9) and a multidisciplinary group of experts (N 27) was used to refine the module. The field test version of the HSCS consists of 10 unique dimensions. Each dimension is measured with 1 or 2 candidate items that have 4 5 response levels each. Conclusion The field test version of the BREAST Q Utility module was derived from extensive patient and expert input. 
This comprehensive approach ensured that the content of the Utility module is relevant, comprehensive, and includes concerns that matter the most to women with breast cancer.", "venue": "BMC Women's Health", "year": 2021.0, "author_names": ["Manraj N Kaur", "Anne F Klassen", "Feng Xie", "Louise J Bordeleau", "Toni Zhong", "Stefan Cano", "Elena Tsangaris", "Trisia Breitkopf", "Ayse Kuspinar", "Andrea L Pusic"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 80151046, "title": "S33 The utility of feno in the differential diagnosis of chronic cough: the response to anti inflammatory therapy with prednisolone and montelukast", "abstract": "Objectives In this study we explored the effectiveness of treatment with montelukast 10 mg as compared with prednisolone in chronic cough patients with an associated elevated FeNO (The fraction of exhaled nitric oxide in breath) a marker of eosinophilic inflammation. Methods 50 non asthmatic patients with chronic cough were recruited sequentially from a specialist cough clinic. 30 patients with high FeNO =30 ppb) were randomised to either two weeks prednisolone 20 mg or two weeks montelukast 10 mg followed by montelukast 10 mg for the subsequent two weeks in both arms. A control group of 20 patients with low FeNO =20 ppb) were enrolled who received four weeks montelukast. 24 hours cough counting at baseline after 2 and 4 weeks treatment was the primary endpoint. Subjective measures of cough, the Leicester Cough Questionnaire (LCQ) and Hull Airways Reflux Questionnaire (HARQ) were also administered. Results At baseline the average FeNO value in both high FeNO treatment groups was similar (around 60+ 30 ppb) At the end of the study there was a significant fall in FeNO of approximately 30% in both high FeNO treatment groups (p 0.005) In the low FeNO group there was no significant change during the study (12+ 5 ppb) Therapy reduced the number of coughs in 24 hours by approximately 50% in both low and high FeNO groups (p<0.005) HARQ and LCQ scores also improved significantly (p<0.005) in all treatment groups. Conclusions The hypothesis that FeNO could be used as a marker of eosinophilic inflammation in chronic cough was supported by our observation at baseline in the high FeNO group of eosinophilia in both blood and sputum. However, baseline FeNO did not predict overall treatment response. Perhaps the most surprising aspect of our study is the dramatic response in the low FeNO group to montelukast. The fact that montelukast appears to be equally effective in the low FeNO group suggest the either the current markers of eosinophilic lung disease are insufficiently sensitive to pick up low levels of leukotriene activation in the low FeNO group, or that montelukast has its antitussive activity by an alternative mechanism. Abstarct S33 Figure 1 Measurements of FeNO, 24hr cough count, HARQ and LCQ in three treatment groups in three visits. 
Horizontal bars represent mean and SEM value.", "venue": "Thorax", "year": 2017.0, "author_names": ["Mahboobeh Haji Sadeghi", "Caroline E Wright", "Simon Paul Hart", "Michael G Crooks", "A H Morice"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "bumble bee homeotic shift", "session_id": 4298991950181362, "user_id": 2656413637352104, "candidates": [{"corpus_id": 143423547, "title": "A homeotic shift late in development drives mimetic color variation in a bumble bee", "abstract": "Significance Mimicry among bumble bees has driven them to diversify and converge in their color patterns, making them a replicate rich system for connecting genes to traits. Here, we discover that mimetic color variation in a bumble bee is driven by changes in Hox gene expression. Hox genes are master regulators of numerous segment specific morphologies and thus are some of the most conserved developmental genes across animals. In these bees, the posterior Hox gene Abd B is upregulated in a more anterior location to impart phenotypic change. This homeotic shift happens late in development, when nonspecific effects are minimized, thus availing these genes for color pattern diversification. Similar mimetic color patterns were inferred to use different mutations, suggesting diverse routes to mimicry. Natural phenotypic radiations, with their high diversity and convergence, are well suited for informing how genomic changes translate to natural phenotypic variation. New genomic tools enable discovery in such traditionally nonmodel systems. Here, we characterize the genomic basis of color pattern variation in bumble bees (Hymenoptera, Apidae, Bombus) a group that has undergone extensive convergence of setal color patterns as a result of Mullerian mimicry. In western North America, multiple species converge on local mimicry patterns through parallel shifts of midabdominal segments from red to black. Using genome wide association, we establish that a cis regulatory locus between the abdominal fate determining Hox genes, abd A and Abd B, controls the red black color switch in a western species, Bombus melanopygus. Gene expression analysis reveals distinct shifts in Abd B aligned with the duration of setal pigmentation at the pupal adult transition. This results in atypical anterior Abd B expression, a late developmental homeotic shift. Changing expression of Hox genes can have widespread effects, given their important role across segmental phenotypes; however, the late timing reduces this pleiotropy, making Hox genes suitable targets. 
Analysis of this locus across mimics and relatives reveals that other species follow independent genetic routes to obtain the same phenotypes.", "venue": "Proceedings of the National Academy of Sciences", "year": 2019.0, "author_names": ["Li Tian", "Sarthok Rasique Rahman", "Briana D Ezray", "Luca Franzini", "James P Strange", "Patrick Lhomme", "Heather M Hines"], "n_citations": 24, "n_key_citations": 0, "score": 1}, {"corpus_id": 182107603, "title": "Faculty Opinions recommendation of A homeotic shift late in development drives mimetic color variation in a bumble bee.", "abstract": "", "venue": "Faculty Opinions Post Publication Peer Review of the Biomedical Literature", "year": 2019.0, "author_names": ["Stacey D Smith"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 202024363, "title": "Faculty Opinions recommendation of A homeotic shift late in development drives mimetic color variation in a bumble bee.", "abstract": "", "venue": "Faculty Opinions Post Publication Peer Review of the Biomedical Literature", "year": 2019.0, "author_names": ["Xin Zhou", "Shiqi Luo"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 207809077, "title": "Historical changes in bumble bee body size and range shift of declining species", "abstract": "Bumble bees are declining worldwide, their vital ecosystem services are diminishing and underlying mechanisms are species specific and multifaceted. This has sparked an increase in long term assessments of historical collections that provide valuable information about population trends and shifts in distributions. However, museums specimens also contain important ecological information, including rarely measured morphological traits. Trait based assessments of museums specimens provide additional information on underlying mechanisms of population trends, by tracking changes over time. Here, we used museum specimens of four Bombus species, spanning a timeframe of 125 years to: (i) compare body size of declining and increasing species, (ii) assess intra specific trends over the last century, and (iii) investigate shifts in geographical distribution over time. We found that declining Bombus species were larger than increasing ones. All four species were smaller in current time than a century ago. Intra specific size declines were more pronounced for larger bodied species. With our sampling, declining and increasing species showed an upward shift in elevation, and declining species showed an additional geographic shift in recent times as compared to historic records. Intra specific body size declines may represent species adaptation to unfavorable environmental conditions, and may be a useful metric to complement traditional species vulnerability assessments. We highlight the utility of incorporating trait based assessments into future studies investigating species declines.", "venue": "Biodiversity and Conservation", "year": 2019.0, "author_names": ["Sabine S Nooten", "Sandra M Rehan"], "n_citations": 15, "n_key_citations": 1, "score": 0}, {"corpus_id": 214723704, "title": "Shift in worker physiology and gene expression pattern from reproductive to diapause like with colony age in the bumble bee Bombus impatiens", "abstract": "Insects maximize their fitness by exhibiting predictable and adaptive seasonal patterns in response to changing environmental conditions. These seasonal patterns are often expressed even when insects are kept in captivity, suggesting they are functionally and evolutionary important. 
In this study we examined whether workers of the eusocial bumble bee Bombus impatiens maintained a seasonal signature when kept in captivity. We used an integrative approach and compared worker egg laying, ovarian activation, body size and mass, lipid content in the fat body, cold tolerance and expression of genes related to cold tolerance, metabolism, and stress throughout colony development. We found that bumble bee worker physiology and gene expression patterns shift from reproductive like to diapause like as the colony ages. Workers eclosing early in the colony cycle had increased egg laying and ovarian activation, and reduced cold tolerance, body size, mass, and lipid content in the fat body, in line with a reproductive like profile, while late eclosing workers exhibited the opposite characteristics. Furthermore, expression patterns of genes associated with reproduction and diapause differed between early and late eclosing workers, partially following the physiological patterns. We suggest that a seasonal signature, innate to individual workers, the queen or the colony is used by workers as a social cue determining the phenology of the colony and discuss possible implications for understanding reproductive division of labor in bumble bee colonies and the evolutionary divergence of female castes in the genus Bombus.", "venue": "", "year": 2019.0, "author_names": ["Erin D Treanore", "Jacklyn M Kiner", "Mackenzie E Kerner", "Etya Amsalem"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 214629705, "title": "Shift in worker physiology and gene expression pattern from reproductive to diapause like with colony age in the bumble bee Bombus impatiens", "abstract": "ABSTRACT Insects maximize their fitness by exhibiting predictable and adaptive seasonal patterns in response to changing environmental conditions. These seasonal patterns are often expressed even when insects are kept in captivity, suggesting they are functionally and evolutionarily important. In this study, we examined whether workers of the eusocial bumble bee Bombus impatiens maintained a seasonal signature when kept in captivity. We used an integrative approach and compared worker egg laying, ovarian activation, body size and mass, lipid content in the fat body, cold tolerance and expression of genes related to cold tolerance, metabolism and stress throughout colony development. We found that bumble bee worker physiology and gene expression patterns shift from reproductive like to diapause like as the colony ages. Workers eclosing early in the colony cycle had increased egg laying and ovarian activation, and reduced cold tolerance, body size, mass and lipid content in the fat body, in line with a reproductive like profile, while late eclosing workers exhibited the opposite characteristics. Furthermore, expression patterns of genes associated with reproduction and diapause differed between early and late eclosing workers, partially following the physiological patterns. We suggest that a seasonal signature, innate to individual workers, the queen or the colony, is used by workers as a social cue determining the phenology of the colony and discuss possible implications for understanding reproductive division of labor in bumble bee colonies and the evolutionary divergence of female castes in the genus Bombus. 
Summary: Bumblebee workers exhibit a physiological signature (innate to workers, queen or the colony) corresponding to colony age with a shift towards a diapause like profile in late eclosing workers.", "venue": "Journal of Experimental Biology", "year": 2020.0, "author_names": ["Erin D Treanore", "Jacklyn M Kiner", "Mackenzie E Kerner", "Etya Amsalem"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 202866919, "title": "Bumble bee parasite prevalence but not genetic diversity impacted by the invasive plant Impatiens glandulifera", "abstract": "While many bee species are experiencing population declines, some host plant generalist bees remain common in Europe, partly because they seem able to shift to new resources. However, foraging on a new alternative plant, such as an invasive species, can modify diet quality and have a potentially detrimental effect on bee health. Herein, we investigated whether the spread of the invasive plant Impatiens glandulifera affects Bombus pascuorum population regarding parasite prevalence, genetic structure, and nest density in Belgium. While no difference in bumble bee genetic structure was detected between invaded and uninvaded sites, we show that I. glandulifera occurrence was significantly correlated with a decrease in the prevalence of Apicystis bombi but not the prevalence of three other parasite species (i.e. Crithidia bombi, Nosema bombi, Nosema ceranae, and Nosema sp. Regarding our investigations, this effect was likely not due to variation in local bumble bee population fitness before I. glandulifera flowering, nor to the relative abundance of other pollinators such as Apis mellifera, but the unique chemical composition (i.e. polyphenol rich) of the pollen of I. glandulifera remained as an interesting hypothesis. Whereas B. pascuorum queens probably colonize all the potential nesting sites in an area, invaded by I. glandulifera or not, the abundance of polyphenol ampelopsin in pollen from I. glandulifera pollen might reduce local parasite prevalence. Our field study confirms that bumble bee parasite prevalence is potentially related to the particular chemical composition of collected pollen. Plant traits such as secondary metabolite occurrence could play a key role in the health and conservation of bumble bees.", "venue": "Ecosphere", "year": 2019.0, "author_names": ["Maryse Vanderplanck", "Nathalie Roger", "Romain Moerman", "Guillaume Ghisbain", "Maxence Gerard", "Dominik Popowski", "Sebastian Granica", "Denis Fournier", "Ivan Meeus", "Niels Piot", "Guy Smagghe", "Lucas Terrana", "Denis Michez"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 199635463, "title": "Effects of Toxicant Exposure on Honey Bee and Bumble Bee Microbiomes and Impacts on Host Health", "abstract": "Author(s) Rothman, Jason Advisor(s) McFrederick, Quinn Abstract: Bees are important insect pollinators in both agricultural and natural settings who may encounter toxicants while foraging on plants growing in contaminated soils. How these chemicals affect the bee microbiome, which confers many health benefits to the host, is an important but understudied aspect of pollinator health. Through a combination of 16S rRNA gene sequencing, LC MS metabolomics, ICP OES spectroscopy, quantitative PCR, culturing, microbiome manipulation, and whole organism exposure studies, I attempt to establish the effects that toxicants have on social bees and their associated microbes. 
The microbiome of animals has been shown to reduce metalloid toxicity, so I exposed microbiome inoculated or uninoculated bumble bees to 0.75 mg/L selenate and found that inoculated bees survive longer when compared to uninoculated bees. I also showed that selenate exposure altered the composition of the bumble bee microbiome and that the growth of two major gut symbionts Snodgrassella alvi and Lactobacillus bombicola was unaffected by this exposure. Due to the pervasiveness of environmental pollution in bee habitats, I exposed bumble bees to cadmium, copper, selenate, imidacloprid, and hydrogen peroxide and found that each of these compounds can be lethal to bees. I also showed that most of these chemicals can affect the diversity of the bee microbiome and that there is interstrain variation in toxicant tolerance genes in the major bee symbionts Snodgrassella alvi and Gilliamella apicola. As exposure to cadmium or selenate has been shown to affect animal associated microbes, I assayed the effects of these chemicals on honey bees and observed shifts in the bee microbiome at multiple timepoints. I also found that exposure to selenate and cadmium changes the overall bee metabolome and may cause oxidative damage to proteins and lipids. Lastly, I found that bee associated bacteria can bioaccumulate cadmium but generally not selenate. In this dissertation I demonstrated that bee associated bacteria are generally robust to toxicant exposure, but that chemicals can alter the composition of both bumble bee and honey bee microbiomes. I also show that toxicants affect bee metabolism, and that the bee microbiome plays an important role in maintaining host health when challenged with toxicants.", "venue": "", "year": 2019.0, "author_names": ["Jason A Rothman"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 147705094, "title": "Infection outcomes are robust to thermal variability in a bumble bee host parasite system.", "abstract": "Climate change related increases in thermal variability and rapid temperature shifts will affect organisms in multiple ways, including imposing physiological stress. Furthermore, the effects of temperature may alter the outcome of biotic interactions, including those with pathogens and parasites. In the context of host parasite interactions, the beneficial acclimation hypothesis posits that shifts away from acclimation or optimum performance temperatures will impose physiological stress on hosts and will affect their ability to resist parasite infection. We investigated the beneficial acclimation hypothesis in a bumble bee trypanosome parasite system. Freshly emerged adult worker bumble bees, Bombus impatiens, were acclimated to 21 degC, 26 degC or 29 degC. They were subsequently experimentally exposed to the parasite, Crithidia bombi, and placed in a performance temperature that was the same as the acclimation temperature (constant) or one of the other temperatures (mismatched) Prevalence of parasite transmission was checked four and six days post parasite exposure, and infection intensity in the gut was quantified at eight days post exposure. Parasite strain, host colony, and host size had significant effects on transmission prevalence and infection load. However, neither transmission nor infection intensity were significantly different between constant and mismatched thermal regimes. 
Furthermore, acclimation temperature, performance temperature, and the interaction of acclimation and performance temperatures had no significant effects on infection outcomes. These results, counter to predictions of the beneficial acclimation hypothesis, suggest that infection outcomes in this host parasite system are robust to thermal variation within typically experienced ranges. This could be a consequence of adaptation to commonly experienced natural thermal regimes or a result of individual and colony level heterothermy in bumble bees. However, thermal variability may still have a detrimental effect on more sensitive stages or species, or when extreme climatic events push temperatures outside of the normally experienced range.", "venue": "Integrative and comparative biology", "year": 2019.0, "author_names": ["Kerrigan B Tobin", "Austin C Calhoun", "Madeline F Hallahan", "Abraham Martinez", "Ben M Sadd"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 204842493, "title": "Influence of pile length and body size on rates of heat loss in the bumble bee, Bombus vosnesenskii, The", "abstract": "Individual bumble bee species can exist at a wide ranges of latitudes and altitudes, exposing themselves to extreme temperature ranges. As heterotherms, bees are required to deal with a variety of thermal demands. They must be able to deal with extreme ambient temperatures, but their production of metabolic heat required for flight adds more complex demands. They need to be able to warm themselves quickly to ensure mobility and retain that heat to improve efficiency, but sometimes need to facilitate heat loss to avoid overheating. Thermal biology becomes especially important during the intermittent flights required for foraging. As soon as a bee lands on a flower it begins cooling to a point that will make it unable to fly. To continue flying between flowers bees will have to maintain an elevated body temperature while feeding or rewarm themselves after feeding. Both of these strategies have been shown to be significant energetic costs for heterothermic insects, and these costs have measurable consequences on the foraging decisions bees make (Nieh et al. 2006, Waddington 1990, Heinrich 1972) When facing thermal challenges bees will have to respond either by shifts in morphology or behavior.", "venue": "", "year": 2019.0, "author_names": ["Zachary D Parsons"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "sarracenia in vitro", "session_id": 594098829459950, "user_id": 204915041096142, "candidates": [{"corpus_id": 91283410, "title": "In vitro Multiplication of the Pitcher Plant Sarracenia Purpurea", "abstract": "", "venue": "", "year": 2018.0, "author_names": ["Ileana Miclea", "Rita Bernat"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 89076395, "title": "Germination In Vitro, Micropropagation, and Cryogenic Storage for Three Rare Pitcher Plants: Sarracenia oreophila (Kearney) Wherry (Federally Endangered) S. leucophylla Raf. and S. purpurea spp. venosa (Raf. Wherry", "abstract": "The genus Sarracenia forms a group of carnivorous pitcher plants native to North America. Habitat destruction and overcollection have caused pitcher plants to become rare, including U.S. federally endangered S. oreophila as well as S. leucophylla and S. purpurea spp. venosa (Raf. Wherry, both listed as endangered in several states. 
Protocols for in vitro germination, sustainable shoot micropropagation, shoot establish ment in soil, and seed cryopreservation are presented. Six min sulfuric acid scarification treatments coupled with appropriate tissue culture media resulted in germination in vitro within 3 weeks, often reaching greater than 50% Best germination for S. leucophylla and S. purpurea occurred on one third strength Murashige and Skoog (MS) salts, whereas S. oreophila germinated best on one sixth strength MS salts. Adjustment of pH to 4.5 to simulate a bog environment further increased germination for S. leucophylla .S hoot multiplication occurred at optimal levels when explants were placed on media in the presenceofacytokininwithoutauxinwithgreatestmultiplicationon6 benzylaminopurine (BAP) or trans zeatin and best shoot quality on trans zeatin. Plant establishment in soil required both an in vitro rooting treatment and use of shoot clusters resulting in greater than 80%survivalin soil. Seedcryopreservationtests withall three species suggeststorage in liquid N2 followed by in vitro micropropagation and plant establishment can be used to preserve material long term.", "venue": "", "year": 2012.0, "author_names": ["Cameron Northcutt", "D Davies", "Ron Gagliardo", "Kylie Bucalo", "Ron O Determann", "Jennifer M Cruse-Sanders", "G S Pullman"], "n_citations": 10, "n_key_citations": 0, "score": 0}, {"corpus_id": 233461990, "title": "Sarracenia alata (Alph.Wood) Alph.Wood Microcuttings as a Source of Volatiles Potentially Responsible for Insects' Respond", "abstract": "Rare carnivorous plants representing the genus Sarracenia are perceived as very interesting to scientists involved in various fields of botany, ethnobotany, entomology, phytochemistry and others. Such high interest is caused mainly by the unique capacity of Sarracenia spp. to attract insects. Therefore, an attempt to develop a protocol for micropropagation of the Sarracenia alata (Alph.Wood) Alph.Wood, commonly named yellow trumpets, and to identify the specific chemical composition of volatile compounds of this plant in vitro and ex vivo was undertaken. Thus, the chemical volatile compounds excreted by the studied plant to attract insects were recognized with the application of the headspace solid phase microextraction (HS SPME) coupled with the GC MS technique. As the major volatile compounds (Z) 3 hexen 1 ol (16.48% 0.31) (E) 3 hexen 1 ol acetate (19.99% 0.01) and b caryophyllene (11.30% 0.27) were identified. Further, both the chemical assumed to be responsible for attracting insects, i.e. pyridine (3.10% 0.07) and whole plants were used in in vivo bioassays with two insect species, namely Drosophila hydei and Acyrthosiphon pisum. 
The obtained results bring a new perspective on the possibilities of cultivating rare carnivorous plants in vitro since they are regarded as a valuable source of bioactive volatile compounds, as including ones with repellent or attractant activity.", "venue": "Molecules", "year": 2021.0, "author_names": ["Jacek Lyczko", "J Twardowski", "Bartlomiej Skalny", "Renata Galek", "Antoni Szumny", "Iwona Gruss", "Dariusz Piesik", "Sebastian Sendel"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 84045859, "title": "Several Factors Affecting In Vitro Culture and in Vivo Plant Growth of Sarracenia purpurea", "abstract": "", "venue": "", "year": 2006.0, "author_names": ["Cheol Hee Lee", "Ju Kwang Hwang"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 3911312, "title": "In Vitro Characterization of a Nineteenth Century Therapy for Smallpox", "abstract": "In the nineteenth century, smallpox ravaged through the United States and Canada. At this time, a botanical preparation, derived from the carnivorous plant Sarracenia purpurea, was proclaimed as being a successful therapy for smallpox infections. The work described characterizes the antipoxvirus activity associated with this botanical extract against vaccinia virus, monkeypox virus and variola virus, the causative agent of smallpox. Our work demonstrates the in vitro characterization of Sarracenia purpurea as the first effective inhibitor of poxvirus replication at the level of early viral transcription. With the renewed threat of poxvirus related infections, our results indicate Sarracenia purpurea may act as another defensive measure against Orthopoxvirus infections.", "venue": "PloS one", "year": 2012.0, "author_names": ["William D Arndt", "Chandra Mitnik", "Karen L Denzler", "Stacy D White", "Robert Waters", "Bertram L Jacobs", "Yvan Rochon", "Victoria A Olson", "Inger K Damon", "Jeffrey Langland"], "n_citations": 15, "n_key_citations": 0, "score": 0}, {"corpus_id": 211223361, "title": "The role of dipteran larvae in controlling Euglena concentrations in the pitchers of Sarracenia purpurea L.", "abstract": "Field studies suggest that in phytotelm communities of Sarracenia purpurea, the absence of pitcher plant specific larvae of the mosquito Wyeomyia smithii and the chironomid midge Metriocnemus knabi resulted in higher concentrations of Euglena. Conversely, a pitcher inquiline in which these larvae were present had lower concentrations of Euglena and algae, in general. In vitro experiments demonstrated that the Euglenaconcentration with the absence of larvae, rose from near zero to 10 cells/mL after 14 day and by day 28, it exceeded 3 x 10 cells/mL. When only M. knabi larvae were present, the Euglena concentration rose to 7 x 10 cells/mL by day 14 and to 2 x 10 cells/mL by day 28. When the larvae of the mosquito W. smithii were present, either by themselves or commensally with M. knabi, the Euglena concentration remained below 10 cells/mL. Microscopic inspection of both M. knabi and W. smithii larvae revealed that, in the presence of Euglena, their guts turned green. We concluded that predation by W. smithii and M. knabi larvae suppress the growth of Euglena. These larvae prevent algae from overwhelming the pitcher plant phytotelm and thus may play a critical role in maintaining the integrity of S. 
purpurea's inquiline community.", "venue": "", "year": 2016.0, "author_names": ["Herald T Douglas", "Raymond L Petersen", "Mintesinot Jiru", "Tatiana Roth"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 45651332, "title": "Antidiabetic compounds from Sarracenia purpurea used traditionally by the Eeyou Istchee Cree First Nation.", "abstract": "Through ethnobotanical surveys, the CIHR Team in Aboriginal Antidiabetic Medicines identified 17 boreal forest plants stemming from the pharmacopeia of the Cree First Nations of Eeyou Istchee (James Bay region of Northern Quebec) that were used traditionally against diabetes symptoms. The leaves of Sarracenia purpurea (pitcher plant) one of the identified Cree plants, exhibited marked antidiabetic activity in vitro by stimulating glucose uptake in C2C12 mouse muscle cells and by reducing glucose production in H4IIE rat liver cells. Fractionation guided by glucose uptake in C2C12 cells resulted in the isolation of 11 compounds from this plant extract, including a new phenolic glycoside, flavonoid glycosides, and iridoids. Compounds 6 (isorhamnetin 3 O glucoside) 8 [kaempferol 3 O (6' caffeoylglucoside] and 11 (quercetin 3 O galactoside) potentiated glucose uptake in vitro, which suggests they represent active principles of S. purpurea (EC(50) values of 18.5, 13.8, and 60.5 mM, respectively) This is the first report of potentiation of glucose uptake by compounds 6 and 8, while compound 11 (isolated from Vaccinium vitis) was previously shown to enhance glucose uptake. Treatment of H4IIE liver cells with the new compound 1, 6' O caffeoylgoodyeroside, decreased hepatic glucose production by reducing glucose 6 phosphatase enzymatic activity (IC(50) 13.6 mM) which would contribute to lowering glycemia and to the antidiabetic potential of S. purpurea.", "venue": "Journal of natural products", "year": 2012.0, "author_names": ["Asim Muhammad", "Jose Antonio Guerrero-Analco", "Louis C Martineau", "Lina Musallam", "Padma Madiraju", "Abir Nachar", "Ammar Saleem", "Pierre S Haddad", "John T Arnason"], "n_citations": 24, "n_key_citations": 3, "score": 0}, {"corpus_id": 6443310, "title": "Characterizing the cytoprotective activity of Sarracenia purpurea L. a medicinal plant that inhibits glucotoxicity in PC12 cells", "abstract": "BackgroundThe purple pitcher plant, Sarracenia purpurea L. is a widely distributed species in North America with a history of use as both a marketed pain therapy and a traditional medicine in many aboriginal communities. Among the Cree of Eeyou Istchee in northern Quebec, the plant is employed to treat symptoms of diabetes and the leaf extract demonstrates multiple anti diabetic activities including cytoprotection in an in vitro model of diabetic neuropathy. The current study aimed to further investigate this activity by identifying the plant parts and secondary metabolites that contribute to these cytoprotective effects.MethodsEthanolic extracts of S. purpurea leaves and roots were separately administered to PC12 cells exposed to glucose toxicity with subsequent assessment by two cell viability assays. Assay guided fractionation of the active extract and fractions was then conducted to identify active principles. Using high pressure liquid chromatography together with mass spectrometry, the presence of identified actives in both leaf and root extracts were determined.ResultsThe leaf extract, but not that of the root, prevented glucose mediated cell loss in a concentration dependent manner. 
Several fractions elicited protective effects, indicative of multiple active metabolites, and, following subfractionation of the polar fraction, hyperoside (quercetin 3 O galactoside) and morroniside were isolated as active constituents. Phytochemical analysis confirmed the presence of hyperoside in the leaf but not root extract and, although morroniside was detected in both organs, its concentration was seven times higher in the leaf.ConclusionOur results not only support further study into the therapeutic potential and safety of S. purpurea as an alternative and complementary treatment for diabetic complications associated with glucose toxicity but also identify active principles that can be used for purposes of standardization and quality control.", "venue": "BMC Complementary and Alternative Medicine", "year": 2012.0, "author_names": ["Cory S Harris", "Muhammad Asim", "Ammar Saleem", "Pierre S Haddad", "John T Arnason", "Steffany A L Bennett"], "n_citations": 14, "n_key_citations": 2, "score": 0}, {"corpus_id": 56372045, "title": "The development of the pitcher plant Sarracenia purpurea into a potentially valuable recombinant protein production system.", "abstract": "The unique inducible system of protein secretion by the carnivorous pitcher plant Sarracenia purpurea may be an ideal system for recombinant protein farming. S. purpurea is relatively uncommon and difficult to grow in vitro, so it has not been explored as a potential source of recombinant proteins. However, it naturally secretes large amounts of proteins into a liquid found in the leaf pitchers, so it may be an ideal way to collect recombinant proteins in leaf pitchers. Here, the advantages of transgenic S. purpurea systems over traditional transgenic plant systems for the production of recombinant pharmaceutical proteins are explored, and the steps necessary to produce such a system are discussed.", "venue": "", "year": 2009.0, "author_names": ["Bruce A Rosa", "Lada Malek", "Wensheng Qin"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 30242755, "title": "Occurrence and growth characteristics of Escherichia coli and enterococci within the accumulated fluid of the northern pitcher plant (Sarracenia purpurea L.", "abstract": "Sarracenia purpurea L. a carnivorous bog plant (also known as the pitcher plant) represents an excellent model of a well defined, self contained ecosystem; the individual pitchers of the plant serve as a microhabitat for a variety of micro and macro organisms. Previously, fecal indicator bacteria (Escherichia coli and enterococci) were shown as incidental contaminants in pitcher fluid; however, whether their occurrence in pitcher fluid is incidental or common has not been established. The purpose of this study was to investigate the occurrence, distribution, and growth potential of E. coli and enterococci in pitcher plant fluid from a protected bog in northwest Indiana. Escherichia coli and enterococci were recovered in pitcher fluids (n=43 plants) with mean densities (log CFU mL 1) of 1.28+ 0.23 and 1.97+ 0.27, respectively. In vitro experiments showed that E. coli growth in fluid not containing insects or indigenous organisms was directly proportional to the fluid concentration (growth was 10 fold in 24 h in 100% fluid) however, in the presence of other indigenous organisms, E. coli and enterococci were only sustained for 5 days at 26 degrees C. 
Pulsed field gel electrophoresis (PFGE) analysis showed that the plant Enterococcus faecalis isolates were genetically distinct from the human isolates; identical PFGE patterns were observed among plant isolates that fell into one of six clonal groups. These findings suggest that (i) E. coli and enterococci occurrence in pitcher plants is rather common in the bog studied, although their originating source is unclear, and (ii) the pitcher fluid contains adequate nutrients, especially carbon and energy sources, to promote the growth of indicator bacteria; however, under natural conditions, the biotic factors (e.g. competition for nutrients) may restrict their growth.", "venue": "Canadian journal of microbiology", "year": 2005.0, "author_names": ["Richard L Whitman", "Stacey E Byers", "Dawn A Shively", "Donna M Ferguson", "Muruleedhara N Byappanahalli"], "n_citations": 45, "n_key_citations": 4, "score": 1}]} -{"query": "Dualities between entropy functions and network codes,", "session_id": 6541148273263642, "user_id": 5137389961751915, "candidates": [{"corpus_id": 157591, "title": "Dualities Between Entropy Functions and Network Codes", "abstract": "In communications networks, the capacity region of multisource network coding is given in terms of the set of entropy functions Gamma* More broadly, determination of Gamma* would have an impact on converse theorems for multi terminal problems in information theory. This paper provides several new dualities between entropy functions and network codes. Given a function g ges 0 defined on all subsets of N random variables, we provide a construction for a network multicast problem which is ldquosolvablerdquo if and only if g is the entropy function of a set of quasi uniform random variables. The underlying network topology is fixed and the multicast problem depends on g only through link capacities and source rates. A corresponding duality is developed for linear network codes, where the constructed multicast problem is linearly solvable if and only if g is linear group characterizable. Relaxing the requirement that the domain of g be subsets of random variables, we obtain a similar duality between polymatroids and the linear programming bound. These duality results provide an alternative proof of the insufficiency of linear (and abelian) network codes, and demonstrate the utility of non Shannon inequalities to tighten outer bounds on network coding capacity regions.", "venue": "IEEE Transactions on Information Theory", "year": 2008.0, "author_names": ["Terence H Chan", "Alex J Grant"], "n_citations": 68, "n_key_citations": 2, "score": 1}, {"corpus_id": 122946915, "title": "Dualities between Entropy Functions and Network Codes", "abstract": "This paper provides new dualities between entropy functions and network codes. These duality results give an alternative proof of the insufficiency of linear (and abelian) network codes, and demonstrate the utility of non Shannon inequalities to tighten outer bounds on network coding capacity regions.", "venue": "", "year": 2008.0, "author_names": ["Terence H Chan", "Alex J Grant"], "n_citations": 28, "n_key_citations": 3, "score": 0}, {"corpus_id": 231809436, "title": "High Entropy Dual Functions and Locally Decodable Codes (Extended Abstract)", "abstract": "Locally decodable codes (LDCs) allow any single encoded message symbol to be retrieved from a codeword with good probability by reading only a tiny number of codeword symbols, even if the codeword is partially corrupted. 
LDCs have surprisingly many applications in computer science and mathematics (we refer to [13, 10] for extensive surveys) But despite their ubiquity, they are poorly understood. Of particular interest is the tradeoff between the codeword length N as a function of message length k when the query complexity the number of probed codeword symbols and alphabet size are constant. The Hadamard code is a 2 query LDC of length N 2O(k) and this length is optimal in the 2 query regime [11] For q 3, near exponential gaps persist between the best known upper and lower bounds. The family of Reed Muller codes, which generalize the Hadamard code, were for a long time the best known examples, giving q query LDCs of length exp(O(k1/(q 1) until breakthrough constructions of matching vector LDCs of Yekhanin and Efremenko [12, 6] In contrast with other combinatorial objects such as expander graphs, the probabilistic method has so far not been successfully used to beat the best explicit LDC constructions. In [3] a probabilistic framework was given that could in principle yield best possible LDCs, albeit non constructively. A special instance of this framework connects LDCs with a probabilistic version of Szemeredi's theorem. The setup for this is as follows: For a finite abelian group G of size N |G| let D G be a random subset where each element is present with probability r independently of all others. For k 3 and e (0, 1) let E be the event that every subset A G of size |A| e|G| contains a proper k term arithmetic progression with common difference in D. For fixed e 0 and sufficiently large N it is an open problem to determine the smallest value of r denoted rk such that Pr[E] 1 2 In [3] it is shown that there exist k query LDCs of message length O(rkN) and codeword length O(N) As such, Szemeredi's theorem with random differences, in particular lower bounds on rk, can be used to show the existence of LDCs. Conversely, this connection indirectly implies the best known upper bounds on rk for all k 3 [8, 4] However, a conjecture from [9] states that over ZN we have rk Ok(N logN) for all k, which would be best possible. Truth of this conjecture would imply that over this group, Szemeredi's theorem with random differences cannot give LDCs better than the Hadamard code. For finite fields, Altman [1] showed that this is false. In particular, over Fp for p odd, he proved that r3 O(p n n2) generally, rk O(p n nk 1) holds when p k 1 [2] In turn, these bounds are conjectured to be optimal for the finite field setting, which would imply that over finite fields, Szemeredi's theorem with random differences cannot give LDCs better than Reed Muller codes. The finite field conjecture is motivated mainly by the possibility that so called dual functions can be approximated well by polynomial phases, functions of the form e2piP (x)/p where P is a multivariate polynomial over Fp. We show that this is false. Using Yekhanin's matching vector code construction, we give dual functions of order k over Fp that cannot be approximated in L distance by polynomial phases of degree k 1. 
This answers in the negative a natural finite field analog of a problem of Frantzikinakis over N [7, Problem 1] 2012 ACM Subject Classification Theory of computation", "venue": "ITCS", "year": 2021.0, "author_names": ["Jop Briet", "Farrokh Labib"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 204787809, "title": "Developing a dual entropy transinformation criterion for hydrometric network optimization based on information theory and copulas.", "abstract": "Hydrometric information collected by monitoring networks is fundamental for effective management of water resources. In recent years, entropy based multi objective criterions have been developed for the evaluation and optimization of hydrometric networks, and copula functions have been frequently used in hydrological frequency analysis to model multivariate dependence structures. This study developed a dual entropy transinformation criterion (DETC) to identify and prioritize significant stations and generate candidate network optimization solutions. The criterion integrated an entropy index computed with mathematical floor function and a transinformation index computed with copula entropy through a tradeoff weight. The best fitted copula models were selected from three Archimedean copula families, i.e. Gumbel, Frank and Clayton. DETC was applied to a streamflow monitoring network in the Fenhe River basin and two rainfall monitoring networks in the Beijing Municipality and the Taihu Lake basin, which covers different network classification, network scale, and climate type. DETC was assessed by the commonly used dual entropy multiobjective optimization (DEMO) criterion and was compared with a minimum transinformation (MinT) based criterion for network optimization. Results showed that DETC could effectively prioritize stations according to their significance and incorporate decision preference on information content and information redundancy. Comparison of the isohyet maps of two rainstorm events between DETC and MinT showed that DETC had advantage of restoring the spatial distribution of precipitation.", "venue": "Environmental research", "year": 2019.0, "author_names": ["Heshu Li", "Dong Wang", "Vijay Pratap Singh", "Jianfeng Wu", "Jichun Wu", "Ruimin He", "Ying Zou", "Jiufu Liu", "Jianyun Zhang"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 199127070, "title": "Deep supervised hashing using symmetric relative entropy", "abstract": "Abstract By virtue of their simplicity and efficiency, hashing algorithms have achieved significant success on large scale approximate nearest neighbor search. Recently, many deep neural network based hashing methods have been proposed to improve the search accuracy by simultaneously learning both the feature representation and the binary hash functions. Most deep hashing methods depend on supervised semantic label information for preserving the distance or similarity between local structures, which unfortunately ignores the global distribution of the learned hash codes. We propose a novel deep supervised hashing method that aims to minimize the information loss generated during the embedding process. Specifically, the information loss is measured by the Jensen Shannon divergence to ensure that compact hash codes have a similar distribution with those from the original images. Experimental results show that our method outperforms current state of the art approaches on two benchmark datasets.", "venue": "Pattern Recognit. 
Lett.", "year": 2019.0, "author_names": ["Xueni Zhang", "Lei Zhou", "Xiao Bai", "Xiushu Luan", "Jie Luo", "Edwin R Hancock"], "n_citations": 6, "n_key_citations": 1, "score": 0}, {"corpus_id": 229363792, "title": "Deep Unsupervised Image Hashing by Maximizing Bit Entropy", "abstract": "Unsupervised hashing is important for indexing huge image or video collections without having expensive annotations available. Hashing aims to learn short binary codes for compact storage and efficient semantic retrieval. We propose an unsupervised deep hashing layer called Bi half Net that maximizes entropy of the binary codes. Entropy is maximal when both possible values of the bit are uniformly (half half) distributed. To maximize bit entropy, we do not add a term to the loss function as this is difficult to optimize and tune. Instead, we design a new parameter free network layer to explicitly force continuous image features to approximate the optimal half half bit distribution. This layer is shown to minimize a penalized term of the Wasserstein distance between the learned continuous image features and the optimal half half bit distribution. Experimental results on the image datasets Flickr25k, Nus wide, Cifar 10, Mscoco, Mnist and the video datasets Ucf 101 and Hmdb 51 show that our approach leads to compact codes and compares favorably to the current stateof the art. Introduction Semantically similar images or videos can be found by comparing their output features in the last layer of a deep network. Such features are typically around 1,000 continuous floating point values (He et al. 2016) which is already too slow and large for moderately sized datasets of a few million samples. Speed and storage are greatly improved by replacing the continuous features with just a small number of bits. Unsupervised hashing aims to learn compact binary codes that preserves semantic similarity without making use of any annotated label supervision and is thus of great practical importance for indexing huge visual collections. In this paper, as illustrated in Fig. 1, we see the transition from a continuous variable to a discrete binary variable as a lossy communication channel. The capacity of a hash bit as measured by the entropy is maximized when it is half half distributed: Half of the images are encoded with 1 and the other half of the images is encoded with +1. We minimize the information loss in the hash channel by forcing the continuous variable to be half half distributed. Other methods have optimized entropy by adding an additional term to the Copyright c (c) 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org) All rights reserved. Distribution over images Hash channel 1", "venue": "AAAI", "year": 2021.0, "author_names": ["Yun-qiang Li", "Jan C van Gemert"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 215819744, "title": "LINC01410/miR 23c/CHD7 functions as a ceRNA network to affect the prognosis of patients with endometrial cancer and strengthen the malignant properties of endometrial cancer cells", "abstract": "In previous studies, long non coding RNA LINC01410 (LINC01410) has been found to promote cells proliferation and invasion in colon and gastric cancers. However, the function of LINC01410 in endometrial cancer (EC) is still elusive. 
The expression patterns of LINC01410/miR 23c/Chromodomain Helicase DNA Binding Protein 7 (CHD7) in EC tissues and the prognosis of patients with different expression of LINC01410/miR 23c/CHD7 were determined by consulting TCGA database. EC patients with complete clinical data were applied for clinicopathological correlation analysis. The biological characteristics of EC cells were analyzed with the support of CCK 8 and transwell assays. CHD7 expression was assessed by qRT PCR and western blot assays. Targeted associations between LINC01410 and miR 23c, as well as miR 23c and CHD7 were speculated by prediction website and verified by dual luciferase assay. Rescue assays were performed to explore the interrelation among LINC01410, miR 23c and CHD7. Our data illustrated that LINC01410 high expression was presented in EC tissues and was positively related to the poor prognosis of patients in EC, as well as the malignant behaviors of EC cells. Through bioinformatics analysis, we surmised that LINC01410/miR 23c/CHD7 may play a role through the formation of competing endogenous RNA (ceRNA) mechanism. CHD7 expression was positively regulated by LINC01410, and inversely controlled by miR 23c. Furthermore, the promoting effects of miR 23c inhibitor or CHD7 upregulation on EC cell growth and aggressiveness were attenuated by LINC01410 silencing. Our results indicated that high expression of LINC01410 promoted EC cell progression through modulating miR 23c/CHD7 axis, providing a new direction for revealing the molecular mechanism of EC.", "venue": "Molecular and Cellular Biochemistry", "year": 2020.0, "author_names": ["Ming Lu", "Ning Ding", "Shichao Zhuang", "Yujiao Li"], "n_citations": 11, "n_key_citations": 1, "score": 0}, {"corpus_id": 214728357, "title": "A Comparison of Metric Learning Loss Functions for End To End Speaker Verification", "abstract": "Despite the growing popularity of metric learning approaches, very little work has attempted to perform a fair comparison of these techniques for speaker verification. We try to fill this gap and compare several metric learning loss functions in a systematic manner on the VoxCeleb dataset. The first family of loss functions is derived from the cross entropy loss (usually used for supervised classification) and includes the congenerous cosine loss, the additive angular margin loss, and the center loss. The second family of loss functions focuses on the similarity between training samples and includes the contrastive loss and the triplet loss. We show that the additive angular margin loss function outperforms all other loss functions in the study, while learning more robust representations. Based on a combination of SincNet trainable features and the x vector architecture, the network used in this paper brings us a step closer to a really end to end speaker verification system, when combined with the additive angular margin loss, while still being competitive with the x vector baseline. 
In the spirit of reproducible research, we also release open source Python code for reproducing our results, and share pretrained PyTorch models on torch.hub that can be used either directly or after fine tuning.", "venue": "SLSP", "year": 2020.0, "author_names": ["Juan Manuel Coria", "Herve Bredin", "Sahar Ghannay", "Sophie Rosset"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 54442977, "title": "Optimized Binary Hashing Codes Generated by Siamese Neural Networks for Image Retrieval", "abstract": "In this paper, we use a Siamese Neural Network based hashing method for generating binary codes with certain properties. The training architecture takes a pair of images as input. The loss function trains the network so that similar images are mapped to similar binary codes and dissimilar images to different binary codes. We add additional constraints in form of loss functions that enforce certain properties on the binary codes. The main motivation of incorporating the first constraint is maximization of entropy by generating binary codes with the same number of 1s and Os. The second constraint minimizes the mutual information between binary codes by generating orthogonal binary codes for dissimilar images. For this, we introduce orthogonality criterion for binary codes consisting of the binary values 0 and 1. Furthermore, we evaluate the properties such as mutual information and entropy of the binary codes generated with the additional constraints. We also analyze the influence of different bit sizes on those properties. The retrieval performance is evaluated by measuring Mean Average Precision (MAP) values and the results are compared with other state of the art approaches.", "venue": "2018 26th European Signal Processing Conference (EUSIPCO)", "year": 2018.0, "author_names": ["Abin Jose", "Timo Horstmann", "Jens-Rainer Ohm"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 131784844, "title": "Entropy based groundwater monitoring network design considering spatial distribution of annual recharge", "abstract": "Abstract This study explores the inclusion of a groundwater recharge based design objective and the impact it has on the design of optimum groundwater monitoring networks. The study was conducted in the Hamilton, Halton, and Credit Valley regions of Ontario, Canada, in which the existing Ontario Provincial Groundwater Monitoring Network was augmented with additional monitoring wells. The Dual Entropy Multiobjective Optimization (DEMO) model was used in these analyses. The value of using this design objective is rooted in the information contained within the estimated recharge. Recharge requires knowledge of climate, geomorphology, and geology of the area, thus using this objective function can help account for these physical characteristics. Two sources of groundwater recharge data were examined and compared, the first was calculated using the Precipitation Runoff Modeling System (PRMS) and the second was an aggregation of recharge found using both the PRMS and Hydrological Simulation Program Fortran (HSP F) The entropy functions are used to identify optimal trade offs between the maximum information content and the minimum shared information between the monitoring wells. The recharge objective will help to quantify hydrological characteristics of the vadose zone, and thus provide more information to the optimization algorithm. 
Results show that by including recharge as a design objective, the spatial coverage of the monitoring network can be improved. The study also highlights the flexibility of DEMO and its ability to incorporate additional design objectives such as the groundwater recharge.", "venue": "", "year": 2016.0, "author_names": ["James M Leach", "Paulin Coulibaly", "Yiping Guo"], "n_citations": 22, "n_key_citations": 1, "score": 0}]} -{"query": "Efficient Road Lane Marking Detection with Deep Learning", "session_id": 2452588110256745, "user_id": 1617042846374881, "candidates": [{"corpus_id": 52188205, "title": "Efficient Road Lane Marking Detection with Deep Learning", "abstract": "Lane mark detection is an important element in the road scene analysis for Advanced Driver Assistant System (ADAS) Limited by the onboard computing power, it is still a challenge to reduce system complexity and maintain high accuracy at the same time. In this paper, we propose a Lane Marking Detector (LMD) using deep convolutional neural network to extract robust lane marking features. To improve its performance with a target of lower complexity, the dilated convolution is adopted. A shallower and thinner structure is designed to decrease the computational cost. Moreover, we also design post processing algorithms to construct 3rd oder polynomial models to fit into the curved lanes. Our system shows promising results on the captured road scenes.", "venue": "2018 IEEE 23rd International Conference on Digital Signal Processing (DSP)", "year": 2018.0, "author_names": ["Ping-Rong Chen", "Shao-Yuan Lo", "Hsueh-Ming Hang", "Sheng-Wei Chan", "Jing-Jhih Lin"], "n_citations": 31, "n_key_citations": 2, "score": 1}, {"corpus_id": 236237189, "title": "An efficient encode decode deep learning network for lane markings instant segmentation", "abstract": "Nowadays, advanced driver assistance systems (ADAS) has been incorporated with a distinct type of progressive and essential features. One of the most preliminary and significant features of the ADAS is lane marking detection, which permits the vehicle to keep in a particular road lane itself. It has been detected by utilizing high specialized, handcrafted features and distinct post processing approaches lead to less accurate, less efficient, and high computational framework under different environmental conditions. Hence, this research proposed a simple encode decode deep learning approach under distinguishing environmental effects like different daytime, multiple lanes, different traffic condition, good and medium weather conditions for detecting the lane markings more accurately and efficiently. The proposed model is emphasized on the simple encode decode Seg Net framework incorporated with VGG16 architecture that has been trained by using the inequity and cross entropy losses to obtain more accurate instant segmentation result of lane markings. The framework has been trained and tested on a vast public dataset named Tusimple, which includes around 3.6K training and 2.7 k testing image frames of different environmental conditions. The model has noted the highest accuracy, 96.61% F1 score 96.34% precision 98.91% and recall 93.89% Also, it has also obtained the lowest 3.125% false positive and 1.259% false negative value, which transcended some of the previous researches. 
It is expected to assist significantly in the field of lane markings detection applying deep neural networks.", "venue": "", "year": 2021.0, "author_names": ["Abdullah-Al Mamun", "Poh Ping Em", "Jakir Hossen"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 218688051, "title": "Intensity Thresholding and Deep Learning Based Lane Marking Extraction and Lane Width Estimation from Mobile Light Detection and Ranging (LiDAR) Point Clouds", "abstract": "Lane markings are one of the essential elements of road information, which is useful for a wide range of transportation applications. Several studies have been conducted to extract lane markings through intensity thresholding of Light Detection and Ranging (LiDAR) point clouds acquired by mobile mapping systems (MMS) This paper proposes an intensity thresholding strategy using unsupervised intensity normalization and a deep learning strategy using automatically labeled training data for lane marking extraction. For comparative evaluation, original intensity thresholding and deep learning using manually established labels strategies are also implemented. A pavement surface based assessment of lane marking extraction by the four strategies is conducted in asphalt and concrete pavement areas covered by MMS equipped with multiple LiDAR scanners. Additionally, the extracted lane markings are used for lane width estimation and reporting lane marking gaps along various highways. The normalized intensity thresholding leads to a better lane marking extraction with an F1 score of 78.9% in comparison to the original intensity thresholding with an F1 score of 72.3% On the other hand, the deep learning model trained with automatically generated labels achieves a higher F1 score of 85.9% than the one trained on manually established labels with an F1 score of 75.1% In concrete pavement area, the normalized intensity thresholding and both deep learning strategies obtain better lane marking extraction (i.e. lane markings along longer segments of the highway have been extracted) than the original intensity thresholding approach. For the lane width results, more estimates are observed, especially in areas with poor edge lane marking, using the two deep learning models when compared with the intensity thresholding strategies due to the higher recall rates for the former. The outcome of the proposed strategies is used to develop a framework for reporting lane marking gap regions, which can be subsequently visualized in RGB imagery to identify their cause.", "venue": "Remote. Sens.", "year": 2020.0, "author_names": ["Yi-Ting Cheng", "Ankit Patel", "Chenglu Wen", "Darcy M Bullock", "Ayman F Habib"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 224981809, "title": "Gradient Based Edge Effects on Lane Marking Detection using a Deep Learning Based Approach", "abstract": "Lane detection is part of the advanced driver assistance system (ADAS) equipped in intelligent vehicles. The system provides the driver with significant geometric information of the road ahead. Numerous deep learning techniques have been employed in lane detection because of the simplicity, ease, and efficiency of these techniques in learning discriminative features from RGB (red, green, and blue) images. However, existing works have rarely considered detecting lane markings during bad weather conditions, which could reduce lane detection performance. 
Hence, this paper proposed a Fully Convolutional Network (FCN) model with RGB and Canny edge detection used as the model's spatial input. The proposed platform was developed using two scenarios: FCN RGB edge and FCN edge. The model development was divided into three stages, namely data acquisition, platform development, and benchmarking against existing methods and data. Both scenarios using the proposed method yielded a 4% improvement compared to the original FCN RGB images (i.e. the previous method) The Canny edge detection method successfully extracted necessary information from the images and neglected the water drops in rainy conditions by treating them as noise. In summary, the proposed method has the potential to boost the performance of the ADAS system in detecting lane markings in rainy conditions.", "venue": "", "year": 2020.0, "author_names": ["Noor Jannah Zakaria", "Mohd Ibrahim Shapiai", "Hilman Fauzi", "Hossamelden Mohamed Elhawary", "Wira Jazair Yahya", "Mohd Azizi Abdul Rahman", "Khairil Anwar Abu Kassim", "Irfan Bahiuddin", "Mohd Hatta Mohammed Ariff"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 235805677, "title": "Deep Learning in Lane Marking Detection: A Survey", "abstract": "Lane marking detection is a fundamental but crucial step in intelligent driving systems. It can not only provide relevant road condition information to prevent lane departure but also assist vehicle positioning and forehead car detection. However, lane marking detection faces many challenges, including extreme lighting, missing lane markings, and obstacle obstructions. Recently, deep learning based algorithms draw much attention in intelligent driving society because of their excellent performance. In this paper, we review deep learning methods for lane marking detection, focusing on their network structures and optimization objectives, the two key determinants of their success. Besides, we summarize existing lane related datasets, evaluation criteria, and common data processing techniques. We also compare the detection performance and running time of various methods, and conclude with some current challenges and future trends for deep learning based lane marking detection algorithm.", "venue": "", "year": 2021.0, "author_names": ["Youcheng Zhang", "Xuechen Zhang", "Jing-Hao Xue", "Qingmin Liao"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 235079883, "title": "Lane marking detection using simple encode decode deep learning technique: SegNet", "abstract": "In recent times, many innocent people are suffering from sudden death for the sake of unwanted road accidents, which also riveting a lot of financial properties. The researchers have deployed advanced driver assistance systems (ADAS) in which a large number of automated features have been incorporated in the modern vehicles to overcome human mortality as well as financial loss, and lane markings detection is one of them. Many computer vision techniques and intricate image processing approaches have been used for detecting the lane markings by utilizing the handcrafted with highly specialized features. However, the systems have become more challenging due to the computational complexity, overfitting, less accuracy, and incapability to cope up with the intricate environmental conditions. Therefore, this research paper proposed a simple encode decode deep learning model to detect lane markings under the distinct environmental condition with lower computational complexity. 
The model is based on SegNet architecture for improving the performance of the existing researches, which is trained by the lane marking dataset containing different complex environment conditions like rain, cloud, low light, curve roads. The model has successfully achieved 96.38% accuracy, 0.0311 false positive, 0.0201 false negative, 0.960 F1 score with a loss of only 1.45% less overfitting and 428 ms per step that outstripped some of the existing researches. It is expected that this research will bring a significant contribution to the field lane marking detection.", "venue": "", "year": 2021.0, "author_names": ["Abdullah Al Mamun", "Poh Ping Em", "Jakir Hossen"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 145050101, "title": "Vision Based Lane Detection and Lane Marking Model Inference: A Three Step Deep Learning Approach", "abstract": "In many advanced driver assistance systems (ADAS) lane detection is often necessary. Vision based lane detection is popular because of its cost efficiency, but it can be easily affected by illumination changes, especially abrupt ones. Moreover, since most camera systems have a very limited angle of view (AOV) a single camera ADAS can only perceive a portion of a highly curved road. This introduces another challenge to ADAS when fitting lane models. In this paper, we propose a method for lane model inference, which uses one of the two lane markings if there is only one lane marking can be seen; or even, using lane marking models from previous moments if there are no lane markings to be seen at the current moment. In addition, we also propose using deep neural networks (DNN) to reduce noise at feature extraction stage. We use two DNNs in our method: a YOLO network for detecting an removing vehicles from images; a CPN network for detecting road surfaces in order to remove noises that are not on road surfaces. We tested our method on a video in which the roads are mostly curved and the lighting conditions can change very fast. We use the distances between our lane marking models and the ground truth to evaluate our method. We see some big improvements in scenarios where the scene suddenly becomes very bright and where the road has a very high curvature.", "venue": "2018 9th International Symposium on Parallel Architectures, Algorithms and Programming (PAAP)", "year": 2018.0, "author_names": ["Yueen Ma", "Vincent Havyarimana", "Jing Bai", "Zhu Xiao"], "n_citations": 7, "n_key_citations": 0, "score": 0}, {"corpus_id": 222007620, "title": "Lane Line Detection via Deep Learning Based Approach Applying Two Types of Input into Network Model", "abstract": "Lane line detection is one of the important modules for Advanced Driver Assistance System (ADAS) that are applied in the autonomous vehicle. This module work by exhibit the position of the road lane marking and providing the details of the geometrical features of the lane line structures into the intelligent system. This paper proposes the lane line marking detection using Fully Convolutional Neural Network (FCN) model by investigating the two types of input fed into the networks. RGB channel (Red, Green, Blue) and Canny edge were used as the inputs to develop in the FCN model. The FCN approach has been proposed as one of the solution methods in mitigating the road lane detection issues due to its great performance in the application of objects detection in image or video. 
Previously, the RGB channel is widely applied in the deep learning method meanwhile, the Canny edge input has not been applied yet in the deep learning method. Therefore, this study investigates the further performance of this model by applying the canny edge as addition input besides applying only the RGB channel. The data collections were acquired from real time data collection. The result shows that the FCN model with the canny edge achieved a slight improvement with 96 compared to FCN with the RGB channel with 92", "venue": "", "year": 2020.0, "author_names": ["Noor Jannah Zakaria", "Mohd Ibrahim Shapiai", "M Abdul Rahman", "Wira Jazair Yahya"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 222097171, "title": "Perceptual Modelling of Unconstrained Road Traffic Scenarios with Deep Learning", "abstract": "In recent times, advanced driver assistance system (ADAS) and autonomous driving have received significant interest in the automotive community. In this regard, lane assistance system and vehicle detection are considered as core modules of ADAS. However, the drawback is that the conventional image processing and vision based techniques are quite slow and computationally expensive. The proposed work circumvents this limitation with the practical use of deep learning (CNN) based detection architecture. This paper proposes the use of CNN inspired detection methods such as Faster RCNN and YOLO for visual perception of road traffic. Notably, they were applied for effective vehicle detection in non disciplined, heterogeneous Indian road traffic. This involved collecting own dataset for Indian urban heterogeneous traffic in both day and night time. An application of YOLO with VGG network resulted in a mAP score of 78.57% On the other hand, Faster RCNN with Inception v2 and ResNet networks resulted in mAP score of 88% and 89.44% on Indian road traffic datasets. This is a significant result; that shows the use of Deep Learning techniques for an efficient visual modelling of unconstrained road traffic scenarios.", "venue": "2020 10th International Conference on Advanced Computer Information Technologies (ACIT)", "year": 2020.0, "author_names": ["N Jaswanth", "Anjali Poornima Karri", "Hrishikesh Venkataraman"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 219509875, "title": "Development of an embedded road boundary detection system based on deep learning", "abstract": "Abstract The ability to sense the surrounding environment is an important developing technology in the field of automated vehicles. Lane line detection could determine a vehicle's travelable area. An embedded road boundary detection system based on deep learning was developed in this study. The system can detect structured and unstructured roads in a variety of situations. To obtain an image with clear lane markings, a convolution auto encoder with the characteristics of noise reduction and reconstruction was used to remove all objects in the images except lane markings. Then, the feature points of the lane line were extracted, and the lane line was fitted with a hyperbolic model. Finally, a particle filter was used for lane tracking. The road boundary detection system was implemented on the NVIDIA Jetson TX2 platform. Three different situations, day, night, and rainy day were selected to demonstrate the performance of the proposed algorithm. 
Additionally, to deal with structured roads, some special scenes, such as shadows, tunnels, degenerate lane markings, and blocked lane markings, were considered. According to the experimental results, the accuracy of the proposed lane detection system for structured and unstructured roads was 90.02%", "venue": "Image Vis. Comput.", "year": 2020.0, "author_names": ["Jau-Woei Perng", "Ya-Wen Hsu", "Ya Zhu Yang", "Chia-Yen Chen", "Tang-Kai Ying"], "n_citations": 4, "n_key_citations": 0, "score": 0}]} -{"query": "Focus group interviews as a data collecting strategy mclafferty", "session_id": 8631150722346756, "user_id": 4209627767508597, "candidates": [{"corpus_id": 2110601, "title": "Focus group interviews as a data collecting strategy.", "abstract": "BACKGROUND Focus group interviews are a method for collecting qualitative data and have enjoyed a surge in popularity in health care research over the last 20 years. However, the literature on this method is ambiguous in relation to the size, constitution, purpose and execution of focus groups. AIM The aim of this article is to explore some of the methodological issues arising from using focus group interviews in order to stimulate debate about their efficacy. DISCUSSION Methodological issues are discussed in the context of a study examining attitudes towards and beliefs about older adults in hospital settings among first level registered nurses, nursing lecturers and student nurses. Focus group interviews were used to identify everyday language and constructs used by nurses, with the intention of incorporating the findings into an instrument to measure attitudes and beliefs quantitatively. CONCLUSIONS Experiences of conducting focus group interviews demonstrated that smaller groups were more manageable and that groups made up of strangers required more moderator intervention. However, as a data collecting strategy they are a rich source of information.", "venue": "Journal of advanced nursing", "year": 2004.0, "author_names": ["Isabella McLafferty"], "n_citations": 677, "n_key_citations": 38, "score": 1}, {"corpus_id": 226248289, "title": "Unintended consequences: a qualitative study exploring the impact of collecting implementation process data with phone interviews on implementation activities", "abstract": "Qualitative data are crucial for capturing implementation processes, and thus necessary for understanding implementation trial outcomes. Typical methods for capturing such data include observations, focus groups, and interviews. Yet little consideration has been given to how such methods create interactions between researchers and study participants, which may affect participants' engagement, and thus implementation activities and study outcomes. In the context of a clinical trial, we assessed whether and how ongoing telephone check ins to collect data about implementation activities impacted the quality of collected data, and participants' engagement in study activities. Researchers conducted regular phone check ins with clinic staff serving as implementers in an implementation study. Approximately 1 year into this trial, 19 of these study implementers were queried about the impact of these calls on study engagement and implementation activities. The two researchers who collected implementation process data through phone check ins with the study implementers were also interviewed about their perceptions of the impact of the check ins. 
Study implementers' assessment of the check ins' impact fell into three categories: (1) the check ins had no effect on implementation activities, (2) the check ins served as a reminder about study participation (without relating a clear impact on implementation activities) and (3) the check ins caused changes in implementation activities. The researchers similarly perceived that the phone check ins served as reminders and encouraged some implementers' engagement in implementation activities; their ongoing nature also created personal connections with study implementers that may have impacted implementation activities. Among some study implementers, anticipation of the check in calls also improved their ability to recount implementation activities and positively affected quality of the data collected. These results illustrate the potential impact of qualitative data collection on implementation activities during implementation science trials. Mitigating such effects may prove challenging, but acknowledging these consequences or even embracing them, perhaps by designing data collection methods as implementation strategies could enhance scientific rigor. This work is presented to stimulate debate about the complexities involved in capturing data on implementation processes using common qualitative data collection methods. ClinicalTrials.gov, NCT02325531. Registered 15 December 2014.", "venue": "Implementation Science Communications", "year": 2020.0, "author_names": ["Inga Gruss", "Arwen E Bunce", "James V Davis", "Rachel Gold"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 213257641, "title": "A mixed method approach for the assessment of demand creation intervention strategy for polio eradication on exclusive breast feeding in Northern Nigeria", "abstract": "The Federal Ministry of Health, Nigeria introduced incentives such as sachets milk powder to increase demand for oral polio vaccine (OPV) This study assessed whether the milk encourages the use of breast milk substitutes thereby dis incentivising exclusive breastfeeding (EBF) in children during the first six months of life. A cross sectional design with mixed method was used for collecting quantitative and qualitative data in Borno and Kaduna states. Questionnaire was administered to 808 caregivers. There were focus group discussions, in depth interviews and observations of an ongoing OPV+ intervention campaign. Quantitative and qualitative data were analysed using STATA 10 and MAXQDA, respectively. Milk was an infrequent component of the incentive package and accounting for only 4.6 and 1.5% in the 3 most recent immunisation campaigns. The high EBF awareness (82.4% was associated with the demand creation campaign which the health service providers used to reinforce EBF messages. Breastfeeding decisions were mainly influenced by family and group norms and not by the sachet of milk powder that was given during the OPV+ There were no indications of inappropriate promotion of foods or any of the incentives. The inclusion of sachet milk in OPV+ kit did not compromise EBF but further enhanced it since the same service providers were responsible for all health interventions in the local government. Using milk powder and other incentives are effective for increasing participation and compliance with uptake of OPV in both states. 
Key words: Polio eradication, incentive, exclusive breastfeeding, demand creation.", "venue": "", "year": 2020.0, "author_names": ["Oladele B Akogun", "Omolola I Olojede", "Adedoyin Adesina", "Sani Njobdi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 195466807, "title": "Management Strategy for Distributing Questionnaires and Interview Guidelines in the Research Data Collection Process", "abstract": "Writing can mean lowering or describing graphic symbols that describe a languageunderstood by someone. For a researcher, management of research preparation is a veryimportant step because this step greatly determines the success or failure of all researchactivities. Before a person starts with research activities, he must make a written plan commonlyreferred to as the management of research data collection. In the process of collecting researchdata, of course we can do the management of questionnaires as well as the preparation ofinterview guidelines to disseminate and obtain accurate information. With the arrangement ofplanning and conducting interviews: the ethics of conducting interviews, the advantages anddisadvantages of interviews, the formulation of interview questions, the schedule of interviews,group and focus group interviews, interviews using recording devices, and interview bias.making a questionnaire must be designed with very good management by giving to theinformation needed, in accordance with the problem and all that does not cause problems at thestage of analysis and interpretation.", "venue": "", "year": 2018.0, "author_names": ["Po Abas Sunarya", "George Iwan Marantika", "Adam Faturahman"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 169681832, "title": "Development Strategy for Farmers Corporation Based of Coffee Cattle Bioindustry in Rejang Lebong Regency, Bengkulu", "abstract": "Farmer Owned Enterprise (FOE) of Bukit Kaba Mandiri in Rejang Lebong Regency is one of the economic institutions that is beneficial for farmers to increase the productivity and efficiency of coffee and cattle integrated farming suitable for its regional potentials. However, FOE is still in constraints due to its lack of organizational management capability in developing the coffee cattle bioindustry hence the research aimed to build a strategy in developing the FOE in coffee and cattle bioindustry. The study was conducted at FOE of Bukit Kaba Mandiri in Rejang Lebong Regency, Bengkulu, from January to October 2018. Focus Group Discussion (FGD) was carried out to collect data with 16 respondents and in depth interviews with FOE administrators. Data collected included strengths, weaknesses, opportunities, and threats faced by FOE which were then analyzed using Strength, Weaknesses, Opportunities, Threats (SWOT) method to formulate an FOE development strategy. The strategy was then compiled based on priorities with Analytical Hierarchy Process (AHP) The results formulated three strategies, namely: (1) increasing production of quality feed to meet the needs of dairy cattle, (2) producing competitively priced compost from cow manure, and (3) establishing market partnerships with coffee exporters. 
Producing competitively priced compost from cow manure is the first priority that FOE needs to develop.", "venue": "Jurnal Tanaman Industri dan Penyegar", "year": 2019.0, "author_names": ["Afrizon Afrizon", "Andi Ishak"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 14029734, "title": "Registry Data Coordinator (RDC) a Proper Accessible Strategy for Improving Road Traffic Injury (RTI) Hospital Based Trauma Registry Systems in Developing Countries and Low Income Countries", "abstract": "Introduction: Evidence suggested that a significant level of trauma mortality can be prevented using registry system. Aim: This study aimed to improve Kashan Hospital Based Trauma Registry System (KHBTRS) for Road Traffic Injury (RTI) Material and methods: After conducting focus group discussion absence of minimum data set (MDS) and poor data collection process (DCP) were identified as main problems for KHBTRS RTI. Proposed MDS were surveyed by 20 experts of trauma research center of throughout the Iran. Then approved MDS applied for trauma registry system data base in form of SQL. DCP were reform from prospective data collection (review of medical record) to concurrent (through the interview) approach. Results: Most of participants for MDS approval belonged to clinical group 13(65% 146 MDS in eighteen main categories were proposed for RTI. The maximum score for each MDS main categories were attributed to body parts injured 220 (100% and patient vital signs 139 (99.29% respectively. Pilot testing of KHBTRS RTI database of 50 (50% riders indicated fully completeness 50 (100% for concurrent approach. It was concluded that based on experts' viewpoints MDS relating to injury nature and place of occurrence have more priority in comparisons to MDS relating to causes of injury. It may attribute to health care providers focus on clinical care and treatment. Conclusion: It was concluded that based on experts' viewpoints MDS relating to injury nature and place of occurrence have more priority in comparisons to MDS relating to RTI prevention; it may attribute to health care providers focus on clinical care and treatment. To develop injury interventions based on given data, recruitment of professionals as registry data coordinator with specific job description to collect and advocacy of injury external causes data seems imperative.", "venue": "Acta informatica medica AIM journal of the Society for Medical Informatics of Bosnia Herzegovina casopis Drustva za medicinsku informatiku BiH", "year": 2018.0, "author_names": ["Zahra Meidani", "Mehrdad Mahdian", "A Ayan", "Mahdis Mohammadzade", "Ali Mohammad Nickfarjam", "Gholam Abbas Moosavi"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 219516775, "title": "Socioeconomic factors influencing farmers' specific adaptive strategies to climate change in Talensi district of the Upper East Region of Ghana", "abstract": "Farmers all over the country have been exposed to various adaptation strategies to climate change. The adaptation options however focus too closely on technical skills and technologies and fail to address critical social factors such as culture, beliefs and values that influence the adoption and effective implementation of new adaptation technologies, skills and capacity. This paper aims to assess the socioeconomic factors influencing farmers' specific adaptive strategies to climate change in Pwalugu and Balungu communities in the Talensi district of the Upper East Region of Ghana. 
This study used purposive sampling technique to select the study communities, whereas simple random sampling technique was used to select a total of 100 respondents from the selected communities. Questionnaires, key informant interviews and focus group discussions were used in collecting data from respondents. This study used detailed statistical test to analyze the data, and the results are presented in the form of figures and tables. This study highlights the legal and institutional context which must be adopted for effective response to climate change impacts in rural communities in Northern Ghana. It also recommends that government and relevant stakeholders should collaborate with financial institutions to ensure that funds are readily available to farmers to enable them to effectively adapt to climate change as well as provide training/workshop programs to farmers to enhance their capacity in planning and implementing effective strategies to climate change.,This study used the integrated methodological approach where quantitative methods were combined with appropriate qualitative methods. According to Sandelowski (2000) this method ensures reliability (the extents to which results are consistent over time) and validity (the means of which measurements are accurate) of the research. A combination of participatory methods, including key informant interviews, household questionnaire surveys and focus group discussions were used, allowing local people the opportunity to participate by sharing their experiences and knowledge to outline possible solutions to the problem at hand. Multiple methods (Yeasmin and Rahman, 2012) are good at reducing the inadequacies of a single method. Cross sectional study was used in designing the research. Variables were measured or determined at the same period in a given population. This method allowed the assessment of practices, attitudes, knowledge and beliefs of a population in relation to a particular event or phenomenon (Olsen and George, 2014),The findings of this study revealed farming as the major occupation in the two communities with males being dominant. Diverse livelihood activities such as fishing, animal/poultry rearing, firewood/charcoal production, hunting and driving were other activities respondents engaged to earn a living. In terms of institutional arrangements, avoidance of bush burning and tree felling were the norms influencing decision making in the two communities. Fear of being punished, animals feeding on some of the grasses, trees inducing rainfall as well as benefits respondents get from trees were the reasons these norms were adhered to in the study area. Access to land, gender dynamics and finance were identified as the socioeconomic factors in the study area. High demands by landowners, last minute change of mind by landowners, limited fertile lands, lack of money to acquire lands, behavior of tenants, number of acres required and lands far from water bodies were the challenges associated with acquiring land in the communities. Access to finance influenced respondents' ability to acquire fertile lands, lands closer to water bodies and any number of acres of their choice. Gender however impeded women adaptation strategies to climate change. Women were not allowed to own land and other property in the form of animals simply because they are seen as migrants and they do not know the history of the land.,This is a master's thesis project. 
This paper shows the socioeconomic factors, which are influencing farmers' specific adaptation to climate change in the Talensi district of Ghana.", "venue": "", "year": 2020.0, "author_names": ["Damian Felladam Tangonyire", "George Agana Akuriba"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 218653116, "title": "Gender Specific Livelihood Strategies for Coping with Climate Change Induced Food Insecurity in Southeast Nigeria", "abstract": "This study assessed the livelihood strategies adopted by husbands and wives within the same households for coping with climate induced food insecurity in Southeast Nigeria. Collective and bargaining approaches were used in collecting individual and intra household level data of 120 pairs of spouses in Southeast Nigeria; husbands and wives were interviewed separately. Focus group discussions, key informant interviews, and household surveys were used to elicit responses from the respondents. Quantitative data for the study were analyzed using percentage, mean scores, and multinomial logit regression analysis. Results of the study revealed that 90% of the wives were more food insecure than their husbands (79.2% The respondents noted that the observed changes in the climate contributed immensely to their food insecurity situation. To cope with food insecurity, a slightly higher proportion (47.3% and 14.2% of wives adopted on farm and non farm strategies, respectively, while men (39.8% adopted more off farm strategies (38.5% Additionally, results of the multinomial logit regression revealed that market distance and credit access significantly influenced the choice of husbands' and wives' engagement in off farm livelihood strategy; sourcing information on climate change issues significantly influenced women's choice of engagement in off farm/non farm strategy; and receiving remittances significantly influenced men's choice of engagement in non farm strategy. The study concluded that, although women play crucial roles in addressing food insecurity within their households, gender specific obstacles typically impede their abilities to cope with climate induced food insecurity.", "venue": "Food Security", "year": 2020.0, "author_names": ["Ifeoma Quinette Anugwa", "Agwu Ekwe Agwu", "Murari Suvedi", "Suresh Chandra Babu"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 216373905, "title": "Climate Change Impact on Fisheries Sector in West Coast of Malaysia: Adaptive Strategy and Measures to Mitigate the Environmental Issues", "abstract": "The main source of fish production resides in fisheries, which eventually contributes to increase in Malaysia's economy. Among the 13 states of Malaysia, Selangor is recorded as a huge landing for marine fish, which is the third largest in number for the West Coast of Peninsular Malaysia. As a result, the livelihood of Selangor rural community especially the fisheries has partly contributed to the income of these communities. However, these fisheries required to be maintained properly for sustainable fisheries development. Moreover, the problems need to be identified that are faced by this coastal area fisheries community, where the liabilities are allied with environmental and social features. Thus. This study aims to conduct an in depth interview with focus group discussion (FGD) session to collect data and determine the problem arises in this community in Selangor of the west coast of Peninsular Malaysia due to climate change. 
Finally, this FGD studies local communities observations on adaptive strategy and measures to alleviate the adverse effects of climate change such as floods, salinity intrusion, coastal erosion, and sea level rise. Consequently, this FGD session output will help the local government to take into consideration the increment of the fishermen incentives and subsidies. As a result, this will in turn help fishermen to develop the socioeconomic situation of these targeted groups.", "venue": "", "year": 2019.0, "author_names": ["Siti Zulaiha Binti Zolkaply", "Lubna Alam", "Labonnah Farzana Rahman", "Md Azizul Bari", "Sharina Abdul Halim", "Mazlin bin Mokhtar"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 221941187, "title": "Determinants of adaptation strategies to climate change among the smallholder farmers in Adama District, Ethiopia", "abstract": "The Ethiopian economy is mainly based on the rain fed agriculture practiced by smallholder farmers. The sector is highly vulnerable to climate change impacts. This study aims to examine the determinants of adaptation strategies to climate change among the smallholder farmers in Adama District, Ethiopia.,A cross sectional survey design was used to collect quantitative data using questionnaire with 351 randomly selected smallholder farmers. To collect qualitative data focus group discussions, key informant interviews and field observations were also used. Triangulated with thematic analysis, descriptive statistics and binary logistic regression model were used for the analysis.,The result indicated that the majority of the smallholder farmers use at least one climate change adaptation strategy in their local areas though the strategy is generally weak. In this regard, some of the dominant climate change adaptation activities identified in the study area are using improved crop varieties, planting trees, watershed management, adjusting planting date and terracing. The result from binary logistic regression model showed that age and sex of household head, as well as their education, family size, access to agricultural extension services and training on climate change significantly influence the practices of adaptation measures.,This study would help the practitioners to modify the existing weak adaptation activities by introducing advanced and technological based adaptation strategies to the rural farming communities.", "venue": "", "year": 2020.0, "author_names": ["Hurgesa Hundera Hirpha", "Sylvester N Mpandeli", "Amare Bantider"], "n_citations": 2, "n_key_citations": 0, "score": 0}]} -{"query": "stereomicroscope morphological dental traits", "session_id": 3755396281071014, "user_id": 586583046287313, "candidates": [{"corpus_id": 146810043, "title": "The role of clinical examination in the detection of permanent maxillary molars with two palatal roots.", "abstract": "BACKGROUND To determine whether the presence of two palatal roots in permanent maxillary molars (PMMs) could be predicted by observing dental morphological traits during the clinical examination. MATERIALS AND METHODS A total of 18 second and 26 third PMMs with two palatal roots (2PR) were examined from the collection of extracted teeth. The reference sample of 44 extracted PMMs with one palatal root was selected such that pairs of morphologically matching PMMs with one and 2PR were formed. The external morphology of these tooth pairs was examined under a stereomicroscope and distinguishing traits were registered. 
The Fisher's exact test was applied to examine differences between second and third PMMs. Additionally, the external morphology of 17 PMM with 2PR in 15 patients was analyzed retrospectively. RESULTS Extracted PMMs with 2PR possessed the following distinguishing morphological traits: crown wider on the palatal half (55.3%), double Carabelli cusps (23.7%), pronounced palatal indentation of the crown (20.5%), thick palatal enamel extension (16.3%), palato radicular groove (11.6%), and palatal enamel pearl (2.3%). Differences between second and third PMMs were not statistically significant (P > .05). At least one distinguishing trait was present on 63.4% and 94.1% of extracted and clinically evaluated PMMs with 2PR, respectively. Omega shaped deformation of the dental arch may be the first clinically observable clue to this root constellation. CONCLUSIONS Clinical examination of tooth morphology and shape of the dental arch is essential for the detection of PMMs with 2PR.", "venue": "Folia morphologica", "year": 2019.0, "author_names": ["Tomaz Hitij", "Iztok Stamfelj"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 202196250, "title": "Intraspecific Variation of One of the Oldest Litopterna (Mammalia) Protolipterna ellipsodontoides, and Redescription of the Species", "abstract": "Abstract. Acknowledgment of intraspecific variation is an important part of a reliable species diagnosis. For mammalian taxonomy, tooth morphology is an especially important trait and, therefore, the morphological variability of these structures is also greatly significant. Here we describe major dental variation present for Protolipterna ellipsodontoides Cifelli (Litopterna) based on a sample of more than 500 teeth and provide a differential diagnosis distinguishing the species from its sister taxa. Initially, the sample was carefully observed under stereomicroscope in order to identify variable traits. Further variable features were also highlighted by a geometric morphometric analysis. We conducted statistical tests to assess the likelihood that the fossils belong to a single species and the single identity of the sample was recovered. A great deal of our results show that structures like the parastyle, hypocone, conular cristae and ectoflexus can show different degrees of development for the same species. The tooth outline may also show some variation in shape and proportions. Many of those features are used as characters of phylogenies, however, the results obtained indicate these traits to be inappropriate for this sort of study, as they are variable within a single species. Some accessory cusps were also found, but they have very low frequency. Furthermore, these traits lack a clear phylogenetic signal and hence are of dubious validity to taxonomy and systematics.", "venue": "Ameghiniana", "year": 2019.0, "author_names": ["Tabata Zanesco", "Lilian P Bergqvist", "Agatha Agnes Pereira"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 213801783, "title": "Analysis of Metric and Morphological Dental Traits in Relatives", "abstract": "Background: Teeth can provide evidence about the nature and extent of variation among populations. Teeth are also valuable evidence in living and nonliving populations for anthropological, genetic, odontologic, and forensic investigations. It is known that dental traits are characterized by low sexual dimorphism. 
This study aims to analyze dental traits of permanent teeth within a group of related individuals on the basis of the frequency of dental morphological and metric traits. Methodology: 82 adult individuals were grouped according to relation and according to gender. Twenty six dental morphological traits were scored from prepared dental casts of all individuals. Dental metric data were recorded for 14 bucco lingual crown dimensions and mesio distal dimensions. Results: The study showed high frequency of tuberculum dentale, Carabelli's cusp and four cusped mandibular second molars. Dental traits with low frequency included winging, interruption groove, congenital absence of incisors, four cusped mandibular first molars, and six cusped mandibular first molars. In addition, there were statistically significant differences between the related and non related groups with respect to the frequency of occurrence of the winging, accessory cusps of maxillary second premolars, hypocone, lingual cusp number of mandibular second premolars, anterior fovea, Deflecting wrinkle, Protostylid, groove pattern of mandibular first molars and cusp number of mandibular second molars. Regarding metric traits, the study demonstrated significant differences between the means of buccolingual diameter of upper canines, upper second molars and lower first premolars of related and unrelated individuals and mesiodistal diameter of upper lateral incisors. Conclusion: low frequency traits would be of greater value for kinship evaluation than the common traits, which are of limited value in kinship evaluation due to their high frequency in different populations.", "venue": "", "year": 2020.0, "author_names": ["Nora Z Abdellah", "Heba A Yassa", "Rana Zeidan"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 217694666, "title": "Chapter 25 The Biocultural Evolution in the Osmore Valley: Morphological Dental Traits in Pre Inca Populations", "abstract": "Bioarchaeological studies on population dynamics in the pre Inca Osmore Valley (Peru) have shown a level of biological affinity between colonies in the valley (Chen Chen) and the people in the Tiwanaku state, suggesting the Tiwanaku expansion brought about the foundation of two colonies and settlements in the Central Osmore Valley (Chen Chen) and perhaps along the coast where, according to some theories, they may have given rise to the Chiribaya. Conversely, archaeological data suggest an absence of cultural contact between the Tiwanaku and the Wari outposts in the Upper Valley. The present study investigates 46 dental nonmetric traits in seven pre Inca groups to provide a geographically expanded view by comparing sites from the coastal region and the Upper Osmore Valley with groups representing the Wari and Moche cultures. Multivariate statistical analyses indicate that the Tiwanaku colony of Chen Chen shows affinity with the Wari and Moche samples, but not with the later coastal Chiribaya collection. Despite the lack of a true Tiwanaku comparative sample, this evidence suggests a biological interaction between ethnically diverse groups in the region. 
However caution must be taken with any final interpretation.", "venue": "", "year": 2016.0, "author_names": ["Andrea Cucina", "Claudia Arganini", "Alfredo Coppa", "Francesca Candilio"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 5556295, "title": "Italian Populations During the Copper Age: Assessment of Biological Affinities Through Morphological Dental Traits", "abstract": "Abstract The Copper Age (3rd millennium BC) was characterized by considerable socioeconomic transformations and coincided with the discovery of metallurgy. In this study we reconstruct the peopling of Italy during this period on the basis of dental morphology traits. Dental remains from 41 sites throughout Italy were analyzed; only three of the sites (Laterza and two from Sicily) span from the late Copper Age to the early Bronze Age. To work with adequate samples, we pooled the collections into nine geographically and culturally homogeneous groups. Dental morphological traits were scored on 8,891 teeth from 1,302 individuals using the ASUDAS scale. The correlation between the mean measure of divergence and geographic distances (calculated as air distances) was computed. Multidimensional scaling with the minimum spanning tree and maximum likelihood methods was applied to assess the relationships between groups. The results revealed a substantial genetic homogeneity among the populations throughout the Italian peninsula during the Copper Age with the exception of Sardinia, which tends to diverge from the continental samples. Phenetic and geographic distances correlate highly significantly only when the southern samples from Sicily and Laterza are removed from the analysis, which indicates that these groups may have experienced genetic admixture with external populations.", "venue": "Human biology", "year": 2009.0, "author_names": ["Rita Vargiu", "Andrea Cucina", "Alfredo Coppa"], "n_citations": 26, "n_key_citations": 1, "score": 0}, {"corpus_id": 223187256, "title": "Study of a mortality crisis in the catacomb of Saints Peter and Marcellinus, Rome (1st 3rd century AD) Assessment of biological affinities of the population through morphological dental traits and stable isotope analysis (d13C, d15N, d18O)", "abstract": "", "venue": "", "year": 2012.0, "author_names": ["Kevin Salesse", "Elise Dufour", "Christopher M Wurster", "Jaroslav Bruzek", "R Giuliani", "Dominique Castex"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 231124112, "title": "The Pleistocene Holocene transition in Italy The contribution of the morphological dental traits", "abstract": "", "venue": "", "year": 1999.0, "author_names": ["Alfredo Coppa", "Andrea Cucina", "Rita Vargiu"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 56391562, "title": "Technical Note: The Definition of New Dental Morphological Variants Related to Malocclusion: Traits of Malocclusion", "abstract": "Since the codification of the Arizona State University Dental Anthropology System over 25 years ago, few additional morphological traits have been defined. This work serves to expand the current suite of traits currently collected by biological anthropologists. These traits surround various issues of malocclusion and follow clinical definitions of these traits as well as incorporate observed population variation in character states. These traits include issues of spacing (i.e. diastema and crowding) as well as mandibular and maxillary occlusion (i.e. 
overbite, underbite) A discussion of the etiology and utility of these traits in bioarchaeological and forensic anthropological research is also given.", "venue": "", "year": 2018.0, "author_names": ["Marin A Pilloud"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 164616067, "title": "Desarrollo de la Investigacion sobre Variacion Morfologica de Poblaciones Historicas Sudamericanas Utilizando Rasgos Dentales No Metricos Development of Research on Morphological Variation of Historical South American Populations Based on Non metric Dental Traits", "abstract": "FONSECA, G. M. ARAMBURU, G. RODRIGUEZ, I. BOLLINI, G. A. ATENCIO, J. P. BERTA, M. J. LOPEZ LAZARO,S. CANTIN, M. LISSERA, R. G. Desarrollo de la investigacion sobre variacion morfologica de poblaciones historicas Sudameri canas utilizando rasgos dentales no metricos. Int. J. Morphol. 34(1):116 126, 2016.RESUMEN: El analisis de rasgos no metricos dentales ha logrado establecer relaciones biologicas de grupos humanos pasadosy actuales con un alto valor taxonomico. Aunque Sudamerica ha sido objeto de un numero considerable de investigaciones sobrepoblamiento, migraciones y mestizaje, son relativamente pocos los estudios que han utilizado informacion de rasgos dentales par a estefin, con las consiguiente ausencia de datos en amplias zonas geograficas. Se realizo una revision sistematica de la literatura en MEDLINE,SciELO, REDALYC y LILACS, sin restriccion de fecha de publicacion. Se incluyeron articulos completos y disponibles primarios ysecundarios en espanol, ingles y portugues donde se realice el analisis de rasgos morfologicos dentales en poblaciones sudamericanascon un contexto historico anterior al siglo XX. Los articulos seleccionados fueron evaluados por dos investigadores de manera i ndepen diente. La busqueda arrojo 2210 articulos de los cuales 19 cumplieron los criterios de inclusion, a los que se agregaron 9 luego de unabusqueda manual complementaria. Existe un desarrollo no equilibrado de la investigacion sudamericana, tanto en el foco geograficodonde esta se realiza, como de los paises y filiaciones de sus autores. Aunque se han logrado estandarizar los instrumentos de valoracionde esos rasgos, se sugiere promover una profesionalizacion interdisciplinaria, el apoyo internacional de sus proyectos y el abordajeholistico de sus contenidos para potenciar la aplicabilidad de su valor taxonomico a poblaciones actuales.PALABRAS CLAVE: Antropologia dental; Rasgos no metricos; Sudamerica; Odontologia forense.", "venue": "", "year": 2016.0, "author_names": ["Gabriel M Fonseca", "Guillermo Aramburu", "Ivana Franco Rodriguez", "Gabriel A Bollini", "Juan P Atencio", "Maria J Berta", "Sandra Lopez-Lazaro", "Mario Cantin", "Rosa G Lissera"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 221842462, "title": "Interspecific variation of seed morphological and micro morphological traits in the genus Vicia (Fabaceae)", "abstract": "Seed macro and micro morphology were analyzed to evaluate their capacity to discriminate species in the genus Vicia (Fabaceae) To assess the interspecific variation of the taxa in the genus Vicia, 41 accessions were obtained from the USDA ARS germplasm collection in the USA and 19 accessions were collected from Korea. Seed morphological characteristics such as shape, color, mottling, finish, length, width, diameter, hilum shape, hilum color, hilum length, and lens distance from the hilum were examined under a stereomicroscope. 
Testa texture characteristics such as testa pattern, papillae type, density, height, ribbing, surface deposits, and peaks topped with wax were examined under scanning electron microscopy. Various gross morphological traits of seeds of Vicia species have been analyzed and compared. The present study revealed significant variation in testa traits. Testa were papillose and papillose with mounds, the latter being observed only in Vicia lathyroides. The present study revealed 20 key traits that could be used to diagnose Vicia species and classify them.", "venue": "Microscopy research and technique", "year": 2020.0, "author_names": ["Seahee Han", "Raveendar Sebastin", "Kyung Jun Lee", "Xiaohan Wang", "Myoung-Jae Shin", "Seong-Hoon Kim", "Sookyeon Lee", "Jung-ro Lee", "Gyu-Taek Cho", "Do yoon Hyun", "Jong-Wook Chung"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "How to Teach Speaking", "session_id": 56289558555533, "user_id": 7016621979519412, "candidates": [{"corpus_id": 61919653, "title": "How to Teach Speaking", "abstract": "Learning a language and being able to speak it do not go hand in hand. This book provides both structured activities to get students speaking and ideas for developing confidence in using English outside the classroom. Essentially, this book: examines the different approaches and activities that can be used for teaching and testing speaking. covers areas of speech such as articulation, fluency and register. looks at classroom approaches such as drilling, discussions, drama, dialogues and conversation.", "venue": "", "year": 2005.0, "author_names": ["Scott Thornbury"], "n_citations": 705, "n_key_citations": 73, "score": 1}, {"corpus_id": 150360749, "title": "How To Teach Speaking Based on The Principle of Multiple Intelligences", "abstract": "ABSTRAKTujuan dari penelitian ini adalah untuk mendeskripsikan proses implementasi dari kecerdasan majemuk di dalam pengajaran speaking. Metode yang digunakan pada penelitian ini adalah metode kajian pustaka. Data penelitian diperoleh dari buku buku, jurnal jurnal ilmiah, artikel, dan website. Data dianalisa dengan menggunakan teknik kualitatif melalui analisis kritis.Hasil yang didapat dari penelitian ini adalah adanya tiga langkah dalamimplementasi kecerdasan majemuk di dalam pengajaran speaking. Langkah langkahnya adalah: a) perencanaan, b) implementasi, dan c) evaluasi. Di dalam perencanaan, terdapat beberapa langkah untuk membuat rencana pembelajaran berdasarkan kecerdasan majemuk. Langkah langkah tersebut adalah a) fokus pada spesifik objek atau topic, b) menanyakan pertanyaan yang berkaitan dengan kecerdasan majemuk, c) mempertimbangkan kemungkinan, d) brainstorm, e) memilih kegiatan yang sesuai, f) membuat rencana, g) mengimplementasikan rencana. Penggunaan kecerdasan majemuk dalam pengajaran speakingmemfasilitasi kegiatan belajar dan mengajar berdasarkan kecerdasan siswa, ini sangat efektif membantu siswa dalam memahami materi. Guru juga seharusnya menjalin hubungan dengan guru lain, para siswa, dan juga orang tua siswa untuk mendapatkan data perkembangan siswa secara spesifik. 
Dapat disimpulkan bahwa kecerdasan majemuk mampu mendorong kegiatan pengajaran speaking. Kata Kunci multiple intelligences, kecerdasan majemuk, pengajaran speaking, teaching speaking.", "venue": "", "year": 2018.0, "author_names": ["Amanda Ayu Septary"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 151916831, "title": "A Library Research on How to Teach Speaking Using World Style Debate", "abstract": "The objectives of this research are: (1) to describe the implementation of World Style debate to teach speaking; and (2) to explain how the World Style debate improves students' speaking competence in Senior High School. The research is a result of library research entitled \"How to Teach Speaking Using World Style Debate\". The data of the research were obtained from books and scientific journals which explain World Style debate or British Parliamentary debate. The research data were analyzed by using qualitative techniques through critical analysis. The research findings prove that the World Style debate is useful to teach speaking, the benefit of World Style debate is viewed from several dimensions. First, World Style debate can improve students' speaking ability, Second, World Style debate can improve the students' involvement in learning process, Third, World Style debate also promote students' critical thinking skill, Fourth, World Style debate can increase the motivation and excitement, Fifth, World Style debate can help the students to improve subject knowledge and easier to understand the material, Sixth, World Style debate can develop the students' oral communication skill. World Style debate is a collaborative teaching strategy that benefits students' achievement and students' interest. As a suggestion, to develop the students' speaking ability teachers should apply World Style debate as an interesting activity to teach speaking skill. Key Words: World Style debate, speaking, library research", "venue": "", "year": 2016.0, "author_names": ["Aditya Ristianang"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 60909011, "title": "How to Teach Speaking Skill", "abstract": "One of the main concerns of most language teachers is how to help language learners to develop satisfying language proficiency. In this regard, speaking proficiency has received the greatest attention among both the language teachers as well as the language learners. This is because speaking is a crucial part of the language learning process. The major goal of teaching speaking skill is communicative efficiency. Language learners should be able to make themselves understood by using their current proficiency. They should try to avoid confusion in the message because of the faulty pronunciation, grammar, or vocabulary. In the same line, a common characteristic of many language classes is a heavy focus on the language system. Vocabulary and grammar seem to gain far more attention than the skills needed to use this vocabulary and grammar. To help students develop communicative efficiency in speaking, instructors can use activities that combine language input and communicative output. To this end, the present paper tries to take a closer look at the type of activities that language teachers can utilize to promote speaking proficiency. Accordingly, effective instructors can teach students speaking strategies by using minimal responses, recognizing scripts, and language to talk about language. 
These instructors help students learn to speak so that the students can use speaking to learn. Key words: Teach, Language proficiency, Speaking skill", "venue": "", "year": 2012.0, "author_names": ["Taher Bahrani", "Rahmatollah Soltani"], "n_citations": 32, "n_key_citations": 2, "score": 0}, {"corpus_id": 217285592, "title": "How to teach speaking skills to the students of HeI", "abstract": "", "venue": "", "year": 2017.0, "author_names": ["Pazylov Elyor Abduvayit Ugli"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 114172276, "title": "THE IMPLEMENTATION OF CHAIN STORY GAME TO TEACH SPEAKING IN RECOUNT TEXT FOR EIGHTH GRADERS OF SMPN 39 SURABAYA", "abstract": "THE IMPLEMENTATION OF CHAIN STORY GAME TO TEACH SPEAKING IN RECOUNT TEXT FOR EIGHTH GRADERS OF SMPN 39 SURABAYA Kharisma Narendra English Education Faculty of Languages and Arts State University of Surabaya unesakharisma@gmail.com Drs. Fahri, M.A. English Education Faculty of Languages and Arts State University of Surabaya fahri@unesa.ac.id Abstrak Penelitian ini fokus pada penerapan Chain Story Game untuk Mengajarkan Speaking dalam Recount text dan tujuan dari penelitian ini adalah untuk memberikan penjelasan yang jelas tentang mengajarkan Speaking melalui Chain Story Game Penggambaran nya meliputi (1) Bagaimana Chain Story Game di terapkan dalam mengajarkan Speaking Dalam bentuk Recount text pada siswa kelas delapan di SMPN 39 Surabaya, dan (2) bagaimana siswa siswa merespon penerapan Chain Story Game pada pengajaran Speaking dalam Recount text. Penulis menerapkan penelitian qualitative dan menggunakan 2 instrumen penelitian ,yakni Lembaran Observasi dan kuesioner. Hasil dari penelitian ini menunjukan bahwa Penerapan Chain Story Game untuk Mengajarkan Speaking Dalam Recount text pada siswa kelas delapan di SMPN 39 Surabaya sudah di lakukan dengan sangat baik. Itu terlihat dari tahap awal aktifitas ,selama aktifitas, hingga pada akhir aktifitas. Sehingga respon para siswa terhadap materi pembelajaran Recount text dari guru juga sangat baik. Dan para siswa merasa lebih mudah menerima materi dengan metode cooperative learning yang di terapkan dalam dalam bentuk Chain Story Game. Dengan Chain story game, para siswa menunjukan keberanian untuk tampil dalam proses belajar mengajar, yang mana di tunjukan dari keaktifan siswa untuk tampil di depan kelas untuk bercerita dan membangun kepercayaan diri dalam menyampaikan pendapat dalam kelompok mereka. Chain Story Game membuat siswa merasa menikmati dalam memahami materi pelajaran recount text. Kata Kunci Speaking,Recount text,Game,Chain story game Abstract This research focuses on the implementation of Chain story game to teach speaking in Recount text. And, the purpose of this research is to give the brief explanation about teaching speaking trough Chain story game The description includes: (1) How is \"Chain Story Game\" implemented in teaching speaking in the form of recount text to the eight graders of SMPN 39 Surabaya, and (2) How are the students response the implementation of chain story game in teaching speaking in recount text to the eighth graders of SMPN 39 Surabaya. The writer applies qualitative research and uses 2 research instruments, they are: observation sheet and questionnaire. The result of this research shows that The implementation of chain story game to teach speaking for eighth graders of SMPN 39 surabaya had done very well. 
It showed from the stage of Pre Activity until Post Activity.Then, the students were very good in response the lesson material of Recount text from the teacher. The students perceived easier when they received the material by using Cooperative Learning method which implemented in the form of chain story game. By Chain story game, the students showed the bravery in the teaching learning process. It showed from the students were active in performing in front of the class to tell story and developing confidence in delivering opinion in their group. Chain story game made the students perceived enjoy in understanding the lesson material of Recount text. Key Words: Speaking, Recount text,Game,Chain story game", "venue": "", "year": 2016.0, "author_names": ["Kharisma Narendra"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 82275397, "title": "How to teach Speaking", "abstract": "", "venue": "", "year": 2006.0, "author_names": ["Neil Mcbeath"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 190434641, "title": "Speaking Frames: How to Teach Talk for Writing: Ages 10 14", "abstract": "", "venue": "", "year": 2010.0, "author_names": ["Sue Palmer"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 62150861, "title": "APPLYING THE \"WORD CHAIN\" GAME TO TEACH DESCRIPTIVE SPEAKING TO THE EIGHT GRADERS IN SMPN 26 SURABAYA", "abstract": "Abstrak Selamabeberapatahun, kegiatanspeakingdalampelajaranbahasaInggrisdidominasiolehpara guru.Para muridkekurangankesempatanuntukmelatihketerampilanspeakingmereka.Kondisiinimembuatpenguasaankemampuanspeakingmenjadilebihsulitbagimerekadan guru tidakbisamencapaitujuandaripengajaranspeaking. Salah satutujuanpengajaranspeakingdanmungkinmenjaditujuanakhiradalahmembuatmuridmampuberbicaradalambahasa targetdengan natural (Kayi: 2006) Dalampenelitianini, penelitimenggunakanpermainan \"word chain\" yang telahdimodifikasi, yang manabiasanyadigunakanhanyauntukmeningkatkanperbendaharaan kata saja, untukmengajarspeaking deskriptif. Penelitianinimenggunakanmetodepenelitiandeskriptifkualitatif, karenapenelitianinibertujuanuntukmendeskripsikandanmenjelaskanfenomenaalami: apa yang sedangterjadi?,Mengapahaltersebutterjadi? danbagaimanahaltersebutterjadi? (Chariri, 2009:9) Penulisfokuspadapenerapanpermainan \"word chain\" untukmengajarspeaking descriptive danresponsiswaterhadappenerapantersebut.Padabagiananalisis data, penulismendeskripsikan data yang diperolehdariobservasi, bagaimana guru menerapkantekniktersebut, bagaimanapartisipasisiswa, danbagaimanaresponsiswa. Hasil yang diperolehmenunjukkanbahwapermainan \"word chain\" dapatditerapkandalamkelasspeakingketikamaterinyaadalahteksdeskriptif. Penulismenemukanbahwapermainantersebuttidakmembutuhkanpersiapan yang lama, tidakmemerlukaninstrumenatau media, danperaturannyamudahuntukdimengerti.Permainaninimendorongsiswauntukberpartisipasilebihaktifdalamkelas, lebihbanyakberbicaradalambahasa target, danmembuatsiswalebihnyamandalammengikutipelajaran. Kata Kunci: keterampilanspeaking, permainan \"word chain\" teksdeskriptif Abstract For many years, speaking activity in English class has been dominated by teachers. The students lack opportunities to practise their speaking skill. This condition makes mastering speaking ability is more difficult to them and teachers could not fulfill the aims of teaching speaking. 
One of the aims of teaching speaking and may be the final target of it is to enable students to speak the language naturally (Kayi: 2006) In this study, the researcher uses the modified word chain game, which is commonly used to improve students' vocabulary, to teach descriptive speaking. This study uses descriptive qualitative research method, because it aims to describe and to explain a natural phenomenon: what is happening? why is it happening? and how is it happening? (Chariri, 2009:9) The writer focuses on the application of the word chain game to teach descriptive speaking and the students' responses toward it. In the data analysis, the writer describes the data gained from an observation, how the teacher applied the technique, how the students participated, and how the students' responses were. The result shows that the word chain game is applicable in speaking class where the material is descriptive text. The writer found that game does not require much preparation, does not need any instrument or media, and its rules are easy to understand. It encourages students to participate actively in the class, to speak the target language more, and makes the students enjoy the lesson. Keywords: speaking skill, \"word chain\" game, descriptive text", "venue": "", "year": 2015.0, "author_names": ["Achmad Yanuar Firmansyah"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 69545918, "title": "Waves: Scaffolding Self regulated Learning to Teach Science in a Whole Body Educational Game", "abstract": "This study employed mixed methods to investigate the efficacy of scaffolding self regulated learning prompts within a whole body educational game, Waves. This game was designed to teach middle school aged children basic concepts of waves by moving their bodies to mimic the motions of waves, physically experiencing different velocities and wavelengths. Textual prompts intended to scaffold self regulated learning behaviors respond to learner actions (or non actions) within Waves. The adult facilitator reinforced the in game prompts by speaking them aloud. This study is framed around the research questions: (1) How does a whole body educational game effectively teach players about STEM concepts? (2) How can self regulated learning be effectively scaffolded in a whole body educational game? and (3) Are self regulated learning scaffolds utilized the same way by all players? A quantitative pre post assessment of learning and self regulation skills was further elucidated by a case study analyzing the recorded discourse of two partners while participating in the larger study.", "venue": "", "year": 2019.0, "author_names": ["Emily Kuzneski Johnson"], "n_citations": 5, "n_key_citations": 0, "score": 0}]} -{"query": "t\u00fcrkiye b\u00f6lgeleri sosyal yap\u0131lar\u0131", "session_id": 7399927692502409, "user_id": 3895612399012408, "candidates": [{"corpus_id": 163780445, "title": "Turkiye'de Yenilikci Girisimcilerin Sosyo Ekonomik Durumlari Uzerine Sosyolojik Bir Arastirma (Teknoloji Gelistirme Bolgeleri Ornegi)", "abstract": "CaliGmada Turkiye'de Teknoloji GeliGtirme Bolgelerinde faaliyet gosteren giriGimcilerin sosyo ekonomik ve kulturel ozellikleri ile iGletmelerin ekonomik yapilari anket ve mulakat teknikleri kullanilarak analizedilmiGtir. Bu ozelliklerden yola cikarak hizli buyuyen ve buyumeyen iGletmeler karGilaGtirilmiGtir. GiriGimcilerin baGarilariyla iliGkili sosyal ve ekonomik faktorler tespit edilmiGtir. 
GiriGimcilerin iGe baGlama sureclerinde ve baGarilarinda; ekonomik ve politik etkenlerin yani sira aile yapilari, aldiklari egitim, icinde yaGadiklari sosyo kulturel ortam, giriGimcilerin deneyimleri, iGletme donemi faaliyetleri ve kullanabilecekleri sosyal sermaye duzeyleri etkilidir.", "venue": "", "year": 2012.0, "author_names": ["Mehmet Akif Cansiz"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 153761226, "title": "KURESELLESEN DUNYADA BOLGESEL KALKINMA DINAMIKLERI, KAMU POLITIKALARI VE BOLGESEL KALKINMA AJANSLARI", "abstract": "Her ulkede kendilerine has ekonomik yapilari, gelisme duzeyleri ve sistematigi bulunan bolgeler bulunmaktadir. Bolgeler arasinda ekonomik, fiziki ve sosyal sartlar bakimindan farklilasmalar meydana gelmekte ve bolgelerarasi gelismislik farklari ekonomilerin gelismislik ve kalkinma duzeylerini etkileyebilmektedir. Bolgesel kalkinma, ulkenin cesitli bolgelerinde ekonomik ve sosyal yapinin iyilestirilerek kaynaklarin etkin dagilimina, ekonomik ve sosyal butunlesmenin saglanmasina ve bolgeler arasinda refah seviyesinin artmasina imkan saglamaktadir. Her ekonomi, bolgeleri arasindaki dinamikleri etkinlestirmek, bolgesel kalkinmasini saglamak ve surdurmek icin kalkinma planlarindan, kamu politikalarindan ve bolgesel kalkinma ajanslarindan yararlanmaktadir. Calismada, kuresellesen dunyada bolgesel kalkinma cabalari teorik ve uygulama duzeyinde belirtilerek Turkiye'nin bolgesel kalkinmasi icin ele aldigi kamu politikalari ve kalkinma planlari degerlendirilmistir", "venue": "", "year": 2015.0, "author_names": ["Ahmet Tekin"], "n_citations": 5, "n_key_citations": 0, "score": 1}, {"corpus_id": 191286490, "title": "Nusayrilerin Sosyal Yapilari ve Cumhuriyetin Ilk Yillarinda Turkiye'de Yasayan Bu Topluluga Devletin Yaklasimlari", "abstract": "Nusayri kavrami ile ilgili olarak farkli bakis acilarina dayali pek cok tanimlama yapilmistir. Bu calismada, Nusayrilik konusunda yapilan kavramsal cercevelere yer verilerek bu cerceveler aciklanmaya calisilmistir. Osmanli Devleti donemindeki sosyal ve hukuki statuleri aciklandiktan sonra Turkiye Cumhuriyeti'nin ilk yillarinda devletin bu topluluga yaklasimi uzerinde durulmustur. Bu yaklasim, hem Cumhuriyetin ilk yillarinda yapilan akademik calismalardan hem de devlet tarafindan yaptirilmis olan raporlardan faydalanilarak ortaya konulmustur. Bu calismalardan hareketle, devletin Nusayrileri Turk milletini olusturan unsurlardan biri olarak degerlendirdigi ve bu cercevede bu topluluga yonelik politikalar gelistirmeye calistigi gorulmektedir. Anahtar Kelimeler: Alevi, Nusayri, sosyal yapi, Osmanli devleti, millet sistemi, takiyye, Turkiye Cumhuriyeti.", "venue": "", "year": 2010.0, "author_names": ["Erdal Aksoy"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 226320075, "title": "Suc Gelir Dagilimi Iliskisinde Mekansalligin Etkisi: Turkiye'de Duzey 2 Bolgeleri Icin Bir Analiz", "abstract": "Suc, tum sosyal bilimcilerin oldugu gibi iktisatcilarin da oldukca ilgisini ceken bir konudur. Suc ile pek cok makroekonomik degisken arasinda farkli duzeylerde iliskiler bulunmaktadir. Bu degiskenlerden biri de toplumda yaratilan gelirin birimler arasinda adil bir sekilde dagitilmasi durumunu ifade eden gelir dagilimidir. Toplumlarda suc olgusunun beslenmesinde gelir dagilimindaki bozulmalarin etkili oldugu sonucuna ulasan birtakim calismalar mevcuttur. 
Bu calismada, literaturden farkli olarak suc ve gelir dagilimi arasindaki iliskinin mekansallik icerip icermedigi yani bolgelerin sinir komsuluklarinin suc gelir dagilimi iliskisinde belirleyici bir ozellik tasiyip tasimadigi arastirilmaktadir. Bu amacla Turkiye Istatistik Kurumu (TUIK) bolgesel veri tabanindan elde edilen 2016 yilina ait veriler kullanilarak Turkiye'de gelir dagilimi esitsizligi ile mala karsi islenen suclar arasindaki iliski yatay kesit verilerle IBBS 2 (Istatistiki Bolge Birimleri Siniflandirmasi) duzey bolgeleri icin mekansal ekonometrik yontemler kullanilarak incelenmistir. Mekansal belirleme testlerinin sonuclari, Mekansal Hata Modelinin en uygun model oldugunu gostermektedir. Calismanin bulgulari, Turkiye'de IBBS 2 duzey bolgeleri icin suc ile gelir dagilimi arasinda pozitif ve anlamli bir iliski oldugunu gosterirken mekansallik etkisinin de bulunduguna isaret etmektedir. Sonuc olarak, Turkiye'de 26 IBBS 2 duzey bolgesinde suc gelir dagilimi iliskisinde bolgelerin sinir komsuluklarinin etkili oldugunu soylemek mumkundur. Bu durum gerek gelir dagilimi gerekse sucu onlemeye yonelik politikalarda bolgelerin komsuluk iliskilerinin de dikkate alinmasi gerektigine isaret etmektedir.", "venue": "", "year": 2020.0, "author_names": ["Ugur Capar", "Nihal Yayla"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 146032744, "title": "TURKIYE SOSYAL GUVENLIK KURUMU 2010 2015 YILLARI IS KAZASI, MESLEK HASTALIGI VE MORTALITE SAYILARININ ILLERE GORE STANDARDIZASYONU", "abstract": "Amac: Ulkelerin durumunu gostermesi bakimindan onemli konulardan olan is kazalari, meslek hastaliklari ve buna bagli olum hizlari verilerini; Turkiye'de sigortali isciler ve isyerleri sayisina gore sehirler bazinda degerlendirmeyi amacladik. Gerec ve Yontem: 2010 2015 yillari arasinda, Turkiye Cumhuriyeti (T.C. Sosyal Guvenlik Kurumu (SGK) istatistik yilliklarindan, sehirler bazinda sigortali isci ve isyerleri verileri cizilmis ve yorumlanmistir. Karistirici etkenlerin etkilerini yorumlarken kontrol edebilmek amacli dolayli (indirekt) standardizasyon teknigi kullanilmistir. Calisma epidemiyolojik bir arastirma olup, gozlemsel tanimlayici tiptedir. Sonuclar Ulusal Mesleki Saglik ve Guvenlik Hedefleri ile karsilastirilmistir. Bulgular: Madenciligin on planda oldugu Bilecik, Zonguldak ve Manisa illerinin is kazalari ve meslek hastaliklarinin standardizasyonunda ilk sirayi aldiklari gorulmektedir. Meslek hastaliklarinin standardizasyonunda ozellikle son yillarda Kutahya ve mortalitede de Guneydogu Anadolu Bolgeleri en ust basamaklardadir. Sonuc: Soma Manisa ve Ermenek Karaman'daki madencilik is kazasi sonuclarinin, 2014'deki mortalite standardizasyon verilerine etki ettigi gozlenebilmektedir. Turkiye'de 2010 yilindan 2015'e kadar gerceklesen is kazalarinin, meslek hastaliklarinin ve buna bagli olum hizlarinin standardize edilmesi isini yurutuyoruz ve bu konuda daha ileri calismalar yapilabilmesi icin kriterler olusturuyoruz. 
Daha iyi bir degerlendirme ve analiz icin bildirimlerin daha sistematik ve kapsamli tutulmasinin onemli oldugu gorulmektedir.", "venue": "", "year": 2019.0, "author_names": ["O F Bayramlar", "Elif Ezirmik", "Halim Issever", "Zeynep Bayramlar"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 188450829, "title": "1990'lardan Gunumuze Turkiye'de Sosyal Sermaye ve Sivil Toplum: Bolgeler Arasi bir Karsilastirma", "abstract": "Bu calismanin amaci, 1990'lardan gunumuze, Turkiye'de sivil toplum kuruluslarina (STK) katilimin ne sekilde degistigini analiz ederek, STK'larin sosyal sermaye uretebilme potansiyelini tartismaktir. Bu ozelligi ile calisma, Turkiye'nin sosyal sermayesi uzerine yapilan calismalara katki saglamayi amaclamaktadir. Analiz icin Dunya Degerler Arastirmasi'nin (DDA) 1994 2014 Turkiye verisi kullanilmistir. Bulgular, 2000'li yillarda Turkiye'de STK katilim oranlarinin dustugunu; geleneksel cikar gruplarina katilimda gozlenen dususlerin dikkat cekici oldugunu ve sosyal sermaye literaturunun onem atfettigi yeni siyasi hareketler ve kendini ifade etme degerleri ile ilgili STK turlerine katilim oranlarinin, Turkiye'nin bolgeleri arasinda ciddi farkliliklar gosterdigini ortaya koymaktadir. Calisma, ayrica, ayni donemde, STK katilimcilarinin degisen demografik ozelliklerine de dikkat cekmektedir. 2000'li yillarda, daha fazla kadin ve genc, STK'lara katilim gostermis, STK katilimcilarinin ortalama egitim seviyeleri de yukselmistir.", "venue": "", "year": 2018.0, "author_names": ["Cerem I Cenker-Ozek"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 166186208, "title": "Ortaokul Sosyal Bilgiler ve Esdeger Derslerin Turkiye, Azerbaycan ve Turkmenistan'daki Ogretim Programlarinin Yapi Yonunden Karsilastirilmasi", "abstract": "Bu calisma; Turkiye'deki ortaokullarda (5. 6. 7. ve 8. siniflarda) okutulan Sosyal Bilgiler dersi ve Turkiye Cumhuriyeti Inkilap Tarihi ve Ataturkculuk dersi ogretim programlarinin yapilari ile Azerbaycan ve Turkmenistan'daki Sosyal Bilgiler dersi ile esdeger derslerin ogretim programlarinin yapilarini (amac, icerik, ogrenme ogretme sureci ve olcme degerlendirme) inceleyerek benzerlik ve farkliliklari ortaya koymayi amaclamaktadir. Bu calismayi diger calismalardan ayiran ve orijinal kilan tarafi: simdiye kadar ulkemizde okutulan Sosyal Bilgiler dersi ve Turkiye Cumhuriyeti Inkilap Tarihi ve Ataturkculuk dersi ogretim programlarinin yapilarinin Azerbaycan'da ve Turkmenistan'da okutulan Sosyal Bilgiler ile esdeger derslerin ogretim programlarin yapilariyla derinlemesine karsilastiran bir calismanin olmamasi ve bu calismanin bu eksikligi gidermek adina onemli olmasidir. Bu calisma Turk egitim sisteminin eksikliklerini tespit ederek bu eksikliklerin giderilmesi acisindan da onemlidir. Arastirmada nitel arastirma modellerinden dokuman incelemesi metodu kullanilmistir. Ayrica arastirmada var olan bir durum ortaya konmak amaclandigindan tarama modeli kullanilmistir.", "venue": "", "year": 2017.0, "author_names": ["Baris Metin", "Kamil Uygun", "Mehmet Oran"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 225335288, "title": "YENI KORONAVIRUS (KOVID 19) PANDEMISIYLE MUCADELEDE TURKIYE DEVLETININ IZLEDIGI STRATEJIK ILETISIM", "abstract": "Butun dunyayi etkisi altina alan yeni tip koronavirus (19) ile devletlerin mucadelesi surmektedir. Salginin saglik, ekonomi, sosyal yasam, siyasal yapilari nasil etkileyecegi tartisilmaktadir. 
Salgindan sonra hicbir seyin eskisi gibi olmayacagi, dunyanin ya \"yeni normal\"e ya da yeni bir duruma donusecegi kabul edilmektedir. Sorunun kuresel olmasi, kuresel isbirliklerini gerektirmektedir. Ayni zamanda ulusal onlemler alinmasi da beklenmektedir. Devletler salginla mucadelede kapasiteleri, sosyal yapilari, ekonomik duzeyleri ve siyasal duzenlerine bagli olarak farkli yol ve yontemler izlemektedir. Yine, siyasal duzenlerinden kaynakli olarak vatandaslariyla ve diger devletlerle iliskilerinde izledigi iletisim yol ve yontemleri de birbirinden ayrilmaktadir. Bu arastirmada, Turkiye Cumhuriyeti Devleti'nin salginla mucadelede izledigi stratejik iletisim uzerinde durulmaktadir. Cumhurbaskanligi Iletisim Baskani Fahrettin Artun'un 12 Mart 01 Haziran 2020 tarihleri arasinda konuyla ilgili attigi tweet'lerin icerik cozumlemesi yapilmaktadir.", "venue": "", "year": 2020.0, "author_names": ["Ferhan MUTLUER - GUNDUZ", "Nazar Bal"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 225905612, "title": "Sinir Bolgelerinde Sosyo Mekansal Etkilesim ve Yonetisim: Turkiye Ornegi", "abstract": "Ulus devlet sinirlarinin gecirgenliklerinin artmasi ile sinir bolgeleri eskiye kiyasla daha yogun mal ve insan akisina ev sahipligi yapmakta ve ceperde kalan dislanmis mekanlar olmak yerine, daha merkezi ve onemli aktivitelerin mekanlarina donusmektedir. Hacimsel olarak artan etkilesimler, sinirin cok boyutlu ve analitik olarak degerlendirilmesini ve bu cercevede ortak yonetisim arayislarini gundeme getirmistir. Bu kapsamda Turkiye'nin ulusal sinirlarindaki, sinir otesi ekonomik, sosyal, idari ve mekansal etkilesim seviyelerinin, merkezi duzeyde elde edilebilen nesnel gostergeler yardimiyla olculmesi, etkilesimin gorece yuksek seviyede oldugu sinir bolgelerinde sosyo mekansal etkilesim bicimlerinin ag analiz yontemleri ile tanimlanmasi ve sinir bolgelerinin ozgun nitelikleri baglaminda ortak yonetisim cercevesi gelistirilmistir. Bu calisma ile; sinir bolgeleri icin ulusal duzeyde Cok Degiskenli Sinir Gecirgenlik Endeksi (CDSGE) gelistirilmistir. Yapilan ag analizleri, sinir bolgelerindeki merkezi yerlesimlerinin ag karakterlerine bagli olarak farklilastigini, sinir bolgelerinin etki alaninin literaturde ve uygulamadaki mesafelerin otesine gecebildigini, sinirin mekansal etki alani disinda bulunan ulusal duzeyde merkezi ozellik tasiyan yerlesmelerin de sinir otesi ile onemli derecelerde iliskilerinin oldugunu ortaya cikarmistir. Yonetisim boyutunda ise, kirilganlik ve komsu devletler arasindaki hassas dengelerin sinir bolgeleri icin kuvvetli bir yonetisim cercevesinin surdurulmesine olanak saglayamadigi ve bu cercevede sinir bolgeleri icin \"ortak yonetisim araligi\" kavraminin onemi vurgulanmistir.", "venue": "", "year": 2020.0, "author_names": ["Emrah Soylemez", "Cigdem Varol"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 219733339, "title": "Kulturun Isletme Yonetimine Yansimalari Turkiye ve Almanya Ornekleri", "abstract": "Derleme calismasi olarak nitelendirilebilecek olan bu calismada, toplumsal kulturun isletme yonetimine nasil yansidigi ele alinmaya calisilacaktir. Bilindigi uzere yonetim; insani bir kavram ve olgudur. Yonetilenlerin ve yonetenlerin sosyal varlik alani icinde kisilik ve kulturel ozelliklerine gore gelisip gerceklesir. Insani tanima ve etkileme sanati olarak tanimlanabilecek olan yonetimi, hedef kitlesi olan bireylerin ozelliklerine ve kulturune hakim olmadan ideal manada uygulayabilmek mumkun degildir. 
Bircok arastirma ortaya koymustur ki; Orgutler sosyal bir sistem olarak dusunulmelidir yani is gorenlerin davranislari uzerinde kurallardan ve yonetim ilkelerinde daha ziyade bu sistem yani kulturel ortam etki etmektedir. Dolayisiyla yonetim kulture gore sekillenir denilebilir. Bu baglamda bu calismada ilk once Turkiye ve Almanya'nin kulturel yapilari ele alinacak daha sonra bu yapinin o ulkede var olan Isletme yonetimlerine ne derece etki ettigi karsilastirmali olarak ortaya konmaya calisilacaktir.", "venue": "", "year": 2020.0, "author_names": ["Halim Kadri Ozturk"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "seed priming drought stress", "session_id": 885209292555310, "user_id": 7152364208963975, "candidates": [{"corpus_id": 219657513, "title": "Effect of Hormonal Seed Priming on Germination, Growth, Yield and Biomass Allocation in Soybean Grown under Induced Drought Stress", "abstract": "Seed priming has potential to improve seedling development and plant growth under environmental stress. In this study, seeds of soybean cultivar LS678 and TGx1835 10E were pretreated with an optimum level of benzyladenine (4.87 mgL 1) before sowing into pots containing pasteurised mixture of vermiculite and sand. Plants were grown up to V3 stage before exposure to moderate and severe drought stress. According to the results, germination was rapid in hydroprimed seeds than BA primed seeds, which took longer to emerge. However, growth, yield and biomass of BA primed plants were increased (number of branches per plant 7.32, flowering 87.6% 100 seed weight 22.6 g, overall biomass fraction >40.5% compared to plants developed from hydroprimed seeds (number of branches 3.61, seed weight 19.2 g, biomass <12% under similar growth conditions. This study indicated that, hormonal seed priming with BA reasonably enhanced soybean growth, particularly root biomass, flowering and fruiting. These effects further suggest that BA may play a significant role in improving drought tolerance in soybean.", "venue": "", "year": 2020.0, "author_names": ["Phetole Mangena"], "n_citations": 4, "n_key_citations": 1, "score": 0}, {"corpus_id": 229193547, "title": "Effect of Seed priming on Osmolytes Accumulation and Antioxidant Enzymes under Drought stress in Wheat", "abstract": "Drought is one of the most important abiotic stresses that negatively influences plant growth and development. It is an important yield decreasing factor in winter wheat (Triticum aestivum L. Osmotic stress is also one of the consequences of drought stress. Polyethylene glycol (PEG) is most commonly used to create osmotic stress in plants. Among various strategies, seed priming is low cost, easy and low risk approach to improve the abiotic stress tolerance in crop plants. The present research work was carried out with a view to evaluate the effect of some seed priming methods viz. Hydropriming, Halopriming, Osmopriming and Hormonal priming for 12 hours at room temperature on accumulation of osmolytes (proline and glycine betaine) and activity of antioxidant enzymes (SOD and POX) under PEG 6000 induced osmotic stress at two level (2% and 5% stress level) Determination of Proline content and Glycine betaine, Activity of Antioxidant enzymes viz. Superoxide dismutase (SOD) and Peroxidase (POX) were analyzed in primed and non primed wheat seedlings. 
It was found that all seed priming method except Hormonal priming (50 ppm Salicylic acid) failed to germinate and grown but other three priming methods were resulted in better improvement in accumulation of osmolytes and antioxidant enzymes activity than Non primed wheat seedlings. Amongst all seed priming methods Osmopriming with Ascorbic acid (2 mM) was showed best results at both stress level (2% and 5% PEG 6000 stress level.", "venue": "", "year": 2020.0, "author_names": ["Siddhant Gahininath Jaybhaye"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 229627911, "title": "Evaluation of seed priming with nutrients on some germination characteristics of quinoa (chenopodium quinoa willd) under drought stress", "abstract": "To evaluate seed priming with nutrients on some germination characteristics of quinoa (chenopodiumquinoa willd) under drought stress, factorial experiment with completely randomized design with four", "venue": "", "year": 2020.0, "author_names": ["Nasibe Pakbaz", "Heshmat Omidi", "Hasanali Naghdibadi", "Amir Bostani"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 229684097, "title": "Seed Priming with Endophytic Bacillus subtilis Modulates Physiological Responses of Two Different Triticum aestivum L. Cultivars under Drought Stress", "abstract": "The protective effects against drought stress of the endophytic bacterium Bacillus subtilis 10 4 were measured by studying the priming response in two wheat (Triticum aestivum L. Ekada70 (E70) and Salavat Yulaev (SY) lines, tolerant and susceptible to drought, respectively. B. subtilis 10 4 improved germination and growth parameters under normal conditions in both cultivars with the most pronounced effect observed in cv. E70. Under drought conditions, B. subtilis 10 4 significantly ameliorated the negative impact of stress on germination and growth of cv. E70, but had no protective effect on cv. SY. B. subtilis 10 4 induced an increase in the levels of photosynthetic chlorophyll (Chl) a, Chl b, and carotenoids (Car) in the leaves of cv. E70, both under normal and drought conditions. In cv. SY plants, bacterial inoculation decreased the contents of Chl a, Chl b, and Car under normal conditions, but pigment content were almost recovered under drought stress. B. subtilis 10 4 increased water holding capacity (WHC) of cv. E70 (but did not affect this parameter in cv. SY) and prevented the stress induced decline in WHC in both cultivars. Notably, B. subtilis 10 4 increased endogenous salicylic acid (SA) concentration in both cultivars, especially in cv. E70. Moreover, B. subtilis 10 4 reduced drought induced endogenous SA accumulation, which was correlated with the influence of endophyte on growth, indicating a possible involvement of endogenous SA in the implementation of B. subtilis mediated effects in both cultivars. Overall, B. subtilis 10 4 inoculation was found to increase drought tolerance in seedlings of both cultivars, as evidenced by decreased lipid peroxidation, proline content, and electrolyte leakage from tissues of wheat seedlings primed with B. 
subtilis 10 4 under drought conditions.", "venue": "Plants", "year": 2020.0, "author_names": ["Oksana Lastochkina", "Darya Garshina", "Sergey Ivanov", "Ruslan Yuldashev", "Regina Khafizova", "Ch R Allagulova", "Kristina Fedorova", "Azamat Avalbaev", "Dilara Maslennikova", "Massimo Bosacchi"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 201220072, "title": "Seed priming with melatonin coping drought stress in rapeseed by regulating reactive oxygen species detoxification: Antioxidant defense system, osmotic adjustment, stomatal traits and chloroplast ultrastructure perseveration", "abstract": "Abstract Water stress encumbers the rapeseed growth and development primarily by oxidative alteration to biological membranes and interrupted tissue water status. Seedling tolerance to water stress is vital for better crop development through the entire period under drought stress condition. The enhancement of plant stress tolerance due to melatonin application is already known; however, the specific underlying mechanisms are still not fully understood. The current study was done to investigate the effect of melatonin priming on rapeseed germination and subsequent seedling growth under PEG 6000 simulated drought stress. Drought stress significantly inhibited germination and seedling growth, led to oxidative stress from excessive H2O2 generation, and reduced the chlorophyll content. Melatonin priming alleviated the oxidative damage and the amount of chlorophyll reduction in rapeseed seedlings under the drought stress. The stomatal traits such as number, stomatal length and width were greatly improved in melatonin primed treatment. Melatonin priming also preserved the chloroplast structure, maintained cells expansion and strengthened the cell wall in response to drought stress. The results revealed that the ameliorative effects of melatonin priming may be ascribed to enhancement of antioxidant enzymes activities, increase in the content of non enzymatic antioxidants, and increase in the amounts of osmo protectants. The present data suggest that priming seeds with melatonin can be a prime strategy regarding the development of crop drought resistant in crop production.", "venue": "Industrial Crops and Products", "year": 2019.0, "author_names": ["Mohammad Nauman Khan", "Jinsong Zhang", "Tao Luo", "Jiahuan Liu", "Muhammad Rizwan", "Shah Fahad", "Zhenghua Xu", "Liyong Hu"], "n_citations": 35, "n_key_citations": 2, "score": 0}, {"corpus_id": 226570562, "title": "Chitosan seed priming improves yield and recall defence memory under drought stress in wheat (Triticum aestivum L.", "abstract": "", "venue": "", "year": 2020.0, "author_names": ["Amjad Hameed", "Tahir Farooq", "Munir Ahmad Sheikh"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 219126696, "title": "Effect of Stevia (Stevia rebaudiana) Seed Priming Treatments with Salicylic Acid, Iron, and Zinc on Some Germination Traits and Photosynthetic Pigments under Drought Stress", "abstract": "stwy (Stevia rebaudiana Bert. gyhy `lfy w chndslh mt`lq bh khnwdh Astraceae st. glykhwzydhy stwywl mwjwd dr yn gyh 300 250 br shyryn!tr z skhrz hstnd w bwjwd mzh shyryn, jdhb bdn nmy!shwnd.", "venue": "", "year": 2020.0, "author_names": ["Ali Gorzi", "Heshmat Omidi", "Abdolamir Bostani"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 216355108, "title": "Effect of Seed Priming with Nanosilicon on Morpho Physiological Characterestics, Quercetin Content and Antioxidant Capacity in Calendula officinalis L. 
under Drought Stress Conditions", "abstract": "", "venue": "", "year": 2020.0, "author_names": ["Saeedeh Rahimi", "Mehrnaz Hatami", "Mansour Ghorbanpour"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 216326411, "title": "Seed priming with hydrogen peroxide alleviates the effects of drought stress in rice (Oryza sativa L. seedlings", "abstract": "Drought stress is a major factor limiting crop growth and yield. Hydrogen peroxide (H2O2) is known as a signalling molecule in the plant cell in which activates multiple physiological changes that play essential roles in tolerance mechanism. This study investigated the effects of seed priming with H2O2 on growth, some physiological characteristics and antioxidant enzyme activities in rice seedling under drought stress. Rice (Oryza sativa L. cv. 'Khao Dawk Mali 105' seeds were primed with 0 (distilled water) 1, 5, 10, and 15 mM H2O2 and grown for 21 days. The seedlings were subjected to drought stress by withholding water for 7 days. The results showed that priming with low concentrations of H2O2 improved plant growth and biomass as well as relative water content, malondialdehyde content, electrolyte leakage. Priming with H2O2, however, had no beneficial effect on chlorophyll content, proline and leaf total soluble sugar. Seed priming with appropriate levels of H2O2 also enhanced antioxidant enzyme activities including superoxide dismutase (SOD) catalase (CAT) ascorbate peroxidase (APX) and guaiacol peroxidase (GPX) It is concluded that seed priming with 210 mM H2O2, is beneficial for enhancing drought tolerance in rice seedling by increasing antioxidant capacity, which in turn reduces oxidative stress and damages to the cellular components.", "venue": "", "year": 2020.0, "author_names": ["Weeraphorn Jira-anunkul", "Wattana Pattanagul"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 202017891, "title": "Effects of mycorrhizal symbiosis and seed priming on yield and water use efficiency of sesame under drought stress condition", "abstract": "Abstract In order to study the effect of drought stress, seed priming, and mycorrhizal symbiosis on yield, harvest index, and water use efficiency (WUE) of sesame, an experiment was carried out based on the randomized complete block design as split factorial arrangement with three replications at Agricultural Research Station of Haji Abad, Hormozgan Iran, in 2014 and 2015. The results showed that grain yield, biological yield and harvest index were reduced by drought stress. On the other hands, WUE of grain, biological yield and oil product traits increased by drought stress. The highest efficiency of grain yield water usage, biological yield and oil product (0.39, 1.6, and 0.199 kg m 3, respectively) were obtained in the severe drought stress conditions. The results also showed that the highest grain yield, biological yield, and harvest index were related to the normal irrigation. In compared to the appropriate irrigation conditions, moderate to severe water stress reduced grain yield and biological yield rate by 39.4 and 26.7% respectively. 
Mycorrhizal symbiosis improved all traits compared to the control group (no mycorrhiza) Inoculation with Funneliformis and Rhizoglomus fungi in optimum irrigation and stress conditions also improved traits compared to the control group.", "venue": "Scientia Horticulturae", "year": 2019.0, "author_names": ["Abdolhossein Askari", "Mohammad Reza Mollahoseini Ardakani", "Farzad Paknejad", "Yaaghoob Hosseini"], "n_citations": 7, "n_key_citations": 0, "score": 0}]} -{"query": "Semantic parsing via staged query graph generation: Question answering with knowledge base", "session_id": 3034709218327783, "user_id": 2942522160968728, "candidates": [{"corpus_id": 18309765, "title": "Semantic Parsing via Staged Query Graph Generation: Question Answering with Knowledge Base", "abstract": "We propose a novel semantic parsing framework for question answering using a knowledge base. We define a query graph that resembles subgraphs of the knowledge base and can be directly mapped to a logical form. Semantic parsing is reduced to query graph generation, formulated as a staged search problem. Unlike traditional approaches, our method leverages the knowledge base in an early stage to prune the search space and thus simplifies the semantic matching problem. By applying an advanced entity linking system and a deep convolutional neural network model that matches questions and predicate sequences, our system outperforms previous methods substantially, and achieves an F1 measure of 52.5% on the WEBQUESTIONS dataset.", "venue": "ACL", "year": 2015.0, "author_names": ["Wen-tau Yih", "Ming-Wei Chang", "Xiaodong He", "Jianfeng Gao"], "n_citations": 498, "n_key_citations": 80, "score": 1}, {"corpus_id": 181879352, "title": "ComQA: Question Answering Over Knowledge Base via Semantic Matching", "abstract": "Question answering over knowledge base (KBQA) is a powerful tool to extract answers from graph like knowledge bases. Here, we present ComQA a three phase KBQA framework by which end users can ask complex questions and get answers in a natural way. In ComQA, a complex question is decomposed into several triple patterns. Then, ComQA retrieves candidate subgraphs matching the triple patterns from the knowledge base and evaluates the semantic similarity between the subgraphs and the triple patterns to find the answer. It is a long standing problem to evaluate the semantic similarity between the question and the heterogeneous subgraph containing the answer. To handle this problem, first, a semantic based extension method is proposed to identify entities and relations in the question while considering the underlying knowledge base. The precision of identifying entities and relations determines the correctness of successive steps. Second, by exploiting the syntactic pattern in the question, ComQA constructs the query graphs for natural language questions so that it can filter out topology mismatch subgraphs and narrow down the search space in knowledge bases. Finally, by incorporating the information from the underlying knowledge base, we fine tune general word vectors, making them more specific to ranking possible answers in KBQA task. 
Extensive experiments over a series of QALD challenges confirm that the performance of ComQA is comparable to those state of the art approaches with respect to precision, recall, and F1 score.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Hai Jin", "Yi Luo", "Chenjing Gao", "Xunzhu Tang", "Pingpeng Yuan"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 53079601, "title": "Knowledge Base Question Answering via Encoding of Complex Query Graphs", "abstract": "Answering complex questions that involve multiple entities and multiple relations using a standard knowledge base is an open and challenging task. Most existing KBQA approaches focus on simpler questions and do not work very well on complex questions because they were not able to simultaneously represent the question and the corresponding complex query structure. In this work, we encode such complex query structure into a uniform vector representation, and thus successfully capture the interactions between individual semantic components within a complex question. This approach consistently outperforms existing methods on complex questions while staying competitive on simple questions.", "venue": "EMNLP", "year": 2018.0, "author_names": ["Kangqi Luo", "Fengli Lin", "Xusheng Luo", "Kenny Q Zhu"], "n_citations": 46, "n_key_citations": 12, "score": 0}, {"corpus_id": 51974493, "title": "Modeling Semantics with Gated Graph Neural Networks for Knowledge Base Question Answering", "abstract": "The most approaches to Knowledge Base Question Answering are based on semantic parsing. In this paper, we address the problem of learning vector representations for complex semantic parses that consist of multiple entities and relations. Previous work largely focused on selecting the correct semantic relations for a question and disregarded the structure of the semantic parse: the connections between entities and the directions of the relations. We propose to use Gated Graph Neural Networks to encode the graph structure of the semantic parse. We show on two data sets that the graph networks outperform all baseline models that do not explicitly model the structure. The error analysis confirms that our approach can successfully process complex semantic parses.", "venue": "COLING", "year": 2018.0, "author_names": ["Daniil Sorokin", "Iryna Gurevych"], "n_citations": 47, "n_key_citations": 5, "score": 0}, {"corpus_id": 6296015, "title": "Natural Language Supported Relation Matching for Question Answering with Knowledge Graphs", "abstract": "This work focuses on the relation matching problem in knowledge based question answering systems. Finding the right relation a natural question asks is a key step in current knowledge based question answering systems, while also being the most difficult one, because of the mismatch between natural language question and formal relation type definitions. In this paper, we present two approaches to tackle this problem. The first approach tries to directly learn the soft match between the question and the relations from the training data using neural networks. The second approach enriches the relation name with natural language support sentences generated from Wikipedia, which provide additional matches with the question. Experiments on the WebQuestions dataset demonstrate that both of our approaches improve the relation matching accuracy of a prior state of the art. 
Our further analysis reveals the high quality of support sentences and suggests the rich potential of support sentences in question answering and semantic parsing tasks.", "venue": "KG4IR@SIGIR", "year": 2017.0, "author_names": ["Hongyu Li", "Chenyan Xiong", "Jamie Callan"], "n_citations": 8, "n_key_citations": 0, "score": 0}, {"corpus_id": 2725912, "title": "Open Domain Question Answering via Semantic Enrichment", "abstract": "Most recent question answering (QA) systems query large scale knowledge bases (KBs) to answer a question, after parsing and transforming natural language questions to KBs executable forms (e.g. logical forms) As a well known fact, KBs are far from complete, so that information required to answer questions may not always exist in KBs. In this paper, we develop a new QA system that mines answers directly from the Web, and meanwhile employs KBs as a significant auxiliary to further boost the QA performance. Specifically, to the best of our knowledge, we make the first attempt to link answer candidates to entities in Freebase, during answer candidate generation. Several remarkable advantages follow: (1) Redundancy among answer candidates is automatically reduced. (2) The types of an answer candidate can be effortlessly determined by those of its corresponding entity in Freebase. (3) Capitalizing on the rich information about entities in Freebase, we can develop semantic features for each answer candidate after linking them to Freebase. Particularly, we construct answer type related features with two novel probabilistic models, which directly evaluate the appropriateness of an answer candidate's types under a given question. Overall, such semantic features turn out to play significant roles in determining the true answers from the large answer candidate pool. The experimental results show that across two testing datasets, our QA system achieves an 18%~54% improvement under F_1 metric, compared with various existing QA systems.", "venue": "WWW", "year": 2015.0, "author_names": ["Huan Sun", "Hao Ma", "Wen-tau Yih", "Chen-Tse Tsai", "Jingjing Liu", "Ming-Wei Chang"], "n_citations": 103, "n_key_citations": 5, "score": 0}, {"corpus_id": 1336493, "title": "Semantic Parsing via Paraphrasing", "abstract": "A central challenge in semantic parsing is handling the myriad ways in which knowledge base predicates can be expressed. Traditionally, semantic parsers are trained primarily from text paired with knowledge base information. Our goal is to exploit the much larger amounts of raw text not tied to any knowledge base. In this paper, we turn semantic parsing on its head. Given an input utterance, we first use a simple method to deterministically generate a set of candidate logical forms with a canonical realization in natural language for each. Then, we use a paraphrase model to choose the realization that best paraphrases the input, and output the corresponding logical form. We present two simple paraphrase models, an association model and a vector space model, and train them jointly from question answer pairs. Our system PARASEMPRE improves stateof the art accuracies on two recently released question answering datasets.", "venue": "ACL", "year": 2014.0, "author_names": ["Jonathan Berant", "Percy Liang"], "n_citations": 432, "n_key_citations": 40, "score": 0}, {"corpus_id": 57574360, "title": "Neural architecture for question answering using a knowledge graph and web corpus", "abstract": "In Web search, entity seeking queries often trigger a special question answering (QA) system. 
It may use a parser to interpret the question to a structured query, execute that on a knowledge graph (KG) and return direct entity responses. QA systems based on precise parsing tend to be brittle: minor syntax variations may dramatically change the response. Moreover, KG coverage is patchy. At the other extreme, a large corpus may provide broader coverage, but in an unstructured, unreliable form. We present AQQUCN, a QA system that gracefully combines KG and corpus evidence. AQQUCN accepts a broad spectrum of query syntax, between well formed questions to short \"telegraphic\" keyword sequences. In the face of inherent query ambiguities, AQQUCN aggregates signals from KGs and large corpora to directly rank KG entities, rather than commit to one semantic interpretation of the query. AQQUCN models the ideal interpretation as an unobservable or latent variable. Interpretations and candidate entity responses are scored as pairs, by combining signals from multiple convolutional networks that operate collectively on the query, KG and corpus. On four public query workloads, amounting to over 8000 queries with diverse query syntax, we see 5 16% absolute improvement in mean average precision (MAP) compared to the entity ranking performance of recent systems. Our system is also competitive at entity set retrieval, almost doubling F1 scores for challenging short queries.", "venue": "Information Retrieval Journal", "year": 2018.0, "author_names": ["Uma Sawant", "S Garg", "Soumen Chakrabarti", "Ganesh Ramakrishnan"], "n_citations": 24, "n_key_citations": 2, "score": 0}, {"corpus_id": 70349933, "title": "Bidirectional Attentive Memory Networks for Question Answering over Knowledge Bases", "abstract": "When answering natural language questions over knowledge bases (KBs) different question components and KB aspects play different roles. However, most existing embedding based methods for knowledge base question answering (KBQA) ignore the subtle inter relationships between the question and the KB (e.g. entity types, relation paths and context) In this work, we propose to directly model the two way flow of interactions between the questions and the KB via a novel Bidirectional Attentive Memory Network, called BAMnet. Requiring no external resources and only very few hand crafted features, on the WebQuestions benchmark, our method significantly outperforms existing information retrieval based methods, and remains competitive with (hand crafted) semantic parsing based methods. Also, since we use attention mechanisms, our method offers better interpretability compared to other baselines.", "venue": "NAACL", "year": 2019.0, "author_names": ["Yu Chen", "Lingfei Wu", "Mohammed J Zaki"], "n_citations": 53, "n_key_citations": 8, "score": 0}, {"corpus_id": 748402, "title": "Automated Template Generation for Question Answering over Knowledge Graphs", "abstract": "Templates are an important asset for question answering over knowledge graphs, simplifying the semantic parsing of input utterances and generating structured queries for interpretable answers. State of the art methods rely on hand crafted templates with limited coverage. This paper presents QUINT, a system that automatically learns utterance query templates solely from user questions paired with their answers. Additionally, QUINT is able to harness language compositionality for answering complex questions without having any templates for the entire question. 
Experiments with different benchmarks demonstrate the high quality of QUINT.", "venue": "WWW", "year": 2017.0, "author_names": ["Abdalghani Abujabal", "Mohamed Yahya", "Mirek Riedewald", "Gerhard Weikum"], "n_citations": 127, "n_key_citations": 21, "score": 0}]} -{"query": "automatic speech recognition", "session_id": 2236914976087435, "user_id": 1691761848722094, "candidates": [{"corpus_id": 121321299, "title": "SpecAugment: A Simple Data Augmentation Method for Automatic Speech Recognition", "abstract": "We present SpecAugment, a simple data augmentation method for speech recognition. SpecAugment is applied directly to the feature inputs of a neural network (i.e. filter bank coefficients) The augmentation policy consists of warping the features, masking blocks of frequency channels, and masking blocks of time steps. We apply SpecAugment on Listen, Attend and Spell networks for end to end speech recognition tasks. We achieve state of the art performance on the LibriSpeech 960h and Swichboard 300h tasks, outperforming all prior work. On LibriSpeech, we achieve 6.8% WER on test other without the use of a language model, and 5.8% WER with shallow fusion with a language model. This compares to the previous state of the art hybrid system of 7.5% WER. For Switchboard, we achieve 7.2%/14.6% on the Switchboard/CallHome portion of the Hub5'00 test set without the use of a language model, and 6.8%/14.1% with shallow fusion, which compares to the previous state of the art hybrid system at 8.3%/17.3% WER.", "venue": "INTERSPEECH", "year": 2019.0, "author_names": ["Daniel S Park", "William Chan", "Yu Zhang", "Chung-Cheng Chiu", "Barret Zoph", "Ekin Dogus Cubuk", "Quoc V Le"], "n_citations": 1055, "n_key_citations": 207, "score": 1}, {"corpus_id": 204838028, "title": "Quartznet: Deep Automatic Speech Recognition with 1D Time Channel Separable Convolutions", "abstract": "We propose a new end to end neural acoustic model for automatic speech recognition. The model is composed of multiple blocks with residual connections between them. Each block consists of one or more modules with 1D time channel separable convolutional layers, batch normalization, and ReLU layers. It is trained with CTC loss. The proposed network achieves near state of the art accuracy on LibriSpeech and Wall Street Journal, while having fewer parameters than all competing models. We also demonstrate that this model can be effectively fine tuned on new datasets.", "venue": "ICASSP 2020 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "year": 2020.0, "author_names": ["Samuel Kriman", "Stanislav Beliaev", "Boris Ginsburg", "Jocelyn Huang", "Oleksii Kuchaiev", "Vitaly Lavrukhin", "Ryan Leary", "Jason Li", "Yang Zhang"], "n_citations": 87, "n_key_citations": 22, "score": 0}, {"corpus_id": 210064350, "title": "Streaming Automatic Speech Recognition with the Transformer Model", "abstract": "Encoder decoder based sequence to sequence models have demonstrated state of the art results in end to end automatic speech recognition (ASR) Recently, the transformer architecture, which uses self attention to model temporal context information, has been shown to achieve significantly lower word error rates (WERs) compared to recurrent neural network (RNN) based system architectures. Despite its success, the practical usage is limited to offline ASR tasks, since encoder decoder architectures typically require an entire speech utterance as input. 
In this work, we propose a transformer based end to end ASR system for streaming ASR, where an output must be generated shortly after each spoken word. To achieve this, we apply time restricted self attention for the encoder and triggered attention for the encoder decoder attention mechanism. Our proposed streaming transformer architecture achieves 2.8% and 7.3% WER for the \"clean\" and \"other\" test data of LibriSpeech, which to our knowledge is the best published streaming end to end ASR result for this task.", "venue": "ICASSP 2020 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "year": 2020.0, "author_names": ["Niko Moritz", "Takaaki Hori", "Jonathan Le Roux"], "n_citations": 70, "n_key_citations": 13, "score": 0}, {"corpus_id": 218684988, "title": "Improved Noisy Student Training for Automatic Speech Recognition", "abstract": "Recently, a semi supervised learning method known as \"noisy student training\" has been shown to improve image classification performance of deep networks significantly. Noisy student training is an iterative self training method that leverages augmentation to improve network performance. In this work, we adapt and improve noisy student training for automatic speech recognition, employing (adaptive) SpecAugment as the augmentation method. We find effective methods to filter, balance and augment the data generated in between self training iterations. By doing so, we are able to obtain word error rates (WERs) 4.2%/8.6% on the clean/noisy LibriSpeech test sets by only using the clean 100h subset of LibriSpeech as the supervised set and the rest (860h) as the unlabeled set. Furthermore, we are able to achieve WERs 1.7%/3.4% on the clean/noisy LibriSpeech test sets by using the unlab 60k subset of LibriLight as the unlabeled set for LibriSpeech 960h. We are thus able to improve upon the previous state of the art clean/noisy test WERs achieved on LibriSpeech 100h (4.74%/12.20% and LibriSpeech (1.9%/4.1%", "venue": "INTERSPEECH", "year": 2020.0, "author_names": ["Daniel S Park", "Yu Zhang", "Ye Jia", "Wei Han", "Chung-Cheng Chiu", "Bo Li", "Yonghui Wu", "Quoc V Le"], "n_citations": 74, "n_key_citations": 14, "score": 0}, {"corpus_id": 208316402, "title": "Automatic Speech Recognition", "abstract": "", "venue": "Speech to Speech Translation", "year": 2020.0, "author_names": ["Xugang Lu", "Sheng Li", "Masakiyo Fujimoto"], "n_citations": 27, "n_key_citations": 1, "score": 0}, {"corpus_id": 52040758, "title": "Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding", "abstract": "Voice interfaces are becoming accepted widely as input methods for a diverse set of devices. This development is driven by rapid improvements in automatic speech recognition (ASR) which now performs on par with human listening in many tasks. These improvements base on an ongoing evolution of DNNs as the computational core of ASR. However, recent research results show that DNNs are vulnerable to adversarial perturbations, which allow attackers to force the transcription into a malicious output. In this paper, we introduce a new type of adversarial examples based on psychoacoustic hiding. Our attack exploits the characteristics of DNN based ASR systems, where we extend the original analysis procedure by an additional backpropagation step. We use this backpropagation to learn the degrees of freedom for the adversarial perturbation of the input signal, i.e. 
we apply a psychoacoustic model and manipulate the acoustic signal below the thresholds of human perception. To further minimize the perceptibility of the perturbations, we use forced alignment to find the best fitting temporal alignment between the original audio sample and the malicious target transcription. These extensions allow us to embed an arbitrary audio input with a malicious voice command that is then transcribed by the ASR system, with the audio signal remaining barely distinguishable from the original signal. In an experimental evaluation, we attack the state of the art speech recognition system Kaldi and determine the best performing parameter and analysis setup for different types of input. Our results show that we are successful in up to 98% of cases with a computational effort of fewer than two minutes for a ten second audio file. Based on user studies, we found that none of our target transcriptions were audible to human listeners, who still understand the original speech content with unchanged accuracy.", "venue": "NDSS", "year": 2019.0, "author_names": ["Lea Schonherr", "Katharina Siobhan Kohls", "Steffen Zeiler", "Thorsten Holz", "Dorothea Kolossa"], "n_citations": 123, "n_key_citations": 10, "score": 0}, {"corpus_id": 85502769, "title": "Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition", "abstract": "Adversarial examples are inputs to machine learning models designed by an adversary to cause an incorrect output. So far, adversarial examples have been studied most extensively in the image domain. In this domain, adversarial examples can be constructed by imperceptibly modifying images to cause misclassification, and are practical in the physical world. In contrast, current targeted adversarial examples applied to speech recognition systems have neither of these properties: humans can easily identify the adversarial perturbations, and they are not effective when played over the air. This paper makes advances on both of these fronts. First, we develop effectively imperceptible audio adversarial examples (verified through a human study) by leveraging the psychoacoustic principle of auditory masking, while retaining 100% targeted success rate on arbitrary full sentence targets. Next, we make progress towards physical world over the air audio adversarial examples by constructing perturbations which remain effective even after applying realistic simulated environmental distortions.", "venue": "ICML", "year": 2019.0, "author_names": ["Yao Qin", "Nicholas Carlini", "Ian J Goodfellow", "G Cottrell", "Colin Raffel"], "n_citations": 146, "n_key_citations": 22, "score": 0}, {"corpus_id": 5127812, "title": "Automatic speech segmentation in syllable centric speech recognition system", "abstract": "Speech recognition is the process of understanding the human or natural language speech by a computer. A syllable centric speech recognition system in this aspect identifies the syllable boundaries in the input speech and converts it into the respective written scripts or text units. Appropriate segmentation of the acoustic speech signal into syllabic units is an important task for development of highly accurate speech recognition system. This paper presents an automatic syllable based segmentation technique for segmenting continuous speech signals in Indian languages at syllable boundaries. 
To analyze the performance of the proposed technique, a set of experiments are carried out on different speech samples in three Indian languages Hindi, Bengali and Odia and are compared with the existing group delay based segmentation technique along with the manual segmentation technique. The results of all our experiments show the effectiveness of the proposed technique in segmenting the syllable units from the original speech samples compared to the existing techniques.", "venue": "Int. J. Speech Technol.", "year": 2016.0, "author_names": ["Soumya Priyadarsini Panda", "Ajit Kumar Nayak"], "n_citations": 31, "n_key_citations": 1, "score": 0}, {"corpus_id": 155099853, "title": "Almost Unsupervised Text to Speech and Automatic Speech Recognition", "abstract": "Text to speech (TTS) and automatic speech recognition (ASR) are two dual tasks in speech processing and both achieve impressive performance thanks to the recent advance in deep learning and large amount of aligned speech and text data. However, the lack of aligned data poses a major practical problem for TTS and ASR on low resource languages. In this paper, by leveraging the dual nature of the two tasks, we propose an almost unsupervised learning method that only leverages few hundreds of paired data and extra unpaired data for TTS and ASR. Our method consists of the following components: (1) a denoising auto encoder, which reconstructs speech and text sequences respectively to develop the capability of language modeling both in speech and text domain; (2) dual transformation, where the TTS model transforms the text $y$ into speech \\hat{x} and the ASR model leverages the transformed pair \\hat{x},y) for training, and vice versa, to boost the accuracy of the two tasks; (3) bidirectional sequence modeling, which addresses error propagation especially in the long speech and text sequence when training with few paired data; (4) a unified model structure, which combines all the above components for TTS and ASR based on Transformer model. Our method achieves 99.84% in terms of word level intelligible rate and 2.68 MOS for TTS, and 11.7% PER for ASR on LJSpeech dataset, by leveraging only 200 paired speech and text data (about 20 minutes audio) together with extra unpaired speech and text data.", "venue": "ICML", "year": 2019.0, "author_names": ["Yi Ren", "Xu Tan", "Tao Qin", "Sheng Zhao", "Zhou Zhao", "Tie-Yan Liu"], "n_citations": 45, "n_key_citations": 7, "score": 0}, {"corpus_id": 85542081, "title": "End to end acoustic modeling using convolutional neural networks for HMM based automatic speech recognition", "abstract": "Abstract In hidden Markov model (HMM) based automatic speech recognition (ASR) system, modeling the statistical relationship between the acoustic speech signal and the HMM states that represent linguistically motivated subword units such as phonemes is a crucial step. This is typically achieved by first extracting acoustic features from the speech signal based on prior knowledge such as, speech perception or/and speech production knowledge, and, then training a classifier such as artificial neural networks (ANN) Gaussian mixture model that estimates the emission probabilities of the HMM states. This paper investigates an end to end acoustic modeling approach using convolutional neural networks (CNNs) where the CNN takes as input raw speech signal and estimates the HMM states class conditional probabilities at the output. Alternately, as opposed to a divide and conquer strategy (i.e. 
separating feature extraction and statistical modeling steps) in the proposed acoustic modeling approach the relevant features and the classifier are jointly learned from the raw speech signal. Through ASR studies and analyses on multiple languages and multiple tasks, we show that: (a) the proposed approach yields consistently a better system with fewer parameters when compared to the conventional approach of cepstral feature extraction followed by ANN training, (b) unlike conventional method of speech processing, in the proposed approach the relevant feature representations are learned by first processing the input raw speech at the sub segmental level 2 ms) Specifically, through an analysis we show that the filters in the first convolution layer automatically learn \"in parts\" formant like information present in the sub segmental speech, and (c) the intermediate feature representations obtained by subsequent filtering of the first convolution layer output are more discriminative compared to standard cepstral features and could be transferred across languages and domains.", "venue": "Speech Commun.", "year": 2019.0, "author_names": ["Dimitri Palaz", "Mathew Magimai -Doss", "Ronan Collobert"], "n_citations": 53, "n_key_citations": 6, "score": 0}]} -{"query": "Single Phase Voltage Source Inverter Photovoltaic Application", "session_id": 4551829136229260, "user_id": 3145554453496674, "candidates": [{"corpus_id": 114722373, "title": "Single Phase Voltage Source Inverter Photovoltaic Application", "abstract": "Photovoltaic applications have been developing and spreading rapidly in recent times. This paper describes the control strategy of the Voltage Source Inverter that is the important tail end of many photovoltaic applications.In order to supply the grid with a sinusoidal line current without harmonic distortion, the inverter is connected to the supply network via a L C L filter. The output current is controlled by the hysteresis controller. To improve the behaviors of the L C L filter, active damping of the filter is being used. This paper discusses controller design and simulation results.", "venue": "", "year": 2010.0, "author_names": ["Josef Georg Bauer"], "n_citations": 23, "n_key_citations": 2, "score": 1}, {"corpus_id": 14278578, "title": "A review of single phase grid connected inverters for photovoltaic modules", "abstract": "This review focuses on inverter technologies for connecting photovoltaic (PV) modules to a single phase grid. The inverters are categorized into four classifications: 1) the number of power processing stages in cascade; 2) the type of power decoupling between the PV module(s) and the single phase grid; 3) whether they utilizes a transformer (either line or high frequency) or not; and 4) the type of grid connected power stage. Various inverter topologies are presented, compared, and evaluated against demands, lifetime, component ratings, and cost. 
Finally, some of the topologies are pointed out as the best candidates for either single PV module or multiple PV module applications.", "venue": "IEEE Transactions on Industry Applications", "year": 2005.0, "author_names": ["S B Kjaer", "John Kim Pedersen", "Frede Blaabjerg"], "n_citations": 3470, "n_key_citations": 151, "score": 0}, {"corpus_id": 60685930, "title": "A multilevel voltage source inverter with separate DC sources for static VAr generation", "abstract": "A new multilevel voltage source inverter with separate DC sources is proposed for high voltage, high power applications, such as flexible AC transmission systems (FACTS) including static VAr generation (SVG) power line conditioning, series compensation, phase shifting, voltage balancing, fuel cell and photovoltaic utility systems interfacing, etc. The new M level inverter consists of (M 1)/2 single phase full bridges in which each bridge has its own separate DC source. This inverter can generate almost sinusoidal waveform voltage with only one time switching per cycle as the number of levels increases. It can solve the problems of conventional transformer based multipulse inverters and the problems of the multilevel diode clamped inverter and the multilevel living capacitor inverter. To demonstrate the superiority of the new inverter, a SVG system using the new inverter topology is discussed through analysis, simulation and experiment.", "venue": "IAS '95. Conference Record of the 1995 IEEE Industry Applications Conference Thirtieth IAS Annual Meeting", "year": 1995.0, "author_names": ["Fang Zheng Peng", "Jih-Sheng Jason Lai", "John W McKeever", "James Vancoevering"], "n_citations": 1071, "n_key_citations": 41, "score": 0}, {"corpus_id": 108473474, "title": "Multi level conversion: high voltage choppers and voltage source inverters", "abstract": "The authors discuss high voltage power conversion. Conventional series connection and three level voltage source inverter techniques are reviewed and compared. A novel versatile multilevel commutation cell is introduced: it is shown that this topology is safer and more simple to control, and delivers purer output waveforms. The authors show how this technique can be applied to either choppers or voltage source inverters and generalized to any number of switches.", "venue": "PESC '92 Record. 23rd Annual IEEE Power Electronics Specialists Conference", "year": 1992.0, "author_names": ["Thierry A Meynard", "Henri Foch"], "n_citations": 1236, "n_key_citations": 37, "score": 0}, {"corpus_id": 44572515, "title": "Four quasi Z Source inverters", "abstract": "In this paper, theoretical results are shown for several novel inverters. These inverters are similar to the Z source inverters presented in previous works, but have several advantages, including in some combination; lower component ratings, reduced source stress, reduced component count and simplified control strategies. Like the Z source inverter, these inverters are particularly suited for applications which require a large range of gain, such as in motor controllers or renewable energy. Simulation and experimental results are shown for one topology to verify the analysis. 
Also, a back to back inverter system featuring bidirectionality on both inverters, as well as secondary energy storage with only a single additional switch, is shown.", "venue": "2008 IEEE Power Electronics Specialists Conference", "year": 2008.0, "author_names": ["Joel E Anderson", "Fang Zheng Peng"], "n_citations": 924, "n_key_citations": 51, "score": 0}, {"corpus_id": 12053603, "title": "Topologies of single phase inverters for small distributed power generators: an overview", "abstract": "This paper presents an overview of single phase inverters developed for small distributed power generators. The functions of inverters in distributed power generation (DG) systems include dc ac conversion, output power quality assurance, various protection mechanisms, and system controls. Unique requirements for small distributed power generation systems include low cost, high efficiency and tolerance for an extremely wide range of input voltage variations. These requirements have driven the inverter development toward simpler topologies and structures, lower component counts, and tighter modular design. Both single stage and multiple stage inverters have been developed for power conversion in DG systems. Single stage inverters offer simple structure and low cost, but suffer from a limited range of input voltage variations and are often characterized by compromised system performance. On the other hand, multiple stage inverters accept a wide range of input voltage variations, but suffer from high cost, complicated structure and low efficiency. Various circuit topologies are presented, compared, and evaluated against the requirements of power decoupling and dual grounding, the capabilities for grid connected or/and stand alone operations, and specific DG applications in this paper, along with the identification of recent development trends of single phase inverters for distributed power generators.", "venue": "IEEE Transactions on Power Electronics", "year": 2004.0, "author_names": ["", "Josep Bordonau", "T Shimizu"], "n_citations": 899, "n_key_citations": 20, "score": 0}, {"corpus_id": 113863, "title": "Multilevel Voltage Source Converter Topologies for Industrial Medium Voltage Drives", "abstract": "This paper presents a technology review of voltage source converter topologies for industrial medium voltage drives. In this highly active area, different converter topologies and circuits have found their application in the market. This paper covers the high power voltage source inverter and the most used multilevel inverter topologies, including the neutral point clamped, cascaded H bridge, and flying capacitor converters. This paper presents the operating principle of each topology and a review of the most relevant modulation methods, focused mainly on those used by industry. In addition, the latest advances and future trends of the technology are discussed. 
It is concluded that the topology and modulation method selection are closely related to each particular application, leaving a space on the market for all the different solutions, depending on their unique features and limitations like power or voltage level, dynamic performance, reliability, costs, and other technical specifications.", "venue": "IEEE Transactions on Industrial Electronics", "year": 2007.0, "author_names": ["Jose R Rodriguez", "Steffen Bernet", "Bin Wu", "Jorge Pontt", "Samir Kouro"], "n_citations": 2101, "n_key_citations": 67, "score": 0}, {"corpus_id": 63720604, "title": "Z source inverter", "abstract": "This paper presents an impedance source (or impedance fed) power converter (abbreviated as Z source converter) and its control method for implementing DC to AC, AC to DC, AC to AC, and DC to DC power conversion. The Z source converter employs a unique impedance network (or circuit) to couple the converter main circuit to the power source, thus providing unique features that cannot be obtained in the traditional voltage source (or voltage fed) and current source (or current fed) converters where a capacitor and inductor are used, respectively. The Z source converter overcomes the conceptual and theoretical barriers and limitations of the traditional voltage source converter (abbreviated as V source converter) and current source converter (abbreviated as I source converter) and provides a novel power conversion concept. The Z source concept can be applied to all DC to AC, AC to DC, AC to AC, and DC to DC power conversion. To describe the operating principle and control, this paper focuses on an example: a Z source inverter for DC AC power conversion needed in fuel cell applications. Simulation and experimental results are presented to demonstrate the new features.", "venue": "", "year": 2002.0, "author_names": ["Fang Zheng Peng"], "n_citations": 2181, "n_key_citations": 240, "score": 0}, {"corpus_id": 110585148, "title": "Implementation of three level hysteresis current control for a single phase voltage source inverter", "abstract": "Single phase hysteresis current controllers have traditionally been implemented using two level modulation which is known to be inferior to three level modulation. This paper presents a simple, low cost, and effective technique to allow single phase hysteresis current regulation to be implemented as three level modulation process. This achieves a substantial reduction in the magnitude and variation of the switching frequency, thus improving efficiency, while retaining all of the advantages identified with hysteresis current control. The operation and control of the inverter are described, together with simulation and experimental results.", "venue": "2000 IEEE 31st Annual Power Electronics Specialists Conference. Conference Proceedings (Cat. No.00CH37018)", "year": 2000.0, "author_names": ["G H Bode", "Donald Grahame Holmes"], "n_citations": 95, "n_key_citations": 10, "score": 0}, {"corpus_id": 110734173, "title": "A voltage and frequency droop control method for parallel inverters", "abstract": "In this paper, a new control method for the parallel operation of one or several inverters in an island grid or the mains is described. Frequency and voltage control, including mitigation of voltage harmonics, are achieved without the need for any common control circuitry or communication between the inverters. 
Each inverter supplies a current that is the result of the voltage difference between a reference AC voltage source and the grid voltage across a virtual impedance with real and/or imaginary parts. The reference AC voltage source is synchronised with the grid, with a phase shift, depending on the difference between nominal and real grid frequency. A detailed analysis show that this approach has superior behaviour in comparison with the existing droop control methods, considering the mitigation of voltage harmonics, short circuit behaviour and, in the case of a non negligible line resistance, the 'efficient' control of frequency and voltage. Experiments show the behaviour of the method for an inverter feeding a highly distorted load and during the connection of two parallel inverters in operation.", "venue": "2004 IEEE 35th Annual Power Electronics Specialists Conference (IEEE Cat. No.04CH37551)", "year": 2004.0, "author_names": ["K De Brabandere", "Bruno Bolsens", "Jeroen van den Keybus", "Achim Woyte", "Johan Driesen", "Ronnie J M Belmans", "K U Leuven"], "n_citations": 830, "n_key_citations": 38, "score": 0}]} -{"query": "Soil Water Content Distributions between Two Emitters of a Subsurface Drip Irrigation System", "session_id": 6395472751904210, "user_id": 4175542289537562, "candidates": [{"corpus_id": 129708361, "title": "Soil Water Content Distributions between Two Emitters of a Subsurface Drip Irrigation System", "abstract": "Subsurface drip irrigation (SDI) systems are increasingly being used in agriculture in attempts to use the available water more efficiently. The proper design and management of SDI systems requires knowledge of precise distribution of water around emitters. We conducted both field and numerical experiments to evaluate the soil water content distributions between two neighboring emitters when their wetting patterns started to overlap. The experiments involved SDI systems with emitters installed at different depths and with different spacings along the drip lateral. The HYDRUS software package was used to analyze the field data, assuming modeling approaches in which emitters were represented as (i) a point source in an axisymmetrical two dimensional domain, (ii) a line source in a planar two dimensional domain, or (iii) a point source in a fully three dimensional domain. Results indicated that SDI systems can be accurately described using an axisymmetrical two dimensional model only before wetting patterns start to overlap, and a planar two dimensional model only after full merging of the wetting fronts from neighboring emitters. A fully three dimensional model appears to be required for describing subsurface drip irrigation processes in their entirety.", "venue": "", "year": 2011.0, "author_names": ["Maziar M Kandelous", "Jirka Simunek", "M Th van Genuchten", "Keyvan Malek"], "n_citations": 75, "n_key_citations": 3, "score": 1}, {"corpus_id": 67049028, "title": "Soil Water Content Distributions between Two Emitters of a Subsurface Drip Irrigation System Soil", "abstract": "SSSAJ: Volume 75: Number 2 March April 2011 Soil Sci. Soc. Am. J. 75:488 497 Posted online 3 Jan. 2011 doi:10.2136/sssaj2010.0181 Received 26 Apr. 2010. *Corresponding author (mkandelous@ucdavis.edu; mkandelous@gmail.com) (c) Soil Science Society of America, 5585 Guilford Rd. Madison WI 53711 USA All rights reserved. 
No part of this periodical may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Permission for printing and for reprinting the material contained herein has been obtained by the publisher. Soil Water Content Distributions between Two Emitters of a Subsurface Drip Irrigation System Soil Physics", "venue": "", "year": 2011.0, "author_names": ["Maziar M Kandelous"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 224882121, "title": "Variation in the flow rate of drip emitters in a subsurface irrigation system for different soil types", "abstract": "Abstract Several studies have shown that irrigation is essential for global agricultural development. However, water is a limited resource and should be used as efficiently as possible, which requires appropriate management. As such, the search for irrigation techniques that are more efficient in terms of water use, such as subsurface drip irrigation, is ongoing. Subsurface drip irrigation systems are highly efficient and can serve as suitable alternatives for the rational management of water. However, these systems also have limitations; specifically, variation in flow rate can occur depending on the soil characteristics. Subsurface drip irrigation systems covered by only a thin soil layer have been used, especially in irrigated coffee crops in Brazil; however, most related studies have investigated the variation in the flow rate at relatively great soil depths. Thus, the objective of the present study was to evaluate two emitters buried at a depth of 5 cm to determine the variation in the flow rate within four different soil types, and assess the wet bulb. The evaluated soil types were classified as a sandy loam, silty loam, clay loam or clay, and the two emitters evaluated included a pressure compensating drip emitter (PC) and a non pressure compensating model (NPC) With respect to the PC emitter, a flow rate reduction was detected only in the clay loam soil, but with respect to the NPC emitter, a reduction in the flow rate was detected in a sandy loam and clay loam. The flow rate varied even at shallow depths for some soils, and the soil type and emitter flow rate affected this variation, as well as the water distribution in the wet bulb. Thus, this variation should be considered even for systems installed at shallow depths.", "venue": "", "year": 2021.0, "author_names": ["Virgilio Henrique Barros Nogueira", "Adriano Valentim Diotto", "Michael Silveira Thebaldi", "Alberto Colombo", "Yasmin Fernandes Silva", "Elvis Marcio de Castro Lima", "Gabriel Felipe Lima Resende"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 223618420, "title": "Irrigation Water Use Efficiency and Yield of Pistachio under Aerated Subsurface Drip Irrigation System", "abstract": "Improving yield and Irrigation Water Use Efficiency (IWUE) is important for pistachio cultivation in Iran. Subsurface Drip Irrigation (SDI) is a novel irrigation system that is used by pistachio farmers. Oxygen deficiency could occur in the soil under crops irrigated by SDI, especially in heavy clay soils, due to creation of sustained wetting fronts around emitters. The focus of this work was to evaluate aerated water irrigation (oxygation) under SDI to overcome hypoxia in saline loam silt soil environments on 15 years old pistachio trees in desert climate. 
Two treatments including F3 (irrigation frequency once every 3 days without air injection) and F3 oxygation (irrigation frequency once every 3 days by air injection) were investigated in two years. The injection of 18% by volume air into irrigation water by SDI resulted in a beneficial effect on yield and IWUE in the second year of experiment; with yield values of 4.9 ton ha for F3 oxygation against 4.4 ton ha 1 for F3; and IWUE values of 4.2 kg ha 1 mm for F3 oxygation against 3.7 kg ha 1 mm for F3. Decreases in yield and IWUE in the F3 oxygation in comparison with F3 were 33.3 and 28.2% in the first year, respectively; but yield and IWUE due to F3 oxygation were 11.1 and 13.5% greater in the second year compared to F3, respectively. At the end of the irrigation season, the nitrogen content of the nuts removal in F3, and F3 oxygation were 1.9 and 2.1% in the first year and 1.4 and 1.6% in the second year, respectively. The leaf K and nut Fe concentrations in F3 oxygation were about 24 and 44% more than F3, respectively. Leaf area was larger in aeration treatment compared with the control. Overall, these results indicate positive effects of oxygated SDI system on pistachio trees and merit further investigation for improving yield and IWUE.", "venue": "", "year": 2020.0, "author_names": ["Azadeh Seifi", "M Mirlatifi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 109306565, "title": "Water storage in the soil profile under subsurface drip irrigation: Evaluating two installation depths of emitters and two water qualities", "abstract": "Knowledge about soil moisture is essential to maximize irrigation efficiency because it allows the application of water in the proper quantity and at the proper time, thus improving water management. The objective of this study was to evaluate water storage in the soil profile when using a subsurface drip irrigation system at two emitter installation depths (0.20 or 0.40m) and two water qualities (treated sewage effluent (TSE) and freshwater) in two crop cycles of sugarcane (Saccharum officinarum L. in Campinas SP (Brazil) The experiment was conducted in the experimental area of FEAGRI UNICAMP, Campinas SP, Brazil, adopting a randomized block design (RBD) in a factorial 2x2+1 with 3 replications. The factors studied included the installation of dripper tube at two depths (0.2 and 0.4m) and two qualities of water (TSE and freshwater) plus a non irrigation control. The TDR (time domain reflectometry) technique was used to evaluate the moisture in the soil profile by installing five probes with rods at 0.2m up to 1.0m depth. Replacement of the calibration equation provided by the TDR reduced the water depth between the first ratoon and the sugarcane plant and reduced the excess humidity from 0.029 and 0.045cm3 to 0.002 and 0.007cm3 when the drippers were installed at 0.2m depth (T2 and T4) The installation of a 0.2m drip tube proved to be an ideal solution for both environmental management and water use efficiency when using treated sewage effluent. No effect on the water distribution in the soil was observed when comparing the water qualities. 
For management of subsurface drip irrigation by the water balance in the soil, different layers in the soil profile should be considered to calculate the water depth, using the depth of the drip tube installation as a reference.", "venue": "", "year": 2016.0, "author_names": ["Leonardo Nazario Silva dos Santos", "Edson Eiji Matsura", "Ivo Z Goncalves", "Eduardo Augusto Agnellos Barbosa", "Aline Azevedo Nazario", "N F Tuta", "Marcelo Leite Conde Elaiuy", "D R C Feitosa", "Allan Charlles Mendes de Sousa"], "n_citations": 18, "n_key_citations": 1, "score": 0}, {"corpus_id": 14558006, "title": "EFFECT OF SUBSURFACE DRIP IRRIGATION SYSTEM DEPTH ON SOIL WATER CONTENT DISTRIBUTION AT DIFFERENT DEPTHS AND DIFFERENT TIMES AFTER IRRIGATION", "abstract": "To properly manage subsurface drip irrigation (SDI) and increase the efficiency of the water use while reducing water losses due to evaporation, the precise distribution of water around the emitters must be known. In this paper, we'll evaluate how different irrigation depths applied with SDI affected the redistribution of soil moisture in the semiarid climate of Tunisia. Data shows that with suitable management, SDI at 35cm depth can achieve higher efficiency rates with limited water to maximize yields. The objective of this work was to evaluate soil moisture distribution bef ore and after irrigation in an experiment carried out in the Higher Institute of Agronomy of Chott Meriem under subsurface drip irrigation. The results show that soil moisture is relatively more stable for subsurface drip irrigation buried at 35 cm (T3) than those buried at 5 (T1) and 20cm (T2) with a slight difference except of water 's contributions. There was greater increase in volumetric soil water content for T3 than for T1 and T2 with statistically significant increases", "venue": "", "year": 2013.0, "author_names": ["Boutheina Douh", "Abdelhamid Boujelben", "Sami Bhouri Khila", "Amel Mguidiche"], "n_citations": 26, "n_key_citations": 0, "score": 0}, {"corpus_id": 135077513, "title": "Assessing Hydrus 2D Model to Investigate the Effects of Different On Farm Irrigation Strategies on Potato Crop under Subsurface Drip Irrigation", "abstract": "The objective of this paper was to assess the performance of Hydrus 2D model to simulate the effects of different on farm irrigation strategies applied on potato crop. The ability of the model to simulate the stress coefficient (Ks) obtained as the ratio between actual and maximum transpiration, and to define the productive function of potato crop under the semi arid conditions of central Tunisia were also evaluated. Experiments were carried out on potato crop under full (FI) and deficit irrigation (DI) and two different water qualities supplied by means of a subsurface drip irrigation system. Results evidenced that the model, despite some discrepancies locally observed, can fairly accurately predict soil water contents and electrical conductivity around buried emitters. Furthermore, under water and salt stress conditions, \"measured\" Ks, based on crop water stress index (CWSI) obtained on thermal images, resulted in a good correlation with the corresponding estimated by the model (R2 0.8) The database collected during the three growth seasons also allowed the definition of the crop productive function represented by a linear relationship between the relative yield loss and Ks. 
This function represents a useful guidelines for the sustainable use of irrigation water in countries characterized by a semi arid climate and a limited availability of water for irrigation.", "venue": "Water", "year": 2019.0, "author_names": ["Hiba Ghazouani", "Giovanni Rallo", "Amel Mguidiche", "Basma Latrech", "Boutheina Douh", "Abdelhamid Boujelben", "Giuseppe Provenzano"], "n_citations": 8, "n_key_citations": 0, "score": 0}, {"corpus_id": 84837206, "title": "Assessing Hydrus 2 D Model to Investigate the Effects of Different On Farm Irrigation Strategies on Potato Crop under Subsurface Drip Irrigation", "abstract": "The objective of this paper was to assess the performance of Hydrus 2D model to simulate the effects of different on farm irrigation strategies applied on potato crop. The ability of the model to simulate the stress coefficient (Ks) obtained as the ratio between actual and maximum transpiration, and to define the productive function of potato crop under the semi arid conditions of central Tunisia were also evaluated. Experiments were carried out on potato crop under full (FI) and deficit irrigation (DI) and two different water qualities supplied by means of a subsurface drip irrigation system. Results evidenced that the model, despite some discrepancies locally observed, can fairly accurately predict soil water contents and electrical conductivity around buried emitters. Furthermore, under water and salt stress conditions, \"measured\" Ks, based on crop water stress index (CWSI) obtained on thermal images, resulted in a good correlation with the corresponding estimated by the model (R2 0.8) The database collected during the three growth seasons also allowed the definition of the crop productive function represented by a linear relationship between the relative yield loss and Ks. This function represents a useful guidelines for the sustainable use of irrigation water in countries characterized by a semi arid climate and a limited availability of water for irrigation.", "venue": "", "year": 2019.0, "author_names": ["Hiba Ghazouani", "Giovanni Rallo", "Amel Mguidiche", "Basma Latrech", "Boutheina Douh", "Abdelhamid Boujelben", "Giuseppe Provenzano"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 158964272, "title": "Effects of lateral spacing for drip irrigation and mulching on the distributions of soil water and nitrate, maize yield, and water use efficiency", "abstract": "In this study, a two year study in fields with and without a subsurface sand layer (identified as FSS and FNS) were conducted in the Hetao Irrigation District in Northwest China, to investigate the effects of irrigation lateral spacing and soil mulching on soil water and nitrate distribution uniformity, and the combined effects of soil water and nitrate distribution on crop yield and water use efficiency (WUE) of spring maize. The experiment followed a completely randomized block design with four treatments (S1M1, S1M2, S2M1, and S2M2) and three replicates for FSS and FNS, respectively. The four treatments resulted from the combination of two levels of lateral spacing (S1 for 1.0 m and S2 for 0.5 m) and two film covering modes (full mulching, M1; partial mulching, M2) In each treatment, the Christiansen uniformity coefficient (CU) was used to evaluate the uniformities of soil water (CUw) and nitrate (CUn) distribution in the vertical soil profile. 
The results showed that the narrower lateral spacing and full mulching enhanced CUn and relative chlorophyll content of leaf, compared with the wider lateral spacing and partial mulching. However, lateral spacing and mulching methods imposed no significant effect on CUw. The correlation between CUw and CUn was not significant under mulched drip irrigation system. In soils without a subsurface sand layer, crop yield may be greater with a higher CUn in root zone. Full film covering significantly enhanced CUn and then increased crop yields and WUE in FNS, however the combined effects of lateral spacing and mulching methods on grain yield were not significantly different. Thus, taking into account crop yields, WUE, and cost of drip laterals, the combination of wider irrigation lateral spacing and partial mulching was recommended for the FSS soil, while the combination of wider irrigation lateral spacing and full mulching for the FNS soil.", "venue": "", "year": 2018.0, "author_names": ["Lifeng Zhou", "Jianqiang He", "Zhijuan Qi", "Miles Dyck", "Yufeng Zou", "Tibin Zhang", "Hao Feng"], "n_citations": 14, "n_key_citations": 0, "score": 0}, {"corpus_id": 221763797, "title": "MAXIMIZING WATER USE EFFICIENCY WITH SUBSURFACE DRIP IRRIGATION SYSTEM", "abstract": "Accuracy of water application allows reducing average irrigation rate to a level that coincides with soil's hydraulic conductivity and minimizes percolation below the main root zone. Field experiment was conducted to confirm the efficiency of this approach, in a calcareous sandy clay loam soil. The source of irrigation water was ground shallow well. The treatments consisted of three irrigation systems (surface drip (T0) and subsurface drip (T15 and T30) and three levels of irrigation water application at 100, 80 and 60% of crop water requirements (T, 0.8T and 0.6T, respectively) 16 mm drip lines with 0.33 m GR emitter spacing were placed on the furrow ridge surface in the middle of alternative plant rows. Laterals with the same characteristics were buried at two depths (15 and 30 cm) in the subsurface drip irrigation (SDI) The obtained results indicated that the performance of the irrigation system was good throughout the cropping season. Values of statistical uniformity (SU) and distribution uniformity (DU) were 94.8% and 0.93, respectively. The moisture distribution in the soil monitored along plant growth stages indicated that SDI plots produced wider wetted patterns. Under scarce water, (0.8 T and 0.6T) the results demonstrated that SDI exceeded the surface drip irrigation in terms of potato yield and (IWUE) Maximum average yield (12.63 Mg/fed. was recorded with subsurface drip line buried at 15 cm depth (T15) The overall average yield of potato in the surface drip laterals (T0) declined by 26.9 and 25.1 compared with SDI (T15) and (T30) respectively. As the applied water decreased from 2209 to 1496.5 m/fed. by using SDI, the average values of IWUE under SDI were higher than those obtained by surface drip irrigation at any level of applied irrigation water treatments. Thus, in the case of saving 20% of irrigation water, (0.8T15) the highest IWUE value (8.913 kg/m) was obtained. Meanwhile, the lowest value of IWUE (4.178 kg/m) was obtained by surface drip irrigation with 100% water application amount level, (T0) This lowest value of IWUE may be reached to 50.5 and 51.7% compared with (T15) and (T30) respectively. 
In the same time, there was no significant effect for the level of water application on IWUE at (T) (0.8T) or (0.6T) treatments. Soil Cons. Dept. Desert Res. Center, Cairo, Egypt Misr J. Ag. Eng. 26(1) 132148 IRRIGATION AND DRAINAGE", "venue": "", "year": 2020.0, "author_names": ["Hosam A M Hiekal"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Acid rain and ozone depletion from Siberian Traps", "session_id": 5476173713947610, "user_id": 1521652717809782, "candidates": [{"corpus_id": 42223588, "title": "Acid rain and ozone depletion from pulsed Siberian Traps magmatism", "abstract": "The Siberian Traps flood basalts have been invoked as a trigger for the catastrophic end Permian mass extinction. Widespread aberrant plant remains across the Permian Triassic boundary provide evidence that atmospheric stress contributed to the collapse in terrestrial diversity. We used detailed estimates of magmatic degassing from the Siberian Traps to complete the first three dimensional global climate modeling of atmospheric chemistry during eruption of a large igneous province. Our results show that both strongly acidic rain and global ozone collapse are possible transient consequences of episodic pyroclastic volcanism and heating of volatile rich Siberian country rocks. We suggest that in conjunction with abrupt warming from greenhouse gas emissions, these repeated, rapidly applied atmospheric stresses directly linked Siberian magmatism to end Permian ecological failure on land. Our comprehensive modeling supplies the first picture of the global distribution and severity of acid rain and ozone depletion, providing testable predictions for the geography of end Permian environmental proxies.", "venue": "", "year": 2014.0, "author_names": ["Benjamin A Black", "Jean-Francois Lamarque", "Christine A Shields", "Linda T Elkins-Tanton", "Jeffrey T Kiehl"], "n_citations": 104, "n_key_citations": 6, "score": 1}, {"corpus_id": 230449404, "title": "Acid rain, ozone depletion, and the climate response to pulsed Siberian Traps magmatism", "abstract": "", "venue": "", "year": 2013.0, "author_names": ["Benjamin A Black", "Jean-Francois Lamarque", "Christine A Shields", "Linda T Elkins-Tanton", "Jeffrey T Kiehl"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 237111883, "title": "The shallow water Permian Triassic extinction record in western Tethys (Hungary and Turkey) evidence for ocean acidification or marine anoxia?", "abstract": "The Permian Triassic (PT) extinction was Earth's greatest ever biotic crisis. Despite controversy over the timing of losses, radio isotopic dating indicates that extensive damage was done to both terrestrial and marine ecosystems in a very brief interval around the PT boundary. Diverse proposed kill mechanisms include marine anoxia, ocean acidification, volcanic winter, hypercapnia, global warming, increased sediment influx, ozone destruction, acid rain, extreme atmospheric oxygen depletion, poisoning by toxic trace metals, and bolide impact. Apart from the last cause, these mechanisms are related to Siberian Traps volcanism.", "venue": "", "year": 2018.0, "author_names": ["David P G Bond", "Paul B Wignall"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213001275, "title": "The End Permian Mass Extinction", "abstract": "Abstract The end Permian mass extinction was triggered by geologically rapid carbon release from enormous flood basalt eruptions, the Siberian Traps. 
This perturbation to the carbon cycle led to a hyperthermal event: considerable temperature increases on land and in the oceans, expansion of oceanic oxygen minimum zones, heavy metal pollution, short term acid rain and disruption of the ozone layer, and intensification of the hydrologic cycle with shifts in rainfall patterns. If carbon release was sufficiently rapid, the oceans may have suffered a short lived acidification event. These environmental stresses caused a severe and rapid extinction in the oceans and probably also on land. In the oceans, extinctions occurred in a single pulse lasting", "venue": "", "year": 2019.0, "author_names": ["Matthew E Clapham"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 133833819, "title": "End Permian extinction amplified by plume induced release of recycled lithospheric volatiles", "abstract": "Magmatic volatile release to the atmosphere can lead to climatic changes and substantial environmental degradation including the production of acid rain, ocean acidification and ozone depletion, potentially resulting in the collapse of the biosphere. The largest recorded mass extinction in Earth's history occurred at the end of the Permian, coinciding with the emplacement of the Siberian large igneous province, suggesting that large scale magmatism is a key driver of global environmental change. However, the source and nature of volatiles in the Siberian large igneous province remain contentious. Here we present halogen compositions of sub continental lithospheric mantle xenoliths emplaced before and after the eruption of the Siberian flood basalts. We show that the Siberian lithosphere is massively enriched in halogens from the infiltration of subducted seawater derived volatiles and that a considerable amount (up to 70% of lithospheric halogens are assimilated into the plume and released to the atmosphere during emplacement. Plume lithosphere interaction is therefore a key process controlling the volatile content of large igneous provinces and thus the extent of environmental crises, leading to mass extinctions during their emplacement.Halogens in Siberian xenoliths show that plume lithosphere interaction controls the volatile content of large igneous provinces. The seawater derived volatiles, implicated in the end Permian mass extinction, infiltrated the lithosphere during subduction.", "venue": "Nature Geoscience", "year": 2018.0, "author_names": ["Michael W Broadley", "Peter H Barry", "Chris J Ballentine", "Lawrence A Taylor", "Ray Burgess"], "n_citations": 23, "n_key_citations": 0, "score": 0}, {"corpus_id": 129481689, "title": "Changing atmosphere: a global challenge.", "abstract": "In this book, a specialist in atmospheric research describes the causes of acid rain, ozone depletion and global warming and the evidence for each one's recent acceleration. He also provides practical and long range suggestions for controlling these and other forms of atmospheric deterioration. The author discusses how the emission of sulphur and nitrogen substances into the air leads to acid rain, how the release of chlorine bearing gases into the air causes destruction of ozone in the high atmosphere, and how the addition of infrared trapping gases to the atmosphere restricts the loss of radiation from the earth and thus leads to a heating of the climate. 
He argues that although it is almost impossible to bring the spread of chemicals into the air to a complete halt, steps to slow air pollution are technically feasible and in many cases economically beneficial. He describes these strategies, cautioning that they must be co ordinated with a larger goal of lessening the total impact human activities have on the earth. According to the author, we can work towards this goal by attempting to stabilize populations (in the developed as well as the developing world) protect forests, encourage the use of modern energy efficient technology in Third World countries (and the United States) and reduce poverty worldwide.", "venue": "", "year": 1990.0, "author_names": ["John W Firor"], "n_citations": 35, "n_key_citations": 1, "score": 0}, {"corpus_id": 94300681, "title": "Matrix Isolation Spectroscopy in Atmospheric Chemistry", "abstract": "There is a growing concern about our environment and it has been realized that quick and definite measures are required if we are to preserve our planet for posterity. The rising level of greenhouse gases in the atmosphere and its consequent effect on the global climatic conditions, the depletion of stratospheric ozone, acid rain, and photochemical smog are some of the issues that have been addressed by scientists and policy makers the world over. Solutions to most environmental problems can be obtained only through collective efforts and, to ensure that such efforts are effective, it is necessary that policies and legislation are made based on scientific data on our ever changing environment. The data must provide information on the nature of trace constituents present in the different layers of the atmosphere, their concentrations, and their chemistry. A variety of experimental techniques have been used for this purpose, such as fluorescence spectroscopy, infrared (IR) and ultraviolet/visible (UV/VIS) absorption, Raman scattering, and photoacoustic spectroscopy. All these techniques exhibit some combination of the following features: multielement detection, low detection limits (DLs) specificity for an unequivocal identification of species, accuracy, and precision. One of the experimental techniques that has found important applications in the study of the atmosphere is matrix isolation (MI) spectroscopy. In this technique, the molecules of interest are diluted in a large excess of an inert gas and sprayed onto a cold substrate held at 10 K. Under these conditions, the molecules are trapped, isolated from each other and only surrounded by inert gas atoms. Such cold, isolated molecules yield spectra that have narrow spectral widths. Furthermore, if the trapped species are reactive molecules or radicals, they are likely to have extended lifetimes in the inert cage, thus allowing their chemistry and spectroscopy to be studied at leisure. These features of MI have made it particularly useful in the study of atmospheric chemistry. The high resolution capability enables one to use this technique as an analytical tool, as closely lying spectral features of different molecules can now be resolved a feature that has been employed to identify and estimate trace constituents in the upper tropospheric and stratospheric samples. 
In conjunction with gas chromatography (GC) MI and Fourier transform infrared (FTIR) referred to as GC/MI/FTIR, offers a powerful tool with which to resolve isomeric forms of environmentally hazardous chemicals such as polycyclic aromatic hydrocarbons (PAHs) dioxins, and polychlorinated biphenyls (PCBs) Such isomeric resolution is essential, as it is well known that only certain isomeric forms are biologically active, whereas the others are not. In this respect, GC/MI/FTIR even scores over GC mass spectroscopy (MS) Analytical spectroscopy of matrix isolated species is also done using fluorescence spectroscopy, where again the advantage of high resolution enables one to resolve isomeric forms of compounds. Where the species of interest is a free radical, electron spin resonance (ESR) is the technique of choice to study the trapped species. This is a particularly powerful tool, as free radicals are known to play an important role in a number of atmospheric processes. Another aspect of MI spectroscopy is its ability to study reaction intermediates, a feature that has been employed to study the reaction mechanisms of atmospherically important chemical and photochemical reactions. All these aspects of the technique are discussed in this article, using examples.", "venue": "", "year": 2006.0, "author_names": ["K S Viswanathan", "K Sankaran", "K Sundararajan"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 100205822, "title": "Heterogeneous processing of 13NO2 at zero concentration by monodisperse carbon aerosols", "abstract": "The heterogeneous chemical processing of atmospheric gases by both natural and anthropogenic aerosols plays a key role in the regional as well as global environment. The oxides of nitrogen in the presence of soot present a particularly interesting and relevant topic covering a wide range of such diverse phenomena as acid rain and stratospheric ozone depletion. Detailed investigations of such systems is difficult due to low aerosol and gas species concentrations and, to date, most studies have investigated the chemistry using bulk samples. Nitrogen dioxide is known to be the most important reactive species in this system, proceeding as NO2 (C) → (NO2·C) → NO (O·C). In our current study, we have used 13N (T1/2 = 9.96 min) radioisotope labeling techniques to investigate the uptake and chemical conversion of NO2 in the presence of monodisperse carbon aerosols under real atmospheric conditions, which represents a significant improvement over earlier studies in our lab. 13N was produced using 14 MeV protons from the PSI Philips cyclotron and a gas target of 2% O2 in He for the reaction 16O(p,α)13N. The resulting 13NOy were reduced to 13NO over molybdenum and subsequently oxidized to 13NO2 over CrO3. Carbon aerosol was generated by spark discharge between graphite rods in argon. Monodisperse size cuts were selected with a differential mobility analyzer operated with synthetic air. The NO2 and aerosol streams were admixed and passed through a reaction volume for a reaction time of 10 s. 
A series of selective traps and one filter were used to separate products and reactants: (1) triethanolamine (TEA) denuder to remove unreacted gas phase NO2, (2) TEA impregnated glass fiber filter to remove aerosol fraction and NO2 released after uptake, and (3) CoxOy trap to remove all residual NOx.", "venue": "", "year": 1995.0, "author_names": ["Kevin D Tabor", "Markus Kalberer", "Y Parrat"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 144277208, "title": "Seventh grade students' conceptions of global warming and climate change, Environmental Education Research (2009) 15, no. 5: 549-570 DOI: 10.1080/13504620903114592", "abstract": "The purpose of this study was to investigate seventh grade students' conceptions of global warming and climate change. The study was descriptive in nature and involved the collection of qualitative data from 91 seventh grade students from three different schools in the Midwest, USA. These data were analyzed for content in an inductive manner to identify students' concepts. The categories that emerged from the students' responses reflected different degrees of sophistication or conceptualization about global warming and climate change. Based on these findings we make curricular recommendations that build on the students' conceptions and the NRC (1996) science education standards. Introduction As human activities continue to add greenhouse gases carbon dioxide, methane, and nitrous oxides to the Earth's atmosphere, global temperatures are expected to rise, causing the Earth's climates to change. These climate changes may affect precipitation patterns, severe and extreme weather events, and over time environmental systems. Furthermore, human health and agriculture may be sensitive to climate change. The Intergovernmental Panel on Climate Change (IPCC) has concluded that global warming is inevitable and that human activity is likely to be the main cause. The National Research Council's Grand Challenges in Environmental Sciences (NRC, 2000) identified eight \"grand challenges,\" four of which are directly linked to climate and climate change. Thus, it is vital that students learn about global warming and climate change. Teaching about global warming and climate change is essential for developing well rounded students, and for overcoming a critical deficiency in atmospheric science and climatology curricula (Serafin et al. 1991) Furthermore, teaching about global warming and climate change provides a natural context for studying science through personal and social applications. An understanding that is essential if future citizens are to assume responsibility for the management and policymaking decisions facing our planet (Brown, 1992; Bybee, 1993) Therefore, if science education is to promote a citizenry that is knowledgeable about global warming and climate change it is essential to determine what students' conceptions are about global warming and climate change (Osborne Freyberg, 1985) in order to plan curriculum and design instruction that builds on students' conceptions (Driver et al. 1994) The purpose of this study was to investigate seventh grade students' conceptions about global warming and climate change, add to the extant literature base on students' geoscience and environmental science learning, and provide guidance to curricular development. 
We selected seventh grade as this is the grade level at which students begin to learn about global climate, weather, and related phenomena (e.g. hydrologic cycle) developing an understanding of the Earth as a system: the interrelationship among the physical, chemical, and biological processes that shape and change the Earth (NRC, 1996) Specifically, the research question guiding this study was: What are seventh grade students' conceptions of global warming and climate change? Based on these findings we make curricular recommendations that build on students' conceptions and the National Research Council (NRC) science education standards (NRC, 1996) Furthermore our study expands on past research, providing a historical perspective on students' conceptions as well as providing new insight into students' conceptions about the potential environmental impact of global warming and climate change. Background We reviewed 14 international studies published between 1993 and 2005 that investigated secondary students' conceptions of global warming and climate change. Because of the limited number of studies that specifically investigated seventh grade students' conceptions of global warming and climate change; we expanded our literature review to secondary students, grades 6-12. Instead of individually describing each of the studies, we report our interpretation and categorization of the findings in tabular form (Tables 1-4) We grouped the findings from these studies into four themes and 20 categories. The four themes are: conceptions about global warming, the greenhouse effect (Table 1) causes of global warming and climate change (Table 2) environmental impact of global warming and climate change (Table 3) and resolutions (Table 4) Within each theme we identified categories that reflect the students' conceptions. For each category we identify specific findings that make up the category along with the authors. Table 1. Secondary students' conceptions about global warming, the greenhouse effect Categories Findings and Author No Distinction between the kinds of radiation No distinction among \"ultraviolet rays,\" \"solar rays,\" and \"thermal rays or heat (coming from the sun)\" (Koulaidis Christidou, 1999) No distinction between \"heat rays\" and \"ultra violet rays\" (Boyes Stanisstreet, 1997; Fisher, 1998; Osterlind, 2005) No distinction among \"heat rays,\" \"ultra violet rays,\" \"heat or high ambient temperature\" (Boyes Stanisstreet, 1998) The kind of radiation involved in the greenhouse effect Increased ultraviolet radiation, due to ozone depletion, results in global warming (Koulaidis Christidou, 1999; Boyes Stanisstreet, 1997) Heat or solar rays coming from sun is involved in the greenhouse effect, no concept of terrestrial radiation. (Koulaidis Christidou, 1999) The kinds of greenhouse gases Consider greenhouse gases as air pollutants and greenhouse effect as an environmental problem (Koulaidis Christidou, 1999) Do not consider CO2 as a greenhouse gas (Boyes et al. 1993; Boyes Stanisstreet, 1993; Boyes Stanisstreet, 1997; Pruneau et al. 2001) Do not consider water vapor as a greenhouse gas (Fisher, 1998) The position and distribution of greenhouse gases in the atmosphere Greenhouse gases form a thin layer in atmosphere surrounding the earth (Koulaidis Christidou, 1999; Pruneau et al. 
2003) Carbon dioxide forms a \"lid\" or \"skin\" over the earth (Andersson Wallin, 2000) Lots of gases form a \"roof\" over the earth (Andersson Wallin, 2000) The definition of greenhouse effect Do not know about the greenhouse effect (Andersson Wallin, 2000; Pruneau et al. 2001) No distinction between greenhouse effect and global warming (Andersson Wallin, 2000; Boyes et al. 1993) Erroneously involving the ozone layer in greenhouse effect Greenhouse effect is that solar rays, reflected by the earth surface, are trapped by ozone layer (Koulaidis Christidou, 1999) Sun's rays get trapped in the ozone (Boyes Stanisstreet, 1997; Pruneau et al. 2003) Table 2. Secondary students' conceptions about the causes of global warming and climate change Table 3. Secondary students' conceptions about the impact of global warming and climate change Categories Findings and Author Environmentally harmful actions Littering (Boyes Stanisstreet, 1993; Gowda et al. 1997) Using environmentally harmful products (Gowda et al. 1997) Pollution Acid rain (Boyes Stanisstreet, 1993; Boyes et al. 1993; Pruneau et al. 2001) Nuclear waste (Boyes Stanisstreet, 1993; Boyes et al. 1993) General air pollution (Andersson Wallin, 2000; Boyes Stanisstreet, 1997; Gowda et al. 1997) Chemicals, harmful and unnatural gases (Gowda et al. 1997) General pollution (Fisher, 1998; Gowda et al. 1997; Pruneau et al. 2001; Pruneau et al. 2003) Heat is trapped under a layer of dust created by pollution (Pruneau et al. 2001) Ozone hole Ozone depletion causes global warming (Boyes et al. 1993; Boyes Stanisstreet, 1993; Boyes Stanisstreet, 1998; Fisher, 1998; Gowda et al. 1997; Pruneau et al. 2001) Ozone hole allows more solar energy to reach the earth, causing global warming (Andersson Wallin, 2000; Boyes et al. 1999; Boyes Stanisstreet, 1994; Boyes Stanisstreet, 1997; Koulaidis Christidou, 1999; Osterlind, 2005; Pruneau et al. 2003; Rye et al. 1997) Cool air escapes into space through the ozone hole, causing the earth to warm (Boyes Stanisstreet, 1997) Change in solar irradiation Increased penetration of solar radiation (Boyes et al. 1993; Boyes Stanisstreet, 1993; Pruneau et al. 2003) Earth gets closer to the sun; sunrays hit more places on the Earth (Pruneau et al. 2003) Barrier of gases Barrier of gases bounces back the heat from the Earth and keeps it from leaving the Earth (Andersson Wallin, 2000) Categories Findings and Author No change in my life No consequence in my life (Pruneau et al. 2001; Pruneau et al. 2003) Over estimate global warming Much higher temperature estimations (Gowda et al. 1997) Skin cancer Causes skin cancer (Boyes et al. 1993; Boyes Stanisstreet, 1993; Boyes Stanisstreet, 1998; Pruneau et al. 2003) Do not understand regional variation Confusion over regional differences in that in some areas there will be more flooding and in other areas there will be more desertification (Boyes Stanisstreet, 1993) Depletion of ozone layer Greenhouse gases cause depletion of the ozone layer (Boyes et al. 1999; Boyes Stanisstreet, 1997; Boyes Stanisstreet, 1994; Gowda et al. 1997; Rye et al. 1997) General air pollution Greenhouse gases are air pollutants, increased greenhouse gases will cause air pollution (Koulaidis Christidou, 1999) Table 4. Secondary students' conceptions about resolving global warming and climate change Theoretical and Methodological Framework A constructivist perspective guided this study. 
Constructivism, as a research referent, aims to understand the meanings constructed by stud en s participating in context specific activities using language (Schwandt, 1994) Centra l", "venue": "", "year": 2012.0, "author_names": ["Daniel P Shepardson", "Dev Niyogi", "Soyoung Choi", "Umarporn Charusombat"], "n_citations": 19, "n_key_citations": 3, "score": 0}, {"corpus_id": 54907398, "title": "INTERDISCIPLINARY COOPERATION AND SYSTEM MODELLING AS MEANS TO GOVERN THE ANTHROPOCENE", "abstract": "The global development has now come to a critical state where humanity act as a new geological force and it is obvious that there are numerous of environmental problems which arise from the present geosphere biosphere anthroposphere interactions which urgently need to be addressed. This paper argues that systems analysis and modelling of environmental systems is one necessary part in successful governing of societies towards sustainability. In the 1960th many observations and data made it evident that the environment in most countries was in a bad state. To get a holistic view of the complex problems and to clarify the relationships of structure and function, systems thinking was applied e.g. modelling, cybernetics, systems analysis, life cycle assessment and energy and material flow analysis. Such tools used collectively, conceptualized as 'integrated assessment' can help to communicate fundamental knowledge, and to support decision making when identifying, developing and implementing precautionary measures and solutions. There are good examples demonstrating the strength of such approaches; Solutions to the ozone depletion by replacing CFC's with more chemically reactive compounds that are degraded within the troposphere. Acidification of European low buffer soils and lakes, sensitive to acid rain, has decreased due to concerted action on Sulphur emission control in large parts of Europe. The handling and recycling of solid waste has resulted in a considerable reduction of deposits in large parts of the world. This basically natural scientific knowledge has also influenced the development within e.g. economy and jurisprudence and today ecological economy and environmental law assume ecological systems as fundamental. The complexity of ecosystems and environmental issues can only be understood by use of advanced scientific tools such as modelling as a base for establishing interdisciplinary co operation. Each component of such models will of course be an approximation, but validation and verification of the models will serve to make them useful. An ongoing research project at Mid Sweden University aims at building a complete carbon and energy balance model of an entire Swedish region, based on the Danish Samso model. Such models will make it possible to refer to a robust scientific base, thereby making it easier to argue for appropriate measures and actions. At the same time it will be clear what data these actions rest upon thereby making it easier to identify possible errors or limitations. Systems analysis and subsequent modes are constructs. According to systems theory and model development they are strategies as the best representations of nature, we can make. At the same time it must be assured, that a continuous adaptation and improvement in a studied area is possible i.e. that model outcomes are matched with phenomenological observations and that empirical work also is carried out. Model development can therefore be characterized as a dynamic and iterative process. 
Governance in the Anthropocene must be based on an understanding of the problem picture at hand, and learning how to appropriately address increasingly complex issues. For identifying potential solutions and consequences of policy implementation, systems modelling on relevant levels will be one necessary tool. The current project developing an environmental regional model, illustrates how modelling can provide decision support for the county of Jamtland regarding management of energy resources and planning of future infrastructure, as well as serving regional and national information purposes.", "venue": "", "year": 2015.0, "author_names": ["Torbjorn Skytt", "Soren Nors Nielsen", "Erik Gronlund", "Fredrik Stahl", "Anders Jonsson", "Inga Carlman", "Morgan Froling"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "eftover? I am a victorious woman", "session_id": 5268852568990156, "user_id": 1887591913892718, "candidates": [{"corpus_id": 216401463, "title": "\"Leftover? I am a victorious woman!\" the potential for the emergence of a new womanhood", "abstract": "ABSTRACT In the last decade, there has been a significant rise in the number of well educated, highly paid, and independent unmarried women who have been officially defined as sheng nu \"leftover women\" in China. Despite the presence of the term in media and political discourses, the voices of these women are rarely heard. This study explores their lived experiences regarding the meaning of singlehood through 26 interviews with Chinese sheng nu. This study argues that they are positively challenging the dominant societal identity and creating an alternative idea of womanhood by valuing independence and connections with others. In the active transformation of their identities, in reflecting upon their own values, in making choices for meaningful relationships with others, and in resisting patriarchal formations of womanhood in interaction with others. Thereby, they demonstrate the potential for change.", "venue": "", "year": 2020.0, "author_names": ["Chao Zhang"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 165785742, "title": "\"Leftover? No! I Am a Victorious Woman\" Exploring the Identity of Sheng Nv in Contemporary China", "abstract": "", "venue": "", "year": 2018.0, "author_names": ["Chao Zhang"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 220888968, "title": "\"Now, I am Empowered. Now, I am a Woman With Spirit\" Evaluating CARE's Public Health Work Through a Community Organizing Framework in Sri Lanka and Bangladesh", "abstract": "Community based interventions are crucial to reducing health care disparities throughout the world. CARE, an international development nongovernmental organization (NGO) is a global leader in using a community based approach in public health. This qualitative study sought to understand the processes through which community organizing functions to effectively facilitate change and improve health among underserved populations in three programs in Sri Lanka and Bangladesh. Sixteen in depth interviews and two focus groups were conducted with NGO staff, partner organization staff, and community change agents. Programs are assessed through Ganz's community organizing model, which includes (a) leadership development, (b) storytelling strategies, and (c) team building. Our findings confirm existing literature showing that public health approaches can be augmented by using community organizing to develop local engagement. 
Results show that program success relates to developing community members' understanding of social inequality and its impact on society. Other important strategies include systems strengthening, political engagement, coalition building, and government outreach. Empowered communities were created through recruiting, activating, and investing in community members, their stories, and their collaborative potential, at least in the sites studied here. Collectively, these programs have begun to create empowered communities among some of the most marginalized people in Sri Lanka and Bangladesh.", "venue": "International quarterly of community health education", "year": 2020.0, "author_names": ["Andrew J Saxon", "Jessie VanNess Ford"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 149842271, "title": "\"What If I Am a Woman?\" Black Feminist Rhetorical Strategies of Intersectional Identification and Resistance in Maria Stewart's Texts", "abstract": "ABSTRACT In this essay, I argue that an analysis of Maria W. Stewart's rhetorical choices extends her legacy as an early proponent of the intersectionality of African American female identity. She uses casuistry as defined by Kenneth Burke, dissociation as articulated by Chaim Perelman and applied by Shirley Wilson Logan, and rearticulation as defined by Patricia Hill Collins to confirm herself as sacrificially American through consubstantiation, nobly African by history, and divinely feminine by God. She articulates a Black female consciousness that is empowered to move toward breaking the oppressive conditions of their triple consciousness. Her use of rearticulation to resolve the failures of respectability politics provides relevance for the use of African American feminist theories as a rhetorical technique.", "venue": "", "year": 2018.0, "author_names": ["Rhana A Gittens"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 6742075, "title": "\"I just feel like I am broken. I am the worst pregnant woman ever\" A qualitative exploration of the \"at odds\" experience of women's antenatal distress", "abstract": "ABSTRACT Advances in perinatal mental health research have provided valuable insights around risk factors for the overall development of maternal distress. However, there is still a limited understanding of the experience of women struggling emotionally during pregnancy. We explored how women view, experience, and interpret psychological distress antenatally. Eighteen Australian women participated in in depth interviews that were analyzed thematically within a critical realist theoretical framework. We present and situate the current findings within the dominant discourse of the good mother, which arguably promotes guilt and stigma and results in women self labeling as bad mothers.", "venue": "Health care for women international", "year": 2017.0, "author_names": ["Aleksandra Staneva", "Fiona Bogossian", "Alina Morawska", "Anja Wittkowski"], "n_citations": 10, "n_key_citations": 2, "score": 0}, {"corpus_id": 192283822, "title": "\"I AM A WOMAN\" PORTRAYING WOMANHOOD IN THE AUTO/BIOGRAPHY OF AN INDONESIAN TRANSSEXUAL CELEBRITY", "abstract": "Paper ini mendiskusikan femininitas di dalam auto/biografi selebritas transeksual \"Aku Perempuan: Jalan Berliku Seorang Dorce Gamalama\" 2005 Auto/biografi ini diterbitkan tahun 2005. Auto/biografi bukan sekadar merayakan karirnya tetapi yang lebih penting lagi adalah untuk menegaskan identitas dirinya sebagai perempuan. 
Saya berargumentasi bahwa peran feminine yang dituntut dari selebritas perempuan dapat juga di[per]tunjukkan oleh seorang transeksual seperti Dorce Gamalama tetapi dengan tuntutan ditampilkannya bentuk femininitas yang lebih meyakinkan dibandingkan yang dituntut dari selebritas yang secara biologis dilahirkan perempuan. Penelitian ini dilakukan dengan membaca secara dekat, mencermati struktur auto/biografi serta wacana yang ditampilkan. Analisis saya atas auto/biografi Dorce Gamalama ini menunjukkan bahwa persoalan makna perempuan sejati muncul berulang sejalan dengan perjuangan subjek auto/biografis dalam mengklaim identitas feminine yang otentik melalui tubuh, seksualitas dan peran femininnya sebagai ibu dan istri. Penegasan mengenai identitas sebagai perempuan sejati sangat erat dikaitkan dengan Islam sebagai kerangka beragama lokal di Indonesia. Abstract This paper examines femininity in the auto/biography of a transsexual celebrity, \"Aku Perempuan: Jalan Berliku Seorang Dorce Gamalama\" 2005 Her auto/biography was published in 2005. The auto/biography is not so much about celebrating her career as it is about endorsing her womanhood. I argue that these feminine roles expected of female celebrities can be performed by a transsexual (M2F) person as Dorce Gamalama but with the need to create a more convincing form of femininity than is required of a \"natural\" female celebrity. This research is conducted by reading the text closely, paying attention to the structure and the discourse presented. My examination of Dorce's auto/biography shows that this question about being a real woman recurs as the auto/biographical subject struggles to claim an authentic feminine identity through her body and sexuality as well as through the feminine roles of motherhood and wifehood. This assertion of being a real woman is tightly connected to Islam as Indonesian local religious frame.", "venue": "", "year": 2016.0, "author_names": ["Aquarini Priyatna", ""], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 199467148, "title": "Papular Sarcoidosis of the Knees Following Treatment with Interferon Alpha and Ribavirin in a Woman with Hepatitis C.", "abstract": "from injection sites and without thrombocytopenia. J Intern Med. 1998;243:313 5. 4. Gan WK. Diagnostic challenge of heparin induced skin necrosis. Ann Clin Lab Res. 2017;5:213. 5. Dominguez Espinosa E, Diaz Madrid M. Necrosis cutanea por heparina. Piel. 2009;24:362 3. 6. Sanchez PS, Angelillo SA, Masouye I, Borradori L. Widespread skin necrosis asociated with unfractioned heparin therapy in a patient under chronic coumarin anticoagulation. J Eur Acad Dermatol Venereol. 2006;20:327 30. 7. Llamas Velasco M, Alegria V, Santos Briz A, Cerroni L, Kutzner H, Requena L. Occlusive nonvasculitic vasculopathy: A review. Am J Dermatopathol. 2016;1:1 25. A. Estebanez, E. Silva, P. Cordero, J.M. 
Martin", "venue": "Actas dermo sifiliograficas", "year": 2019.0, "author_names": ["Benigno Monteagudo", "M C Grueiro", "Alejandro Vilas-Sueiro", "Fernando Campo-Cerecedo"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 79185672, "title": "\"I am the worst pregnant woman ever\" A mixed method study of the nature of psychological distress during pregnancy", "abstract": "", "venue": "", "year": 2016.0, "author_names": ["Aleksandra Staneva"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 53031189, "title": "Continuous subcutaneous hydrocortisone infusion in a woman with secondary adrenal insufficiency", "abstract": "Adrenal insufficiency requires long term, often life long, administration of 15 25 mg/day hydrocortisone in two to three daily doses, though personalized adjustments may be needed [1] Disruption of the circadian cortisol rhythm is associated with poor quality of life, sleep disturbances, asthenia, immune disturbances, and impairment of glucose/ lipid metabolisms. Programmable pumps may be used for continuous subcutaneous hydrocortisone infusion (CSHI) with modulated rates that mimic the circadian rhythm of serum cortisol concentrations [2] We report the use of CSHI in a woman with secondary adrenal insufficiency (SAI) and multiple allergic reactions, including to excipients of the standard oral hydrocortisone tablet. A 42 year old woman (body weight: 64 kg, BMI: 22.2 kg/m, body surface area (BSA) 1.75 m) with secondary hypothyroidism and SAI from idiopathic ACTH deficiency (basal serum cortisol: 42 nmol/L, plasmatic ACTH: 1.83 pmol/L) was admitted to our Endocrine Unit because of profound asthenia, inability to cope with daily activities, hypotension, and nausea and vomiting in the morning. She was suffering from allergic reactions to several drugs, perfumes, cosmetics, solvents, and paints with previous episodes of anaphylaxis. The oral administration of a 10 mg hydrocortisone tablet had previously induced an anaphylactic reaction with severe hypotension and laryngeal angio oedema, which required hospitalization and adrenalin treatment. At the time of admission, substitutive therapy was 25 mg/day of a hydrocortisone preparation containing biologic rice amid, in four daily administrations. Our CSHI protocol was approved by the local ethics committee (CEAS Regione Umbria, 3049/17) The Italian Health Ministry approved the use of a Medtronic MiniMedTM 640G (Medtronic MiniMed, 18000 Devonshire Street, Northridge, CA, USA) insulin infusion pump for CSHI in our patient (DGFDM.VI/P/I.5.i.n/2017/367) Written informed consent was obtained. We chose a hydrocortisone preparation containing only sodium phosphate as excipient (Flebocortid(r) 500 mg/5 mL) The ampoule and the infusion kit were changed every 3 days. We initially set the infusion program according to available data in the literature (2) with a total daily hydrocortisone dose slightly lower than that used with the oral preparation. 7.00 am serum cortisol concentration ranged from 295 to 345 nmol/L in the first 3 days after CSHI was started and 24 h free urine cortisol was within the normal range (712 nmol/24 h) Subsequently, within the first 2 weeks, we gradually reduced the total daily dose of hydrocortisone (from the initial dose of 21.625 mg/day, 12.4 mg/m BSA to 18.7 mg/day, 10.7 mg/m BSA) and programmed a more gradual increase of cortisol infusion between 2.00 and 7.00 am, because of referred sleep disturbances. 
During the following months, the patient experienced improvement of well being with CSHI, with resolution of nausea and vomiting at wakening. She was also able to restart many of her home work and daily physical activities (including sport activities) that had been stopped after diagnosis of SAI. During 14 months of follow up, she did not refer adverse reactions nor addisonian crisis, and all blood and urine values remained unchanged and within the normal range (including blood glucose, electrolytes, kidney/ liver parameters, total cholesterol, high density lypoproteins (HDL) low density lypoproteins (LDL) and triglycerides) An adrenal insufficiency specific quality of life questionnaire (AddiQoL 30) (score from 30 to 120) [3] scored 57 at baseline, 73 at 6 months, and 78 at 12 months CSHI. Alberto Falorni alberto.falorni@unipg.it", "venue": "Endocrine", "year": 2018.0, "author_names": ["Francesca Cardini", "Elisabetta Torlone", "Vittorio Bini", "Alberto Falorni"], "n_citations": 3, "n_key_citations": 1, "score": 0}, {"corpus_id": 91180904, "title": "Specific features of self perception and anxiety of a woman with pathology of pregnancy", "abstract": "Objective:The purpose of the article is to describe the empirical research of self perception and emotional state of a woman with a pathology of pregnancy. At present, reproductive problems, both in women and men, are quite widespread in the world. According to the WHO data there are about 80 million couples in the world who have some difficulties in conception, carrying and giving birth to children. The reproductive health impairment is becoming one of the main problems of modern society, and consequently, the number of psychological problems also increases, because the inability to conceive or carry a baby safely, provided that there is a conscious desire to have children, is one of the most difficult life situations.Method:The leading method to investigate this problem is diagnostic and static methods that allow us to identify the presence of specific features in self perception and emotional state of a woman during her pregnancy, focusing specifically on the psychological characteristics of women with pregnancy pathologies.Results:Based on the results of the empirical research, the hypothesis put forward about the presence of the specific features in self perception and anxiety levels of pregnant women without pathologies and women with pathology of pregnancy was confirmed. 
Women with pregnancy pathology are less likely to feel self confidence, they have lowered self acceptance, but compared to women without pathology of pregnancy, the subjects often blame themselves for the situation that happened, they start to be responsive to their health and react to any bodily changes.Conclusion:Psychological support and guidance of pregnant women at maternity welfare centers (level I) should focus on the formation of personal and social perception of the concepts \"I am pregnant\" and \"My child\" at antenatal clinics of maternity hospitals (II level) to focus on the formation of rational ideas about the emerged pathology of pregnancy.", "venue": "", "year": 2018.0, "author_names": ["Nigina S Babieva", "Natalia V Sidyacheva", "Sophia A Mudrak", "Igor V Kalinin", "Eugene V Zolotkova", "Valentina Vasilevna Buyanova", "Irina V Mikhailova"], "n_citations": 5, "n_key_citations": 0, "score": 0}]} -{"query": "depression in pediatric", "session_id": 4136762668737405, "user_id": 5249944009819363, "candidates": [{"corpus_id": 202747794, "title": "Standardized Screening for Depression in Pediatric Epilepsy.", "abstract": "INTRODUCTION Depression is a common comorbidity of epilepsy that is under recognized and under diagnosed. To improve recognition, a brief screening tool, the Neurological Disorders Depression Inventory Epilepsy Youth (NDDI E Y) was implemented in a level IV pediatric epilepsy clinic. METHOD This quality improvement is a pre post design measuring the impact of standardized depression screening, via the NDDI E Y tool, in youth 12 17 years with epilepsy. Those with positive screens, scores 32, received social work evaluation and mental health resources. Education was provided to all patients in standard discharge paperwork. RESULTS Of N 176 patients evaluated, n 112 met criteria to complete the NDDI E Y. Fifteen percent (n 17) of patients had positive screens, suggesting that they are at risk for depression. DISCUSSION Depression is a challenge when managing patients with epilepsy and may impact their quality of life and seizure control. Routine depression screening is recommended and feasible in the outpatient setting with a standardized work process.", "venue": "Journal of pediatric health care official publication of National Association of Pediatric Nurse Associates Practitioners", "year": 2019.0, "author_names": ["Erin Fecske", "Paul Glasier", "Lines M Vargas Collado", "Elizabeth K Rende"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 162171120, "title": "Treatment of Maternal Depression in Pediatric Primary Care", "abstract": "", "venue": "Clinical pediatrics", "year": 2019.0, "author_names": ["Rachel Becker Herbst", "Robert T Ammerman", "S Paul Perry", "Cynthia Zion", "Michelle K Rummel", "Jessica M McClure", "Lori J Stark"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 206317591, "title": "Executive Dysfunction and Depression in Pediatric Temporal Lobe Epilepsy: The Contribution of Hippocampal Sclerosis and Psychosocial Factors", "abstract": "Abstract Objectives: Temporal lobe epilepsy (TLE) has been identified as a risk factor for increased depression features in children and adolescents; however, less is known regarding specific neurocognitive predictors of depression in this population above and beyond seizure specific and sociodemographic factors. Methods: The study included 62 patients with TLE (64% male) aged 8 to 16 years (M=12.62; SD=2.26) who underwent comprehensive neuropsychological evaluation. 
Results: Correlation analyses revealed significant association between patient depression and WCST Total Perseverations, BRIEF Behavioral Regulation Index (BRI) and family stress. Perseverative errors on the WCST and the BRI were found to significantly predict depression features in youth with TLE. Patient performance on WCST was also found to fully mediate the significant relationship between hippocampal sclerosis (HS) and depression in pediatric TLE. Finally, logistic regression indicated HS in the presence of TLE was associated with a four fold risk of clinically significant depression ratings. Conclusions: The current findings offer strong support for the relationship between executive function (EF) and depression in pediatric TLE. Also, as HS is not modifiable, these findings suggest EF intervention may be a potential modality for improving health related quality of life (HRQOL) in youth with TLE. (JINS, 2018, 24, 606 616)", "venue": "Journal of the International Neuropsychological Society", "year": 2018.0, "author_names": ["William A Schraegle", "Nancy L Nussbaum", "Jeffrey B Titus"], "n_citations": 10, "n_key_citations": 0, "score": 0}, {"corpus_id": 51713257, "title": "Screening for Depression in Pediatric Primary Care", "abstract": "Purpose of ReviewTo review the clinical practice guideline landscape for depression screening in pediatric primary care and to identify current gaps in knowledge.Recent FindingsVarious organizations have recommendations that support screening for depression in pediatric primary care, although some differ based on the age of the child. To date, guidelines have been made based on indirect evidence of efficacy. For example, indirect evidence shows that several screening tools exist for use in primary care, and various primary care administered or referred treatments for childhood depression have some evidence of efficacy (particularly among adolescents) In addition to determining the applicability of this evidence to younger children, more research is needed on the direct net benefits of screening and to identify factors that facilitate its effective implementation.SummaryIndirect evidence supports the benefits of screening for depression in pediatric primary care; most organizations that publish screening guidelines recommend its use.", "venue": "Current Psychiatry Reports", "year": 2018.0, "author_names": ["Valerie L Forman-Hoffman", "Meera Viswanathan"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 196488857, "title": "Beyond Screening: A Stepped Care Pathway for Managing Postpartum Depression in Pediatric Settings.", "abstract": "The negative consequences of untreated postpartum depression (PD) for both the woman and her infant are well established. The impact of maternal depression has led to recommendations on systematic perinatal depression screening. Unfortunately, large scale initiatives on PD screening have found no benefit unless systems are in place to facilitate appropriate interventions for women who screen positive. Pediatric primary care has been a focus of efforts to support screening and management of PD because pediatric providers, unlike adult healthcare providers, have the most frequent contact with postpartum women through well child visits. Well child visits thus present an unparalleled opportunity to detect and intervene with PD. Literature reviews suggest that specific strategies are feasible within pediatric settings and could benefit both the woman and her child. 
In this article, we present a stepped care approach for screening and managing PD, integrating common elements found in existing pediatric based models. A stepped care approach is ideal because PD is a heterogeneous condition, with a range of presentations and hence responsiveness to various interventions. This care pathway begins with systematic screening for depression symptoms, followed by a systematic risk assessment for women who screen positive and care management based on risk profiles and responsiveness. This approach allows pediatric providers to be optimally flexible and responsive in addressing the majority of women with PD within the context of the family centered medical home to improve child well being. Challenges to managing PD within pediatrics are discussed, including strategies for addressing them. Implications for research, policy, and practice are discussed.", "venue": "Journal of women's health", "year": 2017.0, "author_names": ["Su-chin Serene Olin", "Mary McCord", "Ruth EK Stein", "Bonnie D Kerker", "Dara Weiss", "Kimberly Eaton Hoagwood", "Sarah McCue Horwitz"], "n_citations": 13, "n_key_citations": 2, "score": 0}, {"corpus_id": 25186857, "title": "Identifying Maternal Depression in Pediatric Primary Care: Changes Over a Decade", "abstract": "Objective: Maternal depression affects 10% to 40% of mothers with young children and has negative consequences for children's health and development. The American Academy of Pediatrics (AAP) recommends that pediatricians identify women with maternal depression. The authors examined trends in inquiring about (asking informal questions) or screening for (using a standardized instrument) maternal depression by pediatricians in 2004 and 2013 and identified correlates of usually inquiring/screening to identify maternal depression. Methods: Data were ascertained from 778 nontrainee pediatricians exclusively practicing general pediatrics who completed the 2004 (n 457) and 2013 (n 321) AAP Periodic Surveys. Pediatricians answered questions about physician and practice characteristics, training, attitudes, and inquiring/screening to identify maternal depression. Sample weights were used to reduce nonresponse bias. Weighted descriptive and logistic regression analyses were conducted. Results: The prevalence of usually inquiring/screening to identify maternal depression increased from 33% to 44% (p .01) In both years, pediatricians who usually inquired about child/adolescent depression had increased odds of usually inquiring/screening to identify maternal depression. Patient race/ethnicity and training in adult Diagnostic and Statistical Manual of Mental Disorders (DSM) diagnostic criteria for depression were associated with inquiring/screening in 2004, and believing that family screening is within the scope of the pediatrician was associated with inquiring/screening in 2013. Conclusion: Although inquiring/screening about maternal depression has increased since 2004, less than half of pediatricians usually screen or inquire about maternal depression, representing a missed opportunity to identify depression and manage or refer women for treatment. 
Further training on the importance of mental and family health to children's health may increase identification of maternal depression in pediatric primary care.", "venue": "Journal of developmental and behavioral pediatrics JDBP", "year": 2016.0, "author_names": ["Bonnie D Kerker", "Amy Storfer-Isser", "Ruth EK Stein", "Andrew S Garner", "Moira A Szilagyi", "Karen G O'Connor", "Kimberly Eaton Hoagwood", "Sarah McCue Horwitz"], "n_citations": 30, "n_key_citations": 0, "score": 0}, {"corpus_id": 79432234, "title": "Depression in pediatric patients with type 1 diabetes", "abstract": "", "venue": "", "year": 2016.0, "author_names": ["Jennifer A Tilleman", "Edward M Desimone", "Elizabeth C Scheffel"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 22377511, "title": "Maternal posttraumatic stress disorder and depression in pediatric primary care: association with child maltreatment and frequency of child exposure to traumatic events.", "abstract": "IMPORTANCE Maternal posttraumatic stress disorder (PTSD) may be associated with increased risk for child maltreatment and child exposure to traumatic events. Exposure to multiple traumatic events is associated with a wide range of adverse health and social outcomes in children. OBJECTIVE To examine the association of probable maternal depression, PTSD, and comorbid PTSD and depression with the risk for child maltreatment and parenting stress and with the number of traumatic events to which preschool children are exposed. DESIGN Cross sectional observational design. We used analysis of variance to determine whether probable maternal psychopathology groups differed on child maltreatment, parenting stress, and children's exposure to traumatic events. Hierarchical regression analyses were used to examine the unique and interactive effects of depression and PTSD severity scores on these outcomes. SETTING Urban pediatric primary care outpatient clinic. PARTICIPANTS Ninety seven mothers of children aged 3 to 5 years. EXPOSURE Pediatric primary care visit. MAIN OUTCOMES AND MEASURES Probable maternal depression and/or PTSD, parenting stress, child exposure to traumatic events, and child maltreatment. RESULTS Mothers with probable comorbid PTSD and depression reported greater child directed psychological aggression and physical assault and greater parenting stress. The children of mothers with PTSD (mean number of events the child was exposed to, 5.0) or with comorbid PTSD and depression (3.5 events) experienced more traumatic events than those of mothers with depression (1.2 events) or neither disorder (1.4 events) Severity of depressive symptoms uniquely predicted physical assault and neglect. Symptom scores for PTSD and depression interacted to predict psychological aggression and child exposure to traumatic events. When PTSD symptom severity scores were high, psychological aggression and the number of traumatic events children experienced rose. Depressive symptom severity scores predicted the risk for psychological aggression and exposure to traumatic events only when PTSD symptom severity scores were low. CONCLUSIONS AND RELEVANCE Children of mothers with PTSD are exposed to more traumatic events. Posttraumatic stress disorder is associated with an increased risk for child maltreatment beyond that associated with depression. 
Screening and intervention for maternal PTSD, in addition to maternal depression, may increase our ability to reduce children's exposure to traumatic stress and maltreatment.", "venue": "JAMA pediatrics", "year": 2013.0, "author_names": ["Claude M Chemtob", "Omar G Gudino", "Danielle Laraque"], "n_citations": 60, "n_key_citations": 7, "score": 0}, {"corpus_id": 28692999, "title": "Depression in pediatric care: is the WHO Five Well Being Index a valid screening instrument for children and adolescents?", "abstract": "OBJECTIVE This study investigated the criterion validity of the WHO Five Well Being Index (WHO 5) in screening for depression in pediatric care. METHOD A total of 446 children aged 9 to 12 and 324 adolescents aged 13 to 16, recruited from pediatric hospitals, completed the WHO 5 and a structured diagnostic interview serving as the gold standard. Diagnoses of depressive disorder included major depression and minor depression. Criterion validity was analyzed using the area under the receiver operating curve (AUC) Sensitivity and specificity were computed for optimal cutoffs. Additionally, unaided clinical diagnoses of depression made by the attending pediatricians were assessed. RESULTS Diagnoses of depressive disorder were established for 3.6% of children and 11.7% of adolescents. AUCs were .88 for the child and .87 for the adolescent sample. A cutoff score of 10 for children maximized sensitivity .75) and specificity .92) For the adolescent sample, decreasing the cutoff score to 9 yielded optimal sensitivity .74) and specificity .89) Sensitivity of the unaided clinical diagnosis of depression was .09, while specificity was .96. CONCLUSIONS The WHO 5 demonstrated good diagnostic accuracy for both age groups. Further evidence is needed to support the feasibility of the WHO 5 as a depression screening instrument used in pediatric care.", "venue": "General hospital psychiatry", "year": 2012.0, "author_names": ["Antje-Kathrin Allgaier", "Kathrin Pietsch", "Barbara Fruhe", "E J Prast", "Johanna Sigl-Glockner", "Gerd Schulte-Korne"], "n_citations": 61, "n_key_citations": 3, "score": 0}, {"corpus_id": 22496395, "title": "Bipolar Depression in Pediatric Populations", "abstract": "Depression in children and adolescents with bipolar disorder is more commonly observed than mania or hypomania, and is associated with significant functional disability in multiple environmental realms. Optimal management of pediatric bipolar depression is often defined by its multimodal nature with emphasis on both psychopharmacological and psychosocial treatment. This article provides a brief overview of the epidemiology and clinical course of pediatric bipolar depression, a clinically oriented guide to the evidence based psychopharmacological and psychosocial management of bipolar depression in youth, and suggestions on how best to integrate medication and therapy. Recommended treatment for bipolar depression in pediatric populations usually includes both medication and psychosocial interventions given a paucity of double blind, placebo controlled psychopharmacological studies. Lithium and lamotrigine are feasible and tentatively efficacious options; however, treatment with quetiapine monotherapy may be no better than placebo. Furthermore, some youth may be at heightened risk for developing manic symptoms after treatment with selective serotonin reuptake inhibitors (SSRIs) Psychotherapy, either alone or adjunctively with medications, provides practitioners with a safe and feasible alternative. 
Interpersonal and Social Rhythm Therapy for Adolescents (IPSRT A) Child and Family Focused Cognitive Behavioral Therapy (CFF CBT) Dialectical Behavior Therapy for Adolescents (DBT A) family psychoeducation, and Family Focused Therapy for Adolescents (FFT A) are evidence based treatments available to clinicians treating youth with bipolar depression.", "venue": "Pediatric Drugs", "year": 2013.0, "author_names": ["Victoria E Cosgrove", "Donna J Roybal", "Kiki D Chang"], "n_citations": 20, "n_key_citations": 0, "score": 0}]} -{"query": "Adding vs. averaging in distributed primal-dual optimization", "session_id": 7812539536061568, "user_id": 4975488101473906, "candidates": [{"corpus_id": 13208387, "title": "Adding vs. Averaging in Distributed Primal Dual Optimization", "abstract": "Distributed optimization methods for large scale machine learning suffer from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines. In this paper, we present a novel generalization of the recent communication efficient primal dual framework (COCOA) for distributed optimization. Our framework, COCOA+ allows for additive combination of local updates to the global parameters at each iteration, whereas previous schemes with convergence guarantees only allow conservative averaging. We give stronger (primal dual) convergence rate guarantees for both COCOA as well as our new variants, and generalize the theory for both methods to cover non smooth convex loss functions. We provide an extensive experimental comparison that shows the markedly improved performance of COCOA+ on several real world distributed datasets, especially when scaling up the number of machines.", "venue": "ICML", "year": 2015.0, "author_names": ["Chenxin Ma", "Virginia Smith", "Martin Jaggi", "Michael I Jordan", "Peter Richtarik", "Martin Takac"], "n_citations": 149, "n_key_citations": 37, "score": 1}, {"corpus_id": 5517579, "title": "Primal Dual Interior Point Methods", "abstract": "Preface Notation 1. Introduction. Linear Programming Primal Dual Methods The Central Path A Primal Dual Framework Path Following Methods Potential Reduction Methods Infeasible Starting Points Superlinear Convergence Extensions Mehrotra's Predictor Corrector Algorithm Linear Algebra Issues Karmarkar's Algorithm 2. Background. Linear Programming and Interior Point Methods Standard Form Optimality Conditions, Duality, and Solution Sets The B {SYMBOL 200 \\f \"Symbol\" N Partition and Strict Complementarity A Strictly Interior Point Rank of the Matrix A Bases and Vertices Farkas's Lemma and a Proof of the Goldman Tucker Result The Central Path Background. Primal Method Primal Dual Methods. Development of the Fundamental Ideas Notes and References 3. Complexity Theory. Polynomial Versus Exponential, Worst Case vs Average Case Storing the Problem Data. Dimension and Size The Turing Machine and Rational Arithmetic Primal Dual Methods and Rational Arithmetic Linear Programming and Rational Numbers Moving to a Solution from an Interior Point Complexity of Simplex, Ellipsoid, and Interior Point Methods Polynomial and Strongly Polynomial Algorithms Beyond the Turing Machine Model More on the Real Number Model and Algebraic Complexity A General Complexity Theorem for Path Following Methods Notes and References 4. Potential Reduction Methods. 
A Primal Dual Potential Reduction Algorithm Reducing Forces Convergence A Quadratic Estimate of \\Phi _{\\rho along a Feasible Direction Bounding the Coefficients in The Quadratic Approximation An Estimate of the Reduction in \\Phi _{\\rho and Polynomial Complexity What About Centrality? Choosing {SYMBOL 114 \\f \"Symbol\" and {SYMBOL 97 \\f \"Symbol\" Notes and References 5. Path Following Algorithms. The Short Step Path Following Algorithm Technical Results The Predictor Corrector Method A Long Step Path Following Algorithm Limit Points of the Iteration Sequence Proof of Lemma 5.3 Notes and References 6. Infeasible Interior Point Algorithms. The Algorithm Convergence of Algorithm IPF Technical Results I. Bounds on \\nu _k \\delimiter \"026B30D (x^k,s^k) \\delimiter \"026B30D Technical Results II. Bounds on (D^k) 1} \\Delta x^k and D^k \\Delta s^k Technical Results III. A Uniform Lower Bound on {SYMBOL 97 \\f \"Symbol\"}k Proofs of Theorems 6.1 and 6.2 Limit Points of the Iteration Sequence 7. Superlinear Convergence and Finite Termination. Affine Scaling Steps An Estimate of {SYMBOL 68 \\f \"Symbol\"}x, {SYMBOL 68 \\f \"Symbol\" s) The Feasible Case An Estimate of {SYMBOL 68 \\f \"Symbol\" x, {SYMBOL 68 \\f \"Symbol\" s) The Infeasible Case Algorithm PC Is Superlinear Nearly Quadratic Methods Convergence of Algorithm LPF+ Convergence of the Iteration Sequence {SYMBOL 206 \\f \"Symbol\"(A,b,c) and Finite Termination A Finite Termination Strategy Recovering an Optimal Basis More on {SYMBOL 206 \\f \"Symbol\" (A,b,c) Notes and References 8. Extensions. The Monotone LCP Mixed and Horizontal LCP Strict Complementarity and LCP Convex QP Convex Programming Monotone Nonlinear Complementarity and Variational Inequalities Semidefinite Programming Proof of Theorem 8.4. Notes and References 9. Detecting Infeasibility. Self Duality The Simplified HSD Form The HSDl Form Identifying a Solution Free Region Implementations of the HSD Formulations Notes and References 10. Practical Aspects of Primal Dual Algorithms. Motivation for Mehrotra's Algorithm The Algorithm Superquadratic Convergence Second Order Trajectory Following Methods Higher Order Methods Further Enhancements Notes and References 11. Implementations. Three Forms of the Step Equation The Cholesky Factorization Sparse Cholesky Factorization. Minimum Degree Orderings Other Orderings Small Pivots in the Cholesky Factorization Dense Columns in A The Augmented System Formulat", "venue": "Other Titles in Applied Mathematics", "year": 1997.0, "author_names": ["Stephen J Wright"], "n_citations": 2189, "n_key_citations": 159, "score": 0}, {"corpus_id": 7845529, "title": "On the Implementation of a Primal Dual Interior Point Method", "abstract": "This paper gives an approach to implementing a second order primal dual interior point method. It uses a Taylor polynomial of second order to approximate a primal dual trajectory. The computations for the second derivative are combined with the computations for the centering direction. Computations in this approach do not require that primal and dual solutions be feasible. Expressions are given to compute all the higher order derivatives of the trajectory of interest. The implementation ensures that a suitable potential function is reduced by a constant amount at each iteration.There are several salient features of this approach. An adaptive heuristic for estimating the centering parameter is given. The approach used to compute the step length is also adaptive. 
A new practical approach to compute the starting point is given. This approach treats primal and dual problems symmetrically.Computational results on a subset of problems available from netlib are given. On mutually tested problems the results show.", "venue": "SIAM J. Optim.", "year": 1992.0, "author_names": ["Sanjay Mehrotra"], "n_citations": 1562, "n_key_citations": 128, "score": 0}, {"corpus_id": 207175707, "title": "A First Order Primal Dual Algorithm for Convex Problems with Applications to Imaging", "abstract": "In this paper we study a first order primal dual algorithm for non smooth convex optimization problems with known saddle point structure. We prove convergence to a saddle point with rate O(1/N) in finite dimensions for the complete class of problems. We further show accelerations of the proposed algorithm to yield improved rates on problems with some degree of smoothness. In particular we show that we can achieve O(1/N2) convergence on problems, where the primal or the dual objective is uniformly convex, and we can show linear convergence, i.e. O(oN) for some o(0,1) on smooth problems. The wide applicability of the proposed algorithm is demonstrated on several imaging problems such as image denoising, image deconvolution, image inpainting, motion estimation and multi label image segmentation.", "venue": "Journal of Mathematical Imaging and Vision", "year": 2010.0, "author_names": ["Antonin Chambolle", "Thomas Pock"], "n_citations": 3413, "n_key_citations": 543, "score": 0}, {"corpus_id": 9856587, "title": "Dual methods for nonconvex spectrum optimization of multicarrier systems", "abstract": "The design and optimization of multicarrier communications systems often involve a maximization of the total throughput subject to system resource constraints. The optimization problem is numerically difficult to solve when the problem does not have a convexity structure. This paper makes progress toward solving optimization problems of this type by showing that under a certain condition called the time sharing condition, the duality gap of the optimization problem is always zero, regardless of the convexity of the objective function. Further, we show that the time sharing condition is satisfied for practical multiuser spectrum optimization problems in multicarrier systems in the limit as the number of carriers goes to infinity. This result leads to efficient numerical algorithms that solve the nonconvex problem in the dual domain. We show that the recently proposed optimal spectrum balancing algorithm for digital subscriber lines can be interpreted as a dual algorithm. This new interpretation gives rise to more efficient dual update methods. It also suggests ways in which the dual objective may be evaluated approximately, further improving the numerical efficiency of the algorithm. We propose a low complexity iterative spectrum balancing algorithm based on these ideas, and show that the new algorithm achieves near optimal performance in many practical situations", "venue": "IEEE Transactions on Communications", "year": 2006.0, "author_names": ["Wei Yu", "Raymond Lui"], "n_citations": 1457, "n_key_citations": 139, "score": 0}, {"corpus_id": 7981562, "title": "Distributed Optimization by Ant Colonies", "abstract": "Ants colonies exhibit very interesting behaviours: even if a single ant only has simple capabilities, the behaviour of a whole ant colony is highly structured. This is the result of coordinated interactions. 
But, as communication possibilities among ants are very limited, interactions must be based on very simple flows of information. In this paper we explore the implications that the study of ants behaviour can have on problem solving and optimization. We introduce a distributed problem solving environment and propose its use to search for a solution to the travelling salesman problem.", "venue": "", "year": 1992.0, "author_names": ["Alberto Colorni", "Marco Dorigo", "Vittorio Maniezzo", "Francisco J Varela", "Paul Emile Bourgine"], "n_citations": 2842, "n_key_citations": 187, "score": 0}, {"corpus_id": 2166128, "title": "Dual Averaging Methods for Regularized Stochastic Learning and Online Optimization", "abstract": "We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1 norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularization structure in an online setting. In particular, at each iteration, the learning variables are adjusted by solving a simple optimization problem that involves the running average of all past subgradients of the loss functions and the whole regularization term, not just its subgradient. Computational experiments show that the RDA method can be very effective for sparse online learning with l1 regularization.", "venue": "J. Mach. Learn. Res.", "year": 2009.0, "author_names": ["Lin Xiao"], "n_citations": 729, "n_key_citations": 124, "score": 0}, {"corpus_id": 10288654, "title": "The Sample Average Approximation Method for Stochastic Discrete Optimization", "abstract": "In this paper we study a Monte Carlo simulation based approach to stochastic discrete optimization problems. The basic idea of such methods is that a random sample is generated and the expected value function is approximated by the corresponding sample average function. The obtained sample average optimization problem is solved, and the procedure is repeated several times until a stopping criterion is satisfied. We discuss convergence rates, stopping rules, and computational complexity of this procedure and present a numerical example for the stochastic knapsack problem.", "venue": "SIAM J. Optim.", "year": 2002.0, "author_names": ["Anton J Kleywegt", "Alexander Shapiro", "Tito Homem-de-Mello"], "n_citations": 1412, "n_key_citations": 118, "score": 0}, {"corpus_id": 51789432, "title": "Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers", "abstract": "Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large scale problems arising in statistics, machine learning, and related areas. 
The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.", "venue": "Found. Trends Mach. Learn.", "year": 2011.0, "author_names": ["Stephen P Boyd", "Neal Parikh", "Eric King-wah Chu", "Borja Peleato", "Jonathan Eckstein"], "n_citations": 13308, "n_key_citations": 2844, "score": 0}, {"corpus_id": 29764143, "title": "DSCOVR: Randomized Primal Dual Block Coordinate Algorithms for Asynchronous Distributed Optimization", "abstract": "Machine learning with big data often involves large optimization models. For distributed optimization over a cluster of machines, frequent communication and synchronization of all model parameters (optimization variables) can be very costly. A promising solution is to use parameter servers to store different subsets of the model parameters, and update them asynchronously at different machines using local datasets. In this paper, we focus on distributed optimization of large linear models with convex loss functions, and propose a family of randomized primal dual block coordinate algorithms that are especially suitable for asynchronous distributed implementation with parameter servers. In particular, we work with the saddle point formulation of such problems which allows simultaneous data and model partitioning, and exploit its structure by doubly stochastic coordinate optimization with variance reduction (DSCOVR) Compared with other first order distributed algorithms, we show that DSCOVR may require less amount of overall computation and communication, and less or no synchronization. We discuss the implementation details of the DSCOVR algorithms, and present numerical experiments on an industrial distributed computing system.", "venue": "J. Mach. Learn. Res.", "year": 2019.0, "author_names": ["Lin Xiao", "Adams Wei Yu", "Qihang Lin", "Weizhu Chen"], "n_citations": 43, "n_key_citations": 5, "score": 0}]} -{"query": "nitrogen and phosphorus six plant functional types", "session_id": 951397194269398, "user_id": 2094172747830749, "candidates": [{"corpus_id": 35289962, "title": "Nitrogen and phosphorus availabilities interact to modulate leaf trait scaling relationships across six plant functional types in a controlled environment study.", "abstract": "Nitrogen (N) and phosphorus (P) have key roles in leaf metabolism, resulting in a strong coupling of chemical composition traits to metabolic rates in field based studies. However, in such studies, it is difficult to disentangle the effects of nutrient supply per se on trait trait relationships. 
Our study assessed how high and low N (5 mM and 0.4 mM, respectively) and P (1 mM and 2 mM, respectively) supply in 37 species from six plant functional types (PTFs) affected photosynthesis (A) and respiration (R) (in darkness and light) in a controlled environment. Low P supply increased scaling exponents (slopes) of area based log log A N or R N relationships when N supply was not limiting, whereas there was no P effect under low N supply. By contrast, scaling exponents of A P and R P relationships were altered by P and N supply. Neither R A nor light inhibition of leaf R was affected by nutrient supply. Light inhibition was 26% across nutrient treatments; herbaceous species exhibited a lower degree of light inhibition than woody species. Because N and P supply modulates leaf trait trait relationships, the next generation of terrestrial biosphere models may need to consider how limitations in N and P availability affect trait trait relationships when predicting carbon exchange.", "venue": "The New phytologist", "year": 2017.0, "author_names": ["Kristine Y Crous", "Odhran S O'Sullivan", "Joana Zaragoza-Castells", "Keith J Bloomfield", "Anna Clarissa A Negrini", "Patrick Meir", "Matthew H Turnbull", "Kevin L Griffin", "Owen K Atkin"], "n_citations": 26, "n_key_citations": 3, "score": 1}, {"corpus_id": 225014342, "title": "Variations in leaf morphological and chemical traits in response to life stages, plant functional types, and habitat types in an old growth temperate forest", "abstract": "Abstract Intraspecific leaf trait variations are becoming a topic of interest for many ecologists because individual based traits are essentially the drivers of variations at the community level. Six coexisting major tree species in an old growth temperate forest, Northeast China (i.e. Abies nephrolepis, Pinus koraiensis, Acer mono, Fraxinus mandshurica, Tilia amurensis, and Ulmus laciniata) were sampled, and three habitat types (i.e. Hab I: high soil organic carbon with a moderate slope; Hab II: low soil organic carbon with a gentle slope; and Hab III: low soil organic carbon with a strong slope) were used in the plot. We performed a two way ANOVA to compare the specific leaf area (SLA) leaf dry matter content (LDMC) leaf nitrogen content (LNC) leaf phosphorus content (LPC) and leaf carbon content (LCC) between saplings (1", "venue": "", "year": 2020.0, "author_names": ["Dina Oktavia", "Guangze Jin"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 224875357, "title": "Litter chemical traits strongly drove the carbon fractions loss during decomposition across an alpine treeline ecotone.", "abstract": "The decomposition of litter carbon (C) fraction is a major determinant of soil organic matter pool and nutrient cycling. However, knowledge of litter chemical traits regulate C fractions release is still relatively limited. A litterbag experiment was conducted using six plant functional litter types at two vegetation type (coniferous forest and alpine shrubland) in a treeline ecotone. We evaluated the relative importance of litter chemistry (i.e. Nutrient, C quality, and stoichiometry) on the loss of litter mass, non polar extractables (NPE) water soluble extractables (WSE) acid hydrolyzable carbohydrates (ACID) and acid unhydrolyzable residue (AUR) during decomposition. 
Litter nutrients contain nitrogen (N) phosphorus (P) potassium (K) calcium (Ca) sodium (Na) magnesium (Mg) aluminium (Al) manganese (Mn) zinc (Zn) iron (Fe) and copper (Cu) litter C quality contains C, WSE, NPE, ACID, and AUR, and stoichiometry was defined by C:N, C:P; N:P, ACID:N, and AUR:N. The results showed single exponential model fitted decomposition rates of litter mass and C fractions better than double exponential or asymptotic decomposition, and the decomposition rates of C fractions were strongly correlated with initial litter nutrients, especially K, Na, Ca. Furthermore, the temporal dynamics of litter nutrients (Ca, Mg, Na, K, Zn, and Fe) strongly regulated C fractions loss during the decomposition process. Changes in litter C quality had an evident effect on the degradation of ACID and AUR, supporting the concept of \"priming effect\" of soluble carbon fraction. The significant differences were found in the release of NPE, WSE, and ACID rather than AUR among coniferous forest and alpine shrubland, and the vegetation type effects largely depend on the changes in litter stoichiometry, which is an important implication for the change in plant community abundance regulate decay. Collectively, elucidating the hierarchical drivers of litter chemistry on decomposition is critical to soil C sequestration in alpine ecosystems.", "venue": "The Science of the total environment", "year": 2021.0, "author_names": ["Lifeng Wang", "Ya-mei Chen", "Yu Zhou", "Haifeng Zheng", "Zhen-feng Xu", "Bo Tan", "Cheng-ming You", "Liuyan Zhang", "Han-Han Li", "Li Guo", "Lixia Wang", "Youyou Huang", "Junmin Zhang", "Yang Liu"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 14518214, "title": "Native and alien herbaceous plants in the Brazilian Cerrado are (co )limited by different nutrients", "abstract": "Background and aimsThe diverse flora of the Brazilian Cerrado is threatened by agricultural expansion, nutrient enrichment and invasion of alien plants. We performed a fertilization experiment to investigate the nature of nutrient limitation in Cerrado vegetation, and evaluate whether native and alien invasive species are limited by the same or different nutrients.MethodsWe applied various combinations of nutrients (phosphorus (P) nitrogen (N) and a mixture of other macro and micro nutrients 'cations treatment' to six types of Cerrado vegetation. We then studied over a 3 year period how these treatments affected the aboveground biomass of native forbs, native C3 and C4 grasses, and invasive C4 grasses.ResultsThe full nutrient treatment (N P+ 'cations' significantly increased total community biomass across our sites, but P alone had no effect. The nutrient treatments also affected the relative abundance of functional plant groups in the six vegetation types. P addition, either alone or in combination with other nutrients, increased the biomass of alien C4 grasses, where present, whereas the cations treatment stimulated growth of the native C4 grasses. Addition of N P reduced the biomass of native C3 grasses.ConclusionsOur results indicate co limitation by several nutrients, including P, perhaps N, and at least one other nutrient. Further research is needed to determine what the other nutrient (or nutrients) may be. Native and invasive species appear to be limited by different nutrients, with P alone stimulating growth of African C4 grasses. 
This should be considered in managing both natural and invaded communities.", "venue": "Plant and Soil", "year": 2015.0, "author_names": ["Luciola Santos Lannes", "Mercedes M C Bustamante", "Peter John Edwards", "Harry Olde Venterink"], "n_citations": 28, "n_key_citations": 4, "score": 0}, {"corpus_id": 84593484, "title": "Local variation in soil microbial community structure in seminatural and artificial grasslands", "abstract": "Although above ground and below ground biological communities have mutual functional links, less is known about structural relationships between them. This study examines how soil microbial community structure is related to plant vegetation types and soil fertility in seminatural and artificial grasslands of Shiriyazaki, Aomori Prefecture, Japan. Soil microbial community structure was analyzed by profiles of phospholipids fatty acid (PLFA) Soil nitrogen and phosphorus contents were much higher in artificial grasslands than in seminatural grasslands. Seminatural and artificial grasslands showed significant differences between them in relative abundances of six PLFA peaks out of 16 peaks used in this study. Seminatural grasslands consist of two distinctive vegetation types depending on grazing intensity: short grass vegetation and tall grass vegetation. Although plant species composition largely differed between the short grass and tall grass vegetation types, the soil microbial community structure did not show a significant difference between them. These results indicate strong influences of human management on the soil microbial community.", "venue": "", "year": 2007.0, "author_names": ["M Zabed Hossain", "Atsushi Okubo", "Shuichi Sugiyama"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 132738068, "title": "Diversidad funcional de bosques muy humedos tropicales en el noreste de Costa Rica a partir de rasgos foliares y densidad de la madera", "abstract": "Tropical wet forests are characterized by their high species diversity and a high level of complexity in the ecosystem processes taking place there. These properties are the ones that give these forests their high potential to offer ecosystem services to society. At the present time, forest ecology studies are focused on the influence of high taxonomic diversity on the quantity and quality of services that forests perform. Many functional approaches have been proposed to address this topic. In the study area dominant species were selected in terms of basal area and for each the leaf area, leaflet area, specific leaf area, leaf tensile strength, leaf dry matter content, leaf content of nitrogen and phosphorus and specific density of wood were determined. These are key traits in the determinational ecosystem functions like nutrient cycling and carbon capture and storage. Multivariate analysis was used to find six plant functional types (PFT's) that have significantive differences regarding attributes. The PFT's found were, respectively palms and five groups of trees named as legumes and others, intermediates, as net acquisitives, as larged laeved acquisitive and conservative species. It is proposed that each group has different potentialities to contribute to nutrient cycling processes and carbon capture and storage, according to its functional properties. Functional diversity was them quantified using three methodologies: the richness of PFT's, FAD2 and FD indices. It was found that secondary forest has lower FD than old growth and logged forests. 
These differences were not total, but only for the PFT legumes ones and others and PFT largedlaeved acquisitive. It is suggested that late successional forest is already performing some functions similar to the old growth forests, even when its species composition and structure do not reach the characteristics of the old growth forests. Lastly, the pertinency of the three FD", "venue": "", "year": 2007.0, "author_names": ["Fernando Fernandez Mendez"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 18408296, "title": "From Leaf to Litter: Nutrient resorption in a changing environment", "abstract": "Contents Page Chapter 1 General introduction 1 Chapter 2 Current measures of nutrient resorption efficiency lead to 7 a substantial underestimation of real resorption efficiency: facts and solutions Chapter 3 Plant functional types are good predictors of nitrogen 13 resorption proficiency along environmental gradients Chapter 4 Nitrogen and phosphorus resorption efficiency and proficiency 21 in six sub arctic bog species after 4 years of nitrogen fertilization Chapter 5 Effects of increased N availability and CO 2 concentration 33 on late seasonal N dynamics in the grass Molinia caerulea Chapter 6 General discussion 41 References 47 Summary 53 Samenvatting 57 Nawoord 61 1 Chapter 1 General introduction The ecological importance of nutrient resorption from senescing leaves Ecosystems are complex structures where abiotic conditions and biota interact. The potential presence of a species is determined by the combination of abiotic conditions and the biota already present. However, biota also alter their environment, with the emergence of a high oxygen concentration in the atmosphere being one of the most important biotic driven changes of abiotic conditions in history. Plants play an important role in ecosystems, because they are the primary producers and they strongly control nutrient cycles, especially those of N and P. A large part of the available N and P in the ecosystem is organically bound in plants, as organisms have a high demand of N and P to produce various components, like proteins, energy carriers, genetic material and phospholipids. These nutrients may be returned to the soil through exudation, leaching or turnover of dead material. A strategy to minimise nutrient losses through litter is to resorb these nutrients during tissue senescence, thus producing litter with low nutrient concentrations. Moreover, the slow turnover rate of litter with low nutritional value slows down nutrient cycling, and thus leads to a positive feedback between plant species dominance and nutrient availability (Chapin 1993, Aerts 1999) Plant growth in natural terrestrial ecosystems is mostly N limited, although P limitation also occurs frequently (Chapin 1980) Therefore, resorption of N and P from senescing tissue is of great adaptive significance, because the resorbed nutrients are directly available for further use (e.g. 
seed filling, bud growth, storage) making a species less dependent on current nutrient uptake (Aerts and Chapin 2000) In spring, remobilisation of nutrients from storage organs can lead to (competitive) early regrowth of foliage, even before the start of nutrient uptake from the soil (Thornton and Millard 1993, Millard", "venue": "", "year": 2004.0, "author_names": ["Luisa M Van Heerwaarden"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 4995957, "title": "Nitrogen and phosphorus concentrations and allocation strategies among shrub organs: the effects of plant growth forms and nitrogen fixation types", "abstract": "AimsWe aimed to explore the influences of plant functional groups on nutrient concentrations and allocation strategies among shrub organs, as well as to examine the effects of climate, soil and species on nutrient concentrations in shrubs of different plant functional groups.MethodsWe investigated the nitrogen (N) and phosphorus (P) concentrations in roots, stems and leaves and their influencing factors of 187 shrub species in the shrublands across southern China, and we also examined the relationships between N and P among various organs using scaling analysis.ResultsThe scaling relationships of N and P tended to be allometric between leaf and non leaf organs, while they tended to be isometric among non leaf organs. Plant functional groups affected nutrient allocation among shrub organs, where a higher proportion of nutrients were present in the stems and roots of evergreen shrubs and non legume shrubs when compared to deciduous shrubs and legume shrubs as nutrients within a plant increased. Among organs, N and P concentrations were higher in leaves than in stems and roots. Among functional groups, evergreen shrubs and legume shrubs were more P limited than deciduous shrubs and non legume shrubs, respectively. The N and P concentrations in evergreen shrubs were lower and more sensitive to environmental change than in deciduous shrubs. Both N and P contents in legume shrubs were higher and more homeostatic than those of non legume shrubs.ConclusionsPlant growth forms and N fixation types exerted strong effects on nutrient concentrations and allocations among shrub organs. The influences of climate and soil on shrub N and P concentrations differed by plant functional groups.", "venue": "Plant and Soil", "year": 2018.0, "author_names": ["Qiang Zhang", "Gaoming Xiong", "Jiaxiang Li", "Zhijun Lu", "Yuelin Li", "Wenting Xu", "Yang Sheng Wang", "Changming Zhao", "Zhiyao Tang", "Zongqiang Xie"], "n_citations": 15, "n_key_citations": 1, "score": 0}, {"corpus_id": 211044381, "title": "Multi Dimensional Plant Element Stoichiometry Looking Beyond Carbon, Nitrogen, and Phosphorus", "abstract": "Nutrient elements are important for plant growth. Element stoichiometry considers the balance between different nutrients and how this balance is affected by the environment. So far, focus of plant stoichiometry has mainly been on the three elements carbon (C) nitrogen (N) and phosphorus (P) but many additional elements are essential for proper plant growth. Our overall aim is to test the scaling relations of various additional elements (K, Ca, Mg, S, Cu, Zn, Fe, Mn) by using ten data sets from a range of plant functional types and environmental conditions. 
To simultaneously handle more than one element, we define a stoichiometric niche volume as the volume of an abstract multidimensional shape in n dimensions, with the n sides of this shape defined by the plant properties in question, here their element concentrations. Thus, a stoichiometric niche volume is here defined as the product of element concentrations. The volumes of N and P (VNP) are used as the basis, and we investigate how the volume of other elements (VOth) scales with respect to VNP, with the intention to explore if the concentrations of other elements increase faster (scaling exponent 1) or slower <1) than the concentrations of N and P. For example, scaling exponents >1 suggest that favorable conditions for plant growth, i.e. environments rich in N and P, may require proportionally higher uptake of other essential elements than poor conditions. We show that the scaling exponent is rather insensitive to environmental conditions or plant species, and ranges from 0.900 to 2.479 (average 1.58) in nine out of ten data sets. For single elements, Mg has the smallest scaling exponent (0.031) and Mn the largest (2.147) Comparison between laboratory determined stoichiometric relations and field observations suggest that element uptake in field conditions often exceeds the minimal physiological requirements. The results provide evidence for the view that the scaling relations previously reported for N and P can be extended to other elements; and that N and P are the driving elements in plant stoichiometric relations. The stoichiometric niche volumes defined here could be used to predict plant performances in different environments.", "venue": "Frontiers in Plant Science", "year": 2020.0, "author_names": ["Goran I Agren", "Martin Weih"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 59225217, "title": "Efficiency of nitrogen and phosphorus removal by six macrophytes from eutrophic water", "abstract": "Abstract Increased nitrogen and phosphorus pollution causes eutrophication in water bodies. Using aquatic plants to remove nutrients from water is an attractive phytoremediation. It is a cost effective, environment friendly, and efficient way that reduces water body eutrophication by the plant. It is important to choose suitable macrophytes to remove excess N and P under different nutrient conditions. In this study, six macrophyte species (Polygonum orientale, Juncus effuses, Iris pseudocorus, Phragmites australis, Iris sanguinea, Typha orientalis) were tested. Simulation experiment was conducted under five N and P levels. The removal rate, relative growth rate, and the dynamic nutrition concentration of cultivated solution were investigated. Of all the treatment, a 23 95% reduction in N removal and a 29 92% reduction in P removal were recorded. The results showed I. sanguinea is a promising species to treat various eutrophic waters and the other five species can be used specifically to treat certain types of water. 
The data provided a theoretical guidance to plant species selection for phytoremediation of polluted water bodies for the purpose of water quality improvement around the different reservoir in northern China.", "venue": "International journal of phytoremediation", "year": 2019.0, "author_names": ["Shuai Yu", "Chun-Hua Miao", "Hong-Ln Song", "Yanqing Huang", "Wei Chen", "Xingyuan He"], "n_citations": 12, "n_key_citations": 1, "score": 0}]} -{"query": "Wind Turbines: Is There a Human Health Risk?", "session_id": 4304022587303336, "user_id": 2756675151892000, "candidates": [{"corpus_id": 7855846, "title": "Wind turbines: is there a human health risk?", "abstract": "The term \"Wind Turbine Syndrome\" was coined in a recently self published book, which hypothesized that a multitude of symptoms such as headache and dizziness resulted from wind turbines generating low frequency sound (LFS) The objective of this article is to provide a summary of the peer reviewed literature on the research that has examined the relationship between human health effects and exposure to LFS and sound generated from the operation of wind turbines. At present, a specific health condition has not been documented in the peer reviewed literature that has been classified as a disease caused by exposure to sound levels and frequencies generated by the operation of wind turbines. Communities are experiencing a heightened sense of annoyance and fear from the development and siting of wind turbine farms. High quality research and effective risk communication can advance this course from one of panic to one of understanding and exemplification for other environmental advancements.", "venue": "Journal of environmental health", "year": 2013.0, "author_names": ["Jennifer D Roberts", "Mark A Roberts"], "n_citations": 14, "n_key_citations": 0, "score": 1}, {"corpus_id": 110188587, "title": "Adverse health effects of industrial wind turbines", "abstract": "Much of the feedback has been constructive and should help advance awareness of the health risks of placing industrial wind turbines (IWTs) too close to humans. However, the opinions expressed by blogger Mike G. Barnard deserve comment. 2 The Society for Wind Vigilance is not an \"anti wind\" campaigning organization. It is a not for profit organization, the purpose of which is to ensure safe positioning of wind turbine facilities based on human health research; educate through the dissemination of facts and references on the risk of adverse health effects of human exposure to IWTs; work constructively with interested parties to ensure that guidelines for wind turbine facilities will protect the health and safety of communities; and achieve vigilance monitoring and long term surveillance regarding the risks to health", "venue": "", "year": 2013.0, "author_names": ["David A Colby"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 17724278, "title": "Wind Turbines and Ghost Stories: The Effects of Infrasound on the Human Auditory System", "abstract": "Climate change and fossil fuel depletion have pushed many countries to seek and invest in alternative clean energy sources, such as wind energy. By converting kinetic energy from the wind into mechanical or electrical energy, wind farms in California, for example, power nearly 850,000 households each year, while producing negligible green house gases and contributing little to water pollution (see Fig. 1) Nevertheless, several ecological and environmental concerns remain. 
High levels of infrasound and low frequency sounds generated by wind turbines pose a potentially serious threat to communities near wind farms. Wind energy companies remain largely dismissive, claiming that wind turbine noise is subaudible, undetectable by humans, and therefore presents minimal risk to human health. However, various cochlear microphonic, distortion product otoacoustic emission, and functional magnetic resonance imaging (fMRI) studies have demonstrated the detection of infrasound by the human inner ear and auditory cortex. Additional psychosomatic stress and disorders, including the \"wind turbine syndrome\" and paranormal experiences, are also linked to infrasound exposures. With wind turbines generating substantial levels of infrasound and low frequency sound, modifications and regulations to wind farm engineering plans and geographical placements are necessary to minimize community exposure and potential human health risks.", "venue": "", "year": 2012.0, "author_names": ["Hsuan-hsiu Annie Chen", "Peter M Narins"], "n_citations": 8, "n_key_citations": 0, "score": 0}, {"corpus_id": 25448410, "title": "Wind turbines and idiopathic symptoms: The confounding effect of concurrent environmental exposures.", "abstract": "Whether or not wind turbines pose a risk to human health is a matter of heated debate. Personal reactions to other environmental exposures occurring in the same settings as wind turbines may be responsible of the reported symptoms. However, these have not been accounted for in previous studies. We investigated whether there is an association between residential proximity to wind turbines and idiopathic symptoms, after controlling for personal reactions to other environmental co exposures. We assessed wind turbine exposures in 454 residences as the distance to the closest wind turbine (Dw) and number of wind turbines <1000m (Nw1000) Information on symptoms, demographics and personal reactions to exposures was obtained by a blind questionnaire. We identified confounders using confounders' selection criteria and used adjusted logistic regression models to estimate associations. When controlling only for socio demographic characteristics, log10Dw was associated with \"unnatural fatigue\" (ORadj=0.38, 95%CI=0.15 1.00) and \"difficulty concentrating\" (ORadj=0.26, 95%CI=0.08 0.83) and Nw1000 was associated with \"unnatural fatigue\" (ORadj=1.35, 95%CI=1.07 1.70) and \"headache\" (ORadj=1.26, 95%CI=1.00 1.58) After controlling for personal reactions to noise from sources different from wind turbines and agricultural odor exposure, we did not observe a significant relationship between residential proximity to wind turbines and symptoms and the parameter estimates were attenuated toward zero. Wind turbines health associations can be confounded by personal reactions to other environmental co exposures. Isolated associations reported in the literature may be due to confounding bias.", "venue": "Neurotoxicology and teratology", "year": 2016.0, "author_names": ["Victoria Blanes-Vidal", "Joel D Schwartz"], "n_citations": 12, "n_key_citations": 1, "score": 0}, {"corpus_id": 53975165, "title": "Multi Criteria Decision Analysis for Benchmarking Human Free Lifting Solutions in the Offshore Wind Energy Environment", "abstract": "With single components weighing up to hundreds of tonnes and lifted to heights of approximately 100 m, offshore wind turbines can pose risks to personnel, assets, and the environment during installation and maintenance interventions. 
Guidelines and standards for health and safety in lifting operations exist; however, having people directly beneath the load is still common practice in offshore wind turbine installations. Concepts for human free offshore lifting operations in the categories of guidance and control, connections, and assembly are studied in this work. This paper documents the process of applying Multi Criteria Decision Analysis (MCDA) using experts' opinions for the importance of defined criteria obtained by conducting an industry survey, to benchmark the suitability of the concepts at two stages. Stage one streamlined possible options and stage two ranked the remaining suite of options after further development. The survey results showed that criteria such as 'reduction of risk' 'handling improvement' and 'reliability of operation' were most important. The most viable options, weighted by industry opinion, to remove personnel from areas of high risk are: Boom Lock and tag lines, a camera system with mechanical guidance, and automated bolt installation/fastening for seafastening. The decision analysis framework developed can be applied to similar problems to inform choices subject to multiple criteria.", "venue": "", "year": 2018.0, "author_names": ["Mark Richmond", "T M Balaam", "Paul Causon", "Debora Cevasco", "Mareike Leimeister", "Athanasios Kolios", "Feargal Brennan"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 115837574, "title": "Investigation of occupational noise annoyance in a wind turbine power plant", "abstract": "Noise, emitted by wind turbines, is one of the main health risk factors which has been recently considered in many researches. Noise annoyance is among the most important human responses to noise. The aim of this work was to modeling of annoyance due to noise at workplace coming from wind turbines in workers. All workers of a wind power plant consisted the study sample. The equivalent noise level was measured using a task based method. Moreover, data related to noise annoyance and noise sensitivity were acquired by standardized methods. Based on the results, noise exposure, noise sensitivity, visibility, age, and experience affected noise annoyance. According to path analysis, the most indirect and direct effect on noise annoyance were attributed to noise exposure. Age, sensitivity, and noise exposure were positively associated to annoyance. It can be concluded that there is a significant relationship between age, experience, sensitivity to noise, and exposure to the wind turbine noise with noise annoyance.", "venue": "", "year": 2018.0, "author_names": ["Mohammad Reza Monazzam", "Seyed Abolfazl Zakerian", "Zeinab Kazemi", "M H Ebrahimi", "Maryam Ghaljahi", "Ahmad Mehri", "Farzaneh Afkhaminia", "Milad Abbasi"], "n_citations": 12, "n_key_citations": 0, "score": 0}, {"corpus_id": 86295500, "title": "Wind Energy: The Next Frontier for Ecological Risk Assessment", "abstract": "There is global consensus that renewable sources of energy will benefit human health and the environment, for example, reducing health impacts from particulates in the air and potential adverse environmental impacts from climate change. Significant economic resources are being devoted to new research and incentives for broad deployment of renewable energy technologies. The U.S. Department of Energy released a report in 2008 in which the agency began to assess the feasibility of meeting a goal of 20% U.S. 
electricity provided by wind by 2030 (DOE 2008) The Obama Administration has publicized somewhat different goals, that is, to generate 10 percent of U.S. electricity from all renewable sources in 2012 and 25 percent in 2025 (see http:/www.whitehouse.gov/agenda/energy and environment/ Environmental risk issues are prominent in the challenges for meeting any of these goals (DOE 2008) Adverse ecological effects of some wind energy projects include bird and bat fatalities from collisions with wind turbines and potential decrements in abundance of wildlife through displacement by wind farms. These issues are in the spotlight because of the visible and local nature of the collisions and the direct pathway from the stressor to the \"receptor\" or endpoint. It is hard to ignore a situation where exposure is equivalent to mortality, whether that mortality comes from physical contact with a turbine blade or from the sudden drop in air pressure near the turbine blade. In contrast, downstream impacts of coal usage are generally less newsworthy. Environmental assessments conducted for wind developers or related regulatory purposes currently use inconsistent methods, with some assessments adopting more rigorous analysis and robust metrics than others, and with some attributes of birds or bats more adequately supported by monitoring and off site information than others. Some completed assessments are publicly available as examples, but other assessments and supporting data are held back for proprietary reasons. Many environmental assessments could be improved if they used the formal process of an ecological risk assessment framework that emphasizes problem formulation, a quantitative characterization of exposure, a quantitative characterization of effects, and a risk characterization that involves weight of evidence or a probabilistic endpoint. The use of risk assessment for siting wind energy projects has been advocated by the state of New York (e .g Chautauqua Windpower et al. 2004) The large scale deployment of wind energy has important implications for the field of ecological risk assessment. First, both prospective and retrospective", "venue": "", "year": 2009.0, "author_names": ["Rebecca Ann Efroymson"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 52228817, "title": "Frequency Weighting for the Evaluation of Human Response to Low Frequency Noise based on the Physiological Evidence of the Vestibular System", "abstract": "Several studies were found regarding adverse health effects due to low frequency noise emitted by industrial machines including wind turbines. However, the causal chain between low frequency noise and health effects still remains unclear. Meanwhile, from the physiological viewpoint, low frequency noise stimulate hair cells in the vestibular system, which could cause dizziness, vertigo, headache and nausea. The stimulating process is different from the hearing process in the cochlea, which implies that the A weighting is not appropriate for evaluating the risk of low frequency noise and that an alternative method is required. In this study, we developed a frequency weighting for low frequency noise based on existing physiological evidences of the vestibular system and a psychological experiment on vibration and/or pressure perceptions. The obtained frequency weighting showed steep peak around 40 80Hz, which was distinctly different from A weighting. 
We also derived the dose response relationship between the weighted sound pressure level and the perception of vibration and/or pressure which may be caused in the vestibular system. INCIDENT OCCURRED IN JAPAN ABOUT 40 YEARS AGO In Japan, there was an incident due to low frequency noise along an elevated motorway 40 years ago, where more than half of residences complain of headache or dizziness like wind turbine syndrome. Figure 1 shows the relationships between the prevalence rate and distance from the motorway. At that time, psychological laboratory experiences were conducted to obtain the relationship between low frequency noise and perceptions. The results revealed that subjects perceived vibration and/or pressure due to low frequency noise around 40 80Hz, which would not be caused in the cochlea but in the vestibular system.", "venue": "", "year": "", "author_names": ["Junta Tagusari", "Shouko Satou", "Toshihito Matsui"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 109285133, "title": "Chapter 10 Nanotechnology Safety in the Energy Industry", "abstract": "Nonrenewable sources of energy have been rapidly growing in recent years and research emphasis has been directed to the utilization of renewable energy sources for a cleaner and healthier environment. For over two decades, researchers have investigated many possibilities in terms of renewable energies to generate sustainable energy. Solar cells, fuel cells, photoelectrolysis, supercapacitors, batteries, and wind turbines have the potential to be efficient methods to directly convert one state of energy into another one. In these new energy systems, various types of nanotechnology and their products have been utilized to increase the efficiencies of these energy systems. However, these new developments also bring many uncertainties and risks to human health and the environment. Therefore, the future of nanotechnology depends mainly on public acceptance of the risks associated with the use of nanomaterials and their benefits. Risk assessment of nanomaterials is mainly the basis of formulating guidelines of protecting human health and the environment. This chapter provides information on the current state of nanomaterials used by the energy industry and offers suggestions for continuing our path toward sustainable development in the energy field.", "venue": "", "year": 2013.0, "author_names": ["Ramazan Asmatulu", "Waseem Sabir Khan"], "n_citations": 8, "n_key_citations": 0, "score": 0}, {"corpus_id": 220827647, "title": "Addressing low frequency sound and infrasound from wind turbines", "abstract": "The article addresses the low frequency sound and infrasound from wind turbines. Modem wind turbines produce broadband noise, with the dominant sound source related to turbulence at the trailing edge of the blades. In relation to human perception of the sound, the dominant frequency range is not the low frequency or infrasonic ranges, but low frequency sound will routinely be an audible component of the acoustic impact. Publications by medical professionals indicate that, at the typical setback distances in Ontario. the overall magnitude of the sound pressure levels produced by wind turbine generators does not represent a direct health risk. This includes noise at low and infrasound frequencies. The relationship between the sound level and the prevalence of annoyance is complicated, and is often influenced by other non acoustic factors. 
This situation does not relate exclusively to the low frequency component of the audible noise impact of wind turbines.", "venue": "", "year": 2011.0, "author_names": ["Brian Howe", "Nick McCabe", "Ian Bonsma"], "n_citations": 3, "n_key_citations": 0, "score": 0}]} -{"query": "Decentralized blockchain -based electronic marketplaces", "session_id": 687610089258881, "user_id": 7125732118650811, "candidates": [{"corpus_id": 40137794, "title": "Decentralized blockchain based electronic marketplaces", "abstract": "In a decentralized marketplace, buyers and sellers transact directly, without manipulation by intermediary platforms.", "venue": "Commun. ACM", "year": 2018.0, "author_names": ["Hemang Subramanian"], "n_citations": 141, "n_key_citations": 5, "score": 1}, {"corpus_id": 216011059, "title": "Decentralized blockchain based electronic marketplaces", "abstract": "In a decentralized marketplace, buyers and sellers transact directly, without manipulation by intermediary platforms.", "venue": "", "year": 2017.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 203494215, "title": "Toward a renaissance of cooperatives fostered by Blockchain on electronic marketplaces: a theory driven case study approach", "abstract": "Currently, there is a disparity of value distribution on electronic marketplaces. This is because central platform operators benefit monetarily from collecting and matching electronic information from users. Although those platforms are fueled by user information, only a small share is distributed to their users. However, in turn, disruptive technologies such as the Blockchain technology have the potential to counteract this imbalance of benefits. Through a theory driven case study approach, this study considers principles of cooperative theory as a foundation of Blockchain enabled electronic marketplaces BEEMs Specifically, we show that using the Blockchain technology can foster a renaissance of cooperative principles on electronic marketplaces.", "venue": "Electron. Mark.", "year": 2020.0, "author_names": ["Tobias Kollmann", "Simon Hensellek", "Katharina de Cruppe", "Andre Sirges"], "n_citations": 14, "n_key_citations": 0, "score": 0}, {"corpus_id": 225888063, "title": "Prototyping decentralized electronic blockchain voting system", "abstract": "", "venue": "", "year": 2020.0, "author_names": ["I D Gorbenko", "O O Kuznetsov", "M O Poluianenko", "A S Kiian", "K Ie Lisits'kii", "S O Kandii"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 219321998, "title": "Decentralized Electronic Health Records (DEHR) A Privacy preserving Consortium Blockchain Model for Managing Electronic Health Records", "abstract": "", "venue": "ICT4AWE", "year": 2020.0, "author_names": ["Mahdi Ghadamyari", "Saeed Samet"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 221839809, "title": "Toward a Decentralized Service Marketplace: The Interplay Between Blockchain and Algebraic Service Composition", "abstract": "Service marketplaces are supposed to guarantee an open platform for sellers and customers of cloud services. But their potentials cannot be fully released, due to the widely known shortcomings including but not limited to central power of authority, data privacy, lack of customization, rigid and complex trading procedure. We argue that decentralized marketplaces, although not mature, are the most promising solution to address these issues. 
In this paper, we present our work in progress, which is oriented toward a blockchain enabled marketplace for sharing services at different levels of granularity in a flexible and trustworthy manner.", "venue": "CLOUD", "year": 2020.0, "author_names": ["Chen Qian", "Wenjing Zhu"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 229265522, "title": "Decentralized Marketplace Using Blockchain, Cryptocurrency, and Swarm Technology", "abstract": "Over 1.8 billion people purchased goods online in 2018, and as a result, 2.8 trillion dollars were spent. Companies like Amazon, eBay, and PayPal thrive on being the middleman between sellers and buyers of online goods. Our project uses Blockchain Technology to decentralize the online marketplace and remove the middleman as well as the fees associated with it. To do this, we use smart contracts in the Ethereum Blockchain while maintaining a decentralized database using Swarm for the Webhosting. Finally, in order to phase out current online market systems, the same technology can be used to have a shared inventory, or ledger, between many marketplaces allowing manufacturers, or sellers, to announce their product freely among multiple existing Web site marketplaces concurrently.", "venue": "", "year": 2020.0, "author_names": ["Jorge Ramon Fonseca Cacho", "Binay Dahal", "Yoohwan Kim"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 220961786, "title": "PayPlace: Secure and Flexible Operator Mediated Payments in Blockchain Marketplaces at Scale", "abstract": "Decentralized marketplace applications demand fast, cheap and easy to use cryptocurrency payment mechanisms to facilitate high transaction volumes. The standard solution for off chain payments, state channels, are optimized for frequent transactions between two entities and impose prohibitive liquidity and capital requirements on payment senders for marketplace transactions. We propose PayPlace, a scalable off chain protocol for payments between consumers and sellers. Using PayPlace, consumers establish a virtual unidirectional payment channel with an intermediary operator to pay for their transactions. Unlike state channels, however, the PayPlace operator can reference the custodial funds accrued off chain in these channels to in turn make tamper proof off chain payments to merchants, without locking up corresponding capital in channels with merchants. Our design ensures that new payments made to merchants are guaranteed to be safe once notarized and provably mitigates well known drawbacks in previous constructions like the data availability attack and ensures that neither consumers nor merchants need to be online to ensure continued safety of their notarized funds. We show that the on chain monetary and computational costs for PayPlace is O(1) in the number of payment transactions processed, and is near constant in other parameters in most scenarios. 
PayPlace can hence scale the payment throughput for large scale marketplaces at no marginal cost and is orders of magnitude cheaper than the state of art solution for non pairwise off chain payments, Zero Knowledge Rollups.", "venue": "", "year": 2020.0, "author_names": ["Madhumitha Harishankar", "Dimitrios-Georgios Akestoridis", "Sriram Venkateswaran Iyer", "Aron Laszka", "Carlee Joe-Wong", "Patrick Tague"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 21654802, "title": "A framework for secure and decentralized sharing of medical imaging data via blockchain consensus", "abstract": "The electronic sharing of medical imaging data is an important element of modern healthcare systems, but current infrastructure for cross site image transfer depends on trust in third party intermediaries. In this work, we examine the blockchain concept, which enables parties to establish consensus without relying on a central authority. We develop a framework for cross domain image sharing that uses a blockchain as a distributed data store to establish a ledger of radiological studies and patient defined access permissions. The blockchain framework is shown to eliminate third party access to protected health information, satisfy many criteria of an interoperable health system, and readily generalize to domains beyond medical imaging. Relative drawbacks of the framework include the complexity of the privacy and security models and an unclear regulatory environment. Ultimately, the large scale feasibility of such an approach remains to be demonstrated and will depend on a number of factors which we discuss in detail.", "venue": "Health Informatics J.", "year": 2019.0, "author_names": ["Vishal Patel"], "n_citations": 130, "n_key_citations": 4, "score": 0}, {"corpus_id": 221280553, "title": "A Novel Decentralized Blockchain Networks Model with High Concurrenc(Blockchain Networks Model with High Concurrency)", "abstract": "Blockchain is very important in finance field and electronic business field, so many researchers are attracted to study the technologies of blockchain. Since the transactions in blockchain takes much time, and they make the blockchain poor efficiency, business processes across organizations require the transactions as soon as possible. Concurrency is attracted much attention and is very important in blockchain field. In this paper, a novel decentralized blockchain network model with high concurrency is proposed. First, the idea of the proposed model is stated. Second, the high concurrency blockchain network model is proposed. Third, the corresponding algorithms are designed according to the proposed model. Furthermore, the experiment is conduced and the results show that proposed model works well.", "venue": "2019 IEEE 14th International Conference on Intelligent Systems and Knowledge Engineering (ISKE)", "year": 2019.0, "author_names": ["Linghao Zhang", "Bingde Lu", "Tao Zhao", "Hongjun Wang"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Minimizing Contrast Media Dose in CT Pulmonary Angiography With High-Pitch Technique", "session_id": 7797863833282883, "user_id": 6131262288439145, "candidates": [{"corpus_id": 218767164, "title": "Minimizing contrast media dose in CT pulmonary angiography with high pitch technique", "abstract": "Objectives: To perform CT pulmonary angiography (CTPA) using a minimal amount of iodinated contrast media. 
Methods: 47 patients (25 females) with mean age 69 years (range 41 82 years) referred for contrast enhanced chest CT were prospectively included in this Phase IV clinical drug trial. All participants underwent a study specific CTPA in addition to the chest CT. The participants received 80 mg I/kg body weight Iohexol contrast media using a preparatory saline bolus, a dual flow contrast/saline bolus and a saline flush, and a scanner protocol with 80 kVp dual source high pitch mode. Three readers independently assessed the image quality on the 3 point scale non diagnostic, adequate or good excellent image quality. Additionally, the pulmonary arterial contrast opacification was measured. Results: On average, the patients received 16.8 ml Iohexol 350 mg I/mL (range 12 20 ml) Mean patient weight was 71 kg (range 50 85 kg) Identically for all readers, pulmonary embolism (PE) was detected in 1/47 participants. The median number of examinations visually scored concerning pulmonary embolism as good excellent was 47/47 (range 44 47) adequate 0/47 (0 3) and non diagnostic 0/47 (range 0 0) The proportion adequate or better examinations was for all readers 47/47, 100% [95% confidence interval 92 100% The mean attenuation standard deviation in the pulmonary trunk was 325 72 Hounsfield unit (range 165 531 Hounsfield unit) Conclusions: Diagnostic CTPA with 17 ml contrast media is possible in non obese patients using low kVp, high pitch and carefully designed contrast media administration. Advances in knowledge: By combining several procedures in a CTPA protocol, the contrast media dose can be minimized.", "venue": "The British journal of radiology", "year": 2020.0, "author_names": ["Hanan Alobeidi", "Muhammed Alshamari", "Jonas Widell", "Tomas Eriksson", "Mats Liden"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 146809926, "title": "Ultra low dose contrast CT pulmonary angiography in oncology patients using a high pitch helical dual source technology.", "abstract": "PURPOSE We aimed to determine if the image quality and vascular enhancement are preserved in computed tomography pulmonary angiography (CTPA) studies performed with ultra low contrast and optimized radiation dose using high pitch helical mode of a second generation dual source scanner. METHODS We retrospectively evaluated oncology patients who had CTPA on a 128 slice dual source scanner, with a high pitch helical mode (3.0) following injection of 30 mL of Ioversal at 4 mL/s with body mass index (BMI) dependent tube potential (80 120 kVp) and current (130 150 mAs) Attenuation, noise, and signal to noise ratio (SNR) were measured in multiple pulmonary arteries. Three independent readers graded the images on a 5 point Likert scale for central vascular enhancement (CVE) peripheral vascular enhancement (PVE) and overall quality. RESULTS There were 50 males and 101 females in our study. BMI ranged from 13 to 38 kg/m2 (22.8+ 4.4 kg/m2) Pulmonary embolism was present in 29 patients (18.9% Contrast enhancement and SNR were excellent in all the pulmonary arteries (395.3+ 131.1 and 18.3+ 5.7, respectively) Image quality was considered excellent by all the readers, with average reader scores near the highest possible score of 5.0 (CVE, 4.83+ 0.48; PVE, 4.68+ 0.65; noise/quality, 4.78+ 0.47) The average radiation dose length product (DLP) was 161+ 60 mGy.cm. 
CONCLUSION Using a helical high pitch acquisition technique, CTPA images of excellent diagnostic quality, including visualization of peripheral segmental/sub segmental branches can be obtained using an ultra low dose of iodinated contrast and low radiation dose.", "venue": "Diagnostic and interventional radiology", "year": 2019.0, "author_names": ["Prabhakar Rajiah", "Les Ciancibello", "Ronald D Novak", "Jennifer Sposato", "Luis A Landeras", "Robert C Gilkeson"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 6019051, "title": "Submillisievert standard pitch CT pulmonary angiography with ultra low dose contrast media administration: A comparison to standard CT imaging", "abstract": "Objectives To evaluate the image quality and radiation dose of submillisievert standard pitch CT pulmonary angiography (CTPA) with ultra low dose contrast media administration in comparison to standard CTPA. Materials and methods Hundred patients (56 females, 44 males, mean age 69.6+ 15.4 years; median BMI: 26.6, IQR: 5.9) with suspected pulmonary embolism were examined with two different protocols (n 50 each, group A: 80 kVp, ref. mAs 115, 25 ml of contrast medium; group B: 100 kVp, ref. mAs 150, 60 ml of contrast medium) using a dual source CT equipped with automated exposure control. Objective and subjective image qualities, radiation exposure as well as the frequency of pulmonary embolism were evaluated. Results There was no significant difference in subjective image quality scores between two groups regarding pulmonary arteries (p 0.776) whereby the interobserver agreement was excellent (group A: k 0.9; group B k 1.0) Objective image analysis revealed that signal intensities (SI) signal to noise ratio (SNR) and contrast to noise ratio (CNR) of the pulmonary arteries were equal or significantly higher in group B. There was no significant difference in the frequency of pulmonary embolism (p 0.65) Using the low dose and low contrast media protocol resulted in a radiation dose reduction by 71.8% (2.4 vs. 0.7 mSv; p<0.001) Conclusions This 80 kVp standard pitch CTPA protocol with 25 ml contrast agent volume can obtain sufficient image quality to exclude or diagnose pulmonary emboli while reducing radiation dose by approximately 71%", "venue": "PloS one", "year": 2017.0, "author_names": ["Saravanabavaan Suntharalingam", "Christian Mikat", "Elena Stenzel", "Youssef Erfanian", "Axel Wetter", "Thomas Wilfried Schlosser", "Michael Forsting", "Kai Nassenstein"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 963890, "title": "Feasibility of a Single Contrast Bolus High Pitch Pulmonary CT Angiography Protocol Followed by Low Dose Retrospectively ECG Gated Cardiac CT in Patients with Suspected Pulmonary Embolism.", "abstract": "INTRODUCTION To prospectively evaluate the feasibility of single contrast bolus high pitch CT pulmonary angiography (CTPA) subsequently followed by low dose retrospectively ECG gated cardiac CT (4D cCT) in patients with suspected pulmonary embolism (PE) to accurately evaluate right ventricular (RV) function. MATERIALS AND METHODS 62 patients (33 female, age 65.1 17.5 years) underwent high pitch CTPA examination with 80cc of iodinated contrast material. 5 s after the end of the high pitch CTPA study, a low dose retrospectively ECG gated cardiac CT examination was automatically started. The volume CT dose index (CTDI vol) and dose length product (DLP) were recorded in all patients and the effective dose was calculated. 
For the assessment of image quality, attenuation was measured as Hounsfield units (HUs) within various regions of interest (ROIs) These ROIs were used to calculate the signal to noise ratio (SNR) and contrast to noise ratio (CNR) Subjective image quality was assessed using a five point Likert scale. On 4D cCT, the ejection fraction of both ventricles (RVEF, LVEF) as well as the ratio of RVEF and LVEF (RVEF/LVEF) was assessed. The statistical difference of all parameters between the PE and non PE group was calculated. RESULTS The mean effective radiation dose was 4.22 2.05 mSv. Attenuation measurements on CTPA showed the highest attenuation values in the main pulmonary artery (442.01 187.64) On 4D cCT attenuation values were highest in the descending aorta (560.59 208.81) The CNR and SNR values on CTPA were highest within the main pulmonary artery (CNR 12.43 4.57; SNR 15.14 4.90) On 4D cCT images, the highest SNR and CNR could be measured in the descending aorta (CNR 10.26 5.57; SNR 10.86 5.17) The mean LVEF was 60.73 14.65 and the mean RVEF was 44.90 9.54 The mean RVEF/LVEF was 0.79 0.29. There was no significant difference between the PE and non PE group for either of the parameters. CONCLUSION The investigated combined CTPA and 4D cCT protocol is feasible using a single contrast bolus and allows the evaluation of RV function in patients with suspected PE. Further studies have to evaluate the additional value of this protocol regarding risk stratification in patients with PE. KEY POINTS High pitch CTPA is fast enough to leave sufficient contrast material within the heart that can be used for an additional low dose functional cardiac CT examination. The tube current of the evaluated 4D cCT is reduced over the entire cardiac cycle without any full dose peak. Low dose cardiac CT subsequently performed after high pitch CTPA allows for detailed analysis of RV function. CITATION FORMAT Schafer JC, Haubenreisser H, Meyer M et al. Feasibility of a Single Contrast Bolus High Pitch Pulmonary CT Angiography Protocol Followed by Low Dose Retrospectively ECG Gated Cardiac CT in Patients with Suspected Pulmonary Embolism. Fortschr Rontgenstr 2018; 190: 542 550.", "venue": "RoFo Fortschritte auf dem Gebiete der Rontgenstrahlen und der Nuklearmedizin", "year": 2018.0, "author_names": ["Julia Schafer", "Holger Haubenreisser", "Mathias Meyer", "Joachim Gruttner", "Thomas Walter", "Martin Borggrefe", "Joseph Uwe Schoepf", "John Nance", "Stefan O Schonberg", "Thomas Henzler"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 13682672, "title": "[Application of Low Concentration Contrast Agent Combined Double Low Dose in CT Pulmonary Angiography for Pulmonary Embolism]", "abstract": "OBJECTIVE To investigate the feasibility of low concentration contrast agent combined double low dose in CT pulmonary angiography. METHODS 60 patients with clinically suspected pulmonary embolism examed by CT pulmonary angiography (CTPA) were divided into two groups (experimental group: n=30,80 kV, 15 mL,320 mg I/mL;control group: n=30,120 kV,50 mL,370 mg I/mL) The average CT value of main right and left pulmonary arteries,lobar arteries was calculated. Imaging post processing techniques included curved plannar reconstruction (CPR),volume rendering (VR) and maximal intensity projection (MIP) The artifact of the remaining contract in the superior vena cava and overall quality of the image were observed and analyzed by two senior doctors who were double blinded. 
RESULTS All patients in two groups completed CTPA successfully. The image qualities of two groups satisfy clinical diagnostic requirements and no difference of the image qualities was observed between two groups (P>0.05) The evaluation of venous pollution in experimental group was better than that of control group (P<0.01). No difference of CT values were observed between two groups [experimental group (423.2+ 89.4) HU,control group (465.7+ 85.6) HU](P>0.05) The SNR and CNR in experimental group were lower than those in control group (P<0.01 both). The CT dose index volume (CTDIvol),dose length product (DLP) and size specific dose estimates (SSDE) in experimental group were significantly lower than those in control group (P<0.01 all) CONCLUSION The low concentration contrast agent combined double low dose in CT pulmonary angiography satisfies clinical diagnostic requirements. It has good clinical value for it could reduce venous pollution,iodine contrast agent and radiation exposure.", "venue": "Sichuan da xue xue bao. Yi xue ban Journal of Sichuan University. Medical science edition", "year": 2018.0, "author_names": ["Lei Li", "Fei Zhao", "Yu-Mei Pu", "Kai Zhang", "Jin Pu", "Yu-ming Li", "Wan-lin Peng", "Jin-ge Zhang", "Chun-chao Xia", "Zhen-lin Li"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 31933844, "title": "Application of High pitch CT Pulmonary Angiography at 70 kV Tube Voltage with 15 ml Contrast Medium Using Third generation Dual source CT.", "abstract": "Objective To assess the application of high pitch CT pulmonary angiography (CTPA) at 70 kV tube voltage with 15 ml contrast medium using third generation dual source CT. Methods A total of 70 patients with clinically suspected pulmonary embolism were randomly divided into two groups: group A (n=35) underwent CTPA on conventional scanning mode (120 kV,80 ml contrast medium);and group B (n=35) underwent CTPA on high pitch scanning mode at 70 kV tube voltage with 15 ml contrast medium. The CT values and standard deviations of the main pulmonary artery,apical segment of right upper pulmonary lobe (S1),and posterior basal segment of the right lower pulmonary lobe (S10),anterior thoracic air,and back muscles were measured. The signal to noise ratio (SNR),contrast to noise ratio (CNR),and effective dose (ED) were calculated. The overall image quality was evaluated by two blinded radiologists. The quality image was compared using non parametric test on two independent samples. The potential differences in CT value,SNR,CNR,and ED were analyzed using the independent sample t test. Results The CT values of main pulmonary artery (300.62+ 77.54)HU vs.(332.80+ 102.80)HU;t= 1.53,P=0.13],S1 (361.72+ 84.92)HU vs. (325.37+ 87.86)HU;t=1.81,P=0.08],and S10 (359.54+ 89.61)HU vs. (318.26+ 87.19)HU;t=2.00,P=0.05] of right lung were not significantly different between group A and group B. The CNR of S1 (22.81+ 6.05 vs. 19.80+ 6.60;t=2.05,P=0.04) and S10 (22.65+ 6.37 vs. 19.28+ 6.63;t=2.23,P=0.03) of right lung in group A was significantly higher than in group B. The SNR of main pulmonary artery,S1,and S10 of right lung were not significantly different between group A and B. 
The subjective diagnostic quality values of group A and B were 1 (1,1) and 1 (1,1),respectively (Z= 0.08,P=0.93) The subjective diagnostic quality values evaluated by two radiologists showed excellent consistency(k=0.87,P=0.01) The mean ED was 79% lower in group B (0.92+ 0.23)mSv] than in group A (4.33+ 1.80) mSv] (t=11.72,P=0.00).Conclusion Application of high pitch mode in CTPA at 70 kV with 15 ml contrast medium using third generation dual source CT can remarkably reduce radiation dose without affecting image quality.", "venue": "Zhongguo yi xue ke xue yuan xue bao. Acta Academiae Medicinae Sinicae", "year": 2017.0, "author_names": ["Qianni Du", "Xin Sui", "Wei Song", "Lan Song", "Xiao-li Xu"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 8840393, "title": "CT pulmonary angiography: simultaneous low pitch dual source acquisition mode with 70 kVp and 40 ml of contrast medium and comparison with high pitch spiral dual source acquisition with automated tube potential selection.", "abstract": "OBJECTIVE To assess the feasibility of a 70 kVp CT pulmonary angiography (CTPA) protocol using simultaneous dual source (SimDS) acquisition mode with 40 ml of contrast medium (CM) and comparison with a high pitch spiral dual source (SpiralDS) acquisition protocol with automated tube potential selection (ATPS) METHODS Following the introduction of a new 70 kVp/40 ml SimDS CTPA protocol in December 2014 for all patients with a body mass index (BMI) below 35 kg m( 2) the first 35 patients were retrospectively included in this study and assigned to Group A (BMI: 27 4 kg m( 2) age: 66 15 years) The last 35 patients with a BMI below 35 kg m( 2) who had received SpiralDS CTPA with ATPS were included for comparison (Group B) (70 ml CM; BMI: 27 4 kg m( 2) age: 68 16 years) Subjective image quality (image quality) was assessed by two radiologists (from 1, non diagnostic, to 4, excellent) Signal to noise ratio (SNR) contrast to noise ratio (CNR) volumetric CT dose index (CTDIvol) dose length product (DLP) and effective dose were assessed. RESULTS All examinations were of diagnostic image quality. Subjective image quality, SNR and CNR were comparable between Groups A and B (3.7 0.6 vs 3.7 0.5, 14.6 6.0 vs 13.9 3.7 and 12.4 5.7 vs 11.6 3.3, respectively; p 0.05) CTDIvol, DLP and effective dose were significantly lower in Group A than in Group B (4.5 1.6 vs 7.5 2.1 mGy, 143.3 44.8 vs 278.3 79.44 mGy cm and 2.0 0.6 vs 3.9 1.1 mSv, respectively; p 0.05) CONCLUSION 70 kVp SimDS CTPA with 40 ml of CM is feasible and provides diagnostic image quality, while radiation dose and CM can be reduced by almost 50% and 40% respectively, compared with a SpiralDS CTPA protocol with ATPS. 
ADVANCES IN KNOWLEDGE 70 kVp SimDS CTPA with 40 ml of CM is feasible in patients with a BMI up to 35 kg m( 2) and can help reduce radiation exposure and CM in these patients.", "venue": "The British journal of radiology", "year": 2016.0, "author_names": ["Johannes Boos", "Patric Kropil", "Rotem Shlomo Lanzman", "Joel Aissa", "Christoph Schleich", "Philipp Heusch", "Lino M Sawicki", "Gerald Antoch", "Christoph Thomas"], "n_citations": 23, "n_key_citations": 1, "score": 0}, {"corpus_id": 55902042, "title": "Prospective triggered high pitch spiral versus sequential dual source CT coronary angiography: comparison of image quality and radiation dose", "abstract": "Background: Prospectively electrocardiography (ECG) triggered high pitch spiral coronary computed tomography angiography (CCTA) is a unique scan mode for dual source CT (DSCT) Our reports aim to compare image quality and radiation dose of CCTA using high pitch spiral or sequential acquisition mode in patients with low and stable heart rates. Materials and Methods: Patients with low and stable heart rates (HR) (HR 70 beats per minute [bpm] heart rate variability [HRV] 10 bpm) were randomly assigned to high pitch spiral mode (group A; n 80) or sequential acquisition mode (group B; n 80) Image quality scores, image noise, effective radiation dose and influencing factors on image quality were assessed. Results: Mean image quality scores were 1.51 0.32 and 1.70 0.38 for groups A and B (P 0.05) respectively. Image noises of the two groups were 19.05+ 4.70 Hu and 27.21+ 8.88 Hu (P 0.05) Contrast media cost in group A was lower than group B (P 0.05) No statistical difference was found in the rate of diagnostic patients between the two groups (P 0.416) The estimated radiation dose of group A was 26.0% reduced compared with group B (0.74 0.34 mSv vs. 
1.00 0.48 mSv, P 0.05) Conclusion: In patients with regular and low heart rates, the prospectively high pitch spiral acquisition mode can reduce radiation dose and contrast media cost while maintaining image quality compared with the prospectively sequential mode.", "venue": "", "year": 2018.0, "author_names": ["Yingying Zhuang", "Wei Huang", "Yuzhen Shi", "Genji Bo", "Daoyan Lu", "Junxia Zhang", "David Kong", "B Wang"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 39874558, "title": "Optimization of kV Selection on Third generation High pitch Dual source Coronary CT Angiography Using Ultra low Contrast Media Protocols in Patients with Body Mass Index between 20 30 kg/m2 under Automatic Tube Voltage Selection.", "abstract": "Objective To investigate the application of automatic tube voltage selection (CARE kV)coronary CT angiography (CCTA)using ultra low contrast media (CM)protocols in patients with body mass index (BMI)between 20 kg/m2 and 30 kg/m2 on third generation dual source CT (DSCT) Methods We prospectively included 134 consecutive patients with BMI between 20 kg/m2 and 30 kg/m2who underwent CARE kV prospective high pitch CCTA on third generation DSCT using the ultra low CM protocols and divided them into two groups according to the CARE kV results:70 kV group(n=91):65 patients with normal BMI(20 kg/m2<=BMI<=25 kg/m2)and 26 patients with high BMI(25 kg/m20.75 gCODremoved gCODadded 1) and a more consistent, PPB dominated >50% product, with a higher crude protein product >0.6 gCP gVSS 1) The microalgae tests achieved a better removal outcome (up to 91%COD, 91% NH4 N, 73%PO4 P) but with poorer quality product, and <30% abundance as algae.", "venue": "Bioresource technology", "year": 2018.0, "author_names": ["Tim Huelsen", "Kent Hsieh", "Yang Lu", "Stephan Tait", "Damien John Batstone"], "n_citations": 75, "n_key_citations": 2, "score": 0}, {"corpus_id": 201020552, "title": "[Advances in biological wastewater treatment technology of microalgae.", "abstract": "Microalgae has the advantages of high growth rates, high cellular lipid productivity and capability to bio sequester carbon dioxide, and thus being widely studied as a new generation of biomass energy. The sustained investment in freshwater resources and nutrients during its growth period, however, is a major obstacle to large scale cultivation. Combining a microalgae culture system with wastewater treatment is an economically viable wastewater resource utilization strategy. Based on the utilization mechanism of nutrients such as nitrogen and phosphorus during the growth of microalgae, we reviewed the application of microalgae in the biological wastewater treatment. The removal/inhibition ability of organic and inorganic compounds, heavy metals and pathogens were analyzed. The effects of environmental factors including the initial nutrient concentration, light, temperature, pH, salinity and gas exchange on the growth and metabolism of microalgae were investigated. 
In addition, combined with the problems faced by the large scale application of microalgae, the application prospect and development direction of microalgae wastewater treatment were prospected, with the aim to provide references for the construction and management of water ecosystems.", "venue": "Ying yong sheng tai xue bao The journal of applied ecology", "year": 2019.0, "author_names": ["Yu Pan", "Hua Wang", "Zuwen Liu", "Haibing Yan"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 200057829, "title": "Carbon Dioxide Biosequestration and Wastewater Treatment Using Microalgae", "abstract": "Algae have been studied for many years and recently microalgae have become a hot topic thanks to their multiple uses. This chapter studies the application of microalgae in biosequestration for carbon dioxide (CO2) capture. CO2 biosequestration is an important approach to tackle climate change. The use of algae to assimilate CO2 has multiple advantages: mitigation of emission risks at point sources (e.g. power plants) and no fertile soil requirements. Still, the application of microalgae cultivation techniques for CO2 biosequestration in situ on industrial sites faces some challenges, such as temperature management, CO2 storage and scalability. The second part of this chapter explores the application of microalgae strains in wastewater treatment technologies for the production of biofuels. The development of cost effective and environmentally friendly wastewater treatment technologies is an important research area on the road toward sustainable production processes. Algae can be used to control the chemical oxygen demand and the content of ammonia and total phosphorus. A high diversity exists among natural microalgae; therefore, strain screening techniques and the adoption of biotechnological tools for the development of commercial strains are an important research area. Not only the strain type is important, the development of stable microbial ecologies with other algae strain types and with bacteria or fungi is also essential to develop stable growth consortia.", "venue": "", "year": 2019.0, "author_names": ["Simona Francesca Consoletti", "Pepijn Prinsen"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 182123673, "title": "Microalgae: A Biorefinary Approach to the Treatment of Aquaculture Wastewater", "abstract": "Aquaculture food is one of the fastest growing food sectors in the world. To meet global aquaculture demand world have adopted cultivation of fish trough intensive system. In intensive aquaculture cultivation systems, a large amount of fresh water is used and concurrently a huge amount of aquaculture wastewater is generated. Aquaculture wastewater contains nitrates, nitrites, and phosphate, among other substances. To prevent eutrophication, it is very important to treat wastewater before it is released into a water body. To improve the economic prospects of the aquaculture industry, it is vital to treat aquaculture wastewater and reuse it. Phycoremediation is an emerging technology in which algae utilize nutrients available in the wastewater to produce biomass, which is rich in protein, lipid, carbohydrates, and other value added products. The cultivation of algae in aquaculture wastewater has several advantages. The nutrient removal efficiency and the production of biomass and various metabolites (proteins, lipids, and carbohydrates) have strain specific and adaptability towards aquaculture wastewater. 
To utilize this integrated process, not only close the loop in the aquaculture industry but also make economical, sustainable and feasible.", "venue": "", "year": 2019.0, "author_names": ["Faiz Ahmad Ansari", "Sanjay Kumar Gupta", "Faizal Bux"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 4910358, "title": "Binary culture of microalgae as an integrated approach for enhanced biomass and metabolites productivity, wastewater treatment, and bioflocculation.", "abstract": "Ecological studies of microalgae have revealed their potential to co exist in the natural environment. It provides an evidence of the symbiotic relationship of microalgae with other microorganisms. The symbiosis potential of microalgae is inherited with distinct advantages, providing a venue for their scale up applications. The deployment of large scale microalgae applications is limited due to the technical challenges such as slow growth rate, low metabolites yield, and high risk of biomass contamination by unwanted bacteria. However, these challenges can be overcome by exploring symbiotic potential of microalgae. In a symbiotic system, photosynthetic microalgae co exist with bacteria, fungi, as well as heterotrophic microalgae. In this consortium, they can exchange nutrients and metabolites, transfer gene, and interact with each other through complex metabolic mechanism. Microalgae in this system, termed as a binary culture, are reported to exhibit high growth rate, enhanced bio flocculation, and biochemical productivity without experiencing contamination. Binary culture also offers interesting applications in other biotechnological processes including bioremediation, wastewater treatment, and production of high value metabolites. The focus of the study is to provide a perspective to enhance the understanding about microalgae binary culture. In this review, the mechanism of binary culture, its potential, and limitations are briefly discussed. A number of queries are evolved through this study, which needs to be answered by executing future research to assess the real potential of binary culture.", "venue": "Chemosphere", "year": 2018.0, "author_names": ["Naim Rashid", "Won-Kun Park", "Thinesh Selvaratnam"], "n_citations": 29, "n_key_citations": 0, "score": 0}, {"corpus_id": 199097567, "title": "Cyanobacteria/Microalgae for Distillery Wastewater Treatment Past, Present and the Future", "abstract": "Abstract Distilleries are considered one of the major polluting industries due to the presence of high organics and recalcitrant compounds in the effluent. The distillery wastewater (DWW) has been treated via physicochemical, biological, and a combination of different processes. This chapter provides an overview of various processes adopted for DWW treatment, their advantages and disadvantages, reactor configurations and the operating conditions. Subsequently, the phycoremediation of DWW is discussed in detail along with the mechanism of organics and recalcitrant compounds removal, optimal growth conditions of algae and various reactor configurations for algal growth. 
Finally, the research challenges and future directions toward the sustainable treatment of DWW are addressed.", "venue": "", "year": 2019.0, "author_names": ["Inigo Johnson", "Mohamed Ali", "Mathava Kumar"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 52074421, "title": "Two step process: Enhanced strategy for wastewater treatment using microalgae.", "abstract": "Microalgae possess many advantages, but the lack of a suitable strategy to simultaneously facilitate their low cost cultivation and high value productions limits their commercial applications. In this study, two microalgae strains (RT_C and RT_F) isolated from a municipal wastewater treatment plant were used to establish a two step wastewater treatment process. During step 1, RT_C was cultivated in composite wastewater due to its high tolerance of sludge centrate; followed by step 2, in which the supernatant generated from RT_C culture was used to cultivate RT_F. The NH4+ N, PO43 P, and COD in the wastewater were removed almost completely using this strategy. Moreover, the majority of the metal ions in the wastewater were absorbed by RT_C during step 1, and thus the powdered RT_F only contained low levels of toxic metals. Our results demonstrate that this two step process is effective for removing pollutants and while generating a powder sufficiently clean for extracting valuable compounds.", "venue": "Bioresource technology", "year": 2018.0, "author_names": ["Jichang Han", "Laurenz Thomsen", "Kehou Pan", "Claudia Thomsen"], "n_citations": 8, "n_key_citations": 0, "score": 0}]} -{"query": "Smart Beta factor investing", "session_id": 3946984514656817, "user_id": 3793042999808953, "candidates": [{"corpus_id": 203229320, "title": "Index Fund Management: A Practical Guide to Smart Beta, Factor Investing, and Risk Premia", "abstract": "", "venue": "", "year": 2019.0, "author_names": ["Fadi M Zaher"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 158882715, "title": "Smart Beta Factor Investing", "abstract": "In an attempt to bridge the gap between active and passive investing, Smart Beta strategies have become a popular alternative for investors given their systematic, rules based approach to portfolio construction and historical tendency to capture market inefficiencies. In this thesis, we examine the performance of Smart Beta strategies versus the S&P 500 and the Euro Stoxx 600 index for time periods 1994 2016 and 2002 2016 respectively. The strategies analyzed are Value, Size, Sharpe Momentum, Quality and Low Volatility. Given that factor investing and various rules based strategies have previously been studied in academia, we fill the gap in the literature by providing our own variables to each factor as well as testing their performance across two geographical regions. The empirical analysis conducted in this thesis indicates that nine out of ten Smart Beta portfolios outperform their respective benchmark index on a risk adjusted basis. We therefore conclude that Smart Beta strategies can serve as a superior alternative to passively investing in a cap weighted index, which questions if markets are truly efficient from an asset allocation standpoint. 
(Less)", "venue": "", "year": 2017.0, "author_names": ["Alex Mikaelsson", "M Nilsson"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 229487754, "title": "Factor Investing and Risk Management: Is Smart Beta Diversification Smart?", "abstract": "Abstract In this paper, we investigate the diversification benefits associated with factor investing in U.S. stock markets, using the dummy variable framework for asset allocation. We find that beta based investment strategies are primarily driven by beta specific sources of return variation. At the same time, both betas and characteristics explain the variance of characteristic based strategies, indicating that beta diversification is a more effective risk management tool than characteristic diversification. We also find that the correlations between the pure premiums of the 14 factor based strategies considered are small, which suggests that diversification across smart beta funds is beneficial. Monte Carlo simulations confirm these results.", "venue": "Finance Research Letters", "year": 2021.0, "author_names": ["Gregory Nazaire", "Mariana Pacurar", "Oumar Sy"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 213701189, "title": "Factor Investing: Challenging the Market Index with Smart Beta Products", "abstract": "In this chapter, we address factor research and factor investing from the perspective of academics, practitioners and investors. We will use terms like smart beta, strategic beta, risk premia investing, style investing and factor investing interchangeably. They all mean the same thing: a systematic process where securities (equities, bonds, currencies, commodities) are grouped into buckets with similar characteristics like small or large market capitalization (the size factor) high or low book to market ratio (the value factor) and positive or negative historical prices (the momentum factor) to name a few.", "venue": "", "year": 2019.0, "author_names": ["Elisabetta Basilico", "Tommi Johnsen"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213406469, "title": "Practical Applications of Trade Off in Multifactor Smart Beta Investing: Factor Premium and Implementation Cost", "abstract": "Practical Applications Summary In Trade Off in Multifactor Smart Beta Investing: Factor Premium and Implementation Cost, from the 2019 Quantitative Special Issue of The Journal of Portfolio Management, Feifei Li and Joseph (Yoseop) Shim (both of Research Affiliates) investigate the effects of implementation costs on multifactor portfolio construction. They examine the performance impact of rebalancing multifactor portfolios by looking at portfolio characteristics such as volume, tilt, turnover, and turnover concentration. They find that multifactor portfolios that included all six style factors under consideration value, low beta, profitability, investment, momentum, and size yielded the highest information ratios, both before and after trading costs, with a very small negative impact on Sharpe ratios compared with multifactor portfolios composed of fewer factors. The authors also determine that a uniform concentration level of 25% for all factors provides the greatest performance in the presence of implementation costs. 
Overall, they suggest investors pursue multifactor portfolios composed of the top 25% of stocks in each of the six style factors.", "venue": "Practical Application", "year": 2020.0, "author_names": ["Feifei Li", "Joseph Shim"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 158687345, "title": "Practical Applications of Smart Beta Is the Gateway Drug to Risk Factor Investing", "abstract": "Today's most common strategies using risk factor approaches are found on the opposite ends of the complexity spectrum. On one end lie simple, long only equity strategies based on factor tilts, such as low volatility, value, and momentum. Smart beta, at its most basic, is a good example. At the other end are the more complex, multi asset class, long/short, risk premia approaches that often employ leverage and derivatives. In Smart Beta Is the Gateway Drug to Risk Factor Investing, published in the Special Issue 2017 of The Journal of Portfolio Management, Eugene Podkaminer of Callan Associates establishes the two poles and explains the many opportunities that exist in the middle. As risk factors become a more common feature of both portfolio attribution and portfolio construction, the space between these two poles is just starting to be explored. Today's simple factor smart beta portfolios can be extended across multiple asset classes, and coupled with shorting, may constitute a robust and diversified risk.", "venue": "", "year": 2018.0, "author_names": ["Howard Moore"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 169661780, "title": "Smart beta Strategy and Long Short Factor Investing in Style Rotation", "abstract": "Abstract According to the literature that an outperforming style changes due to time varying style premiums, I investigate the dynamic style allocation strategies with Korean stocks under regime switching. I find that value, size, and low volatility are the best styles in the entire sample period. However, low beta and low volatility styles produce superior returns in event regimes, and value and dividend styles outperform in normal regimes. As a result, regimedependent dynamic style allocations outperform the stock market, static equivalent strategies, and all single style portfolios, both before and after transaction costs. These outperformances are consistent in in sample and out of sample prediction analysis.", "venue": "", "year": 2018.0, "author_names": ["Ryumi Kim"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 697918, "title": "Smart Beta is the Gateway Drug to Risk Factor Investing", "abstract": "The most common strategies using risk factor approaches are found on the opposite ends of the complexity spectrum: simple, long only equity factor strategies (i.e. smart beta) and multiasset class long/short risk premia approaches that often employ leverage and derivatives. The space between these two poles is just starting to be explored, as risk factors become a more common feature of both portfolio attribution and portfolio construction. 
Today's simple factor smart beta portfolios can be extended across multiple asset classes, coupled with shorting, in order to approach a diluted risk premia approach.", "venue": "The Journal of Portfolio Management", "year": 2017.0, "author_names": ["Eugene Podkaminer"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 168642126, "title": "Practical Applications of Smart Beta Is the Gateway Drug to Risk Factor Investing", "abstract": "Practical Applications Summary Today's most common strategies using risk factor approaches are found on the opposite ends of the complexity spectrum. On one end lie simple, long only equity strategies based on factor tilts, such as low volatility, value, and momentum. Smart beta, at its most basic, is a good example. At the other end are the more complex, multi asset class, long/short, risk premia approaches that often employ leverage and derivatives. In Smart Beta Is the Gateway Drug to Risk Factor Investing, published in the Special Issue 2017 of The Journal of Portfolio Management, Eugene Podkaminer of Callan Associates establishes the two poles and explains the many opportunities that exist in the middle. As risk factors become a more common feature of both portfolio attribution and portfolio construction, the space between these two poles is just starting to be explored. Today's simple factor smart beta portfolios can be extended across multiple asset classes, and coupled with shorting, may constitute a robust and diversified risk premia approach.", "venue": "Practical Application", "year": 2017.0, "author_names": ["Eugene Podkaminer"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 169322506, "title": "Smart Beta Exchange Traded Funds and Factor Investing", "abstract": "", "venue": "", "year": 2018.0, "author_names": ["Phillip A Braun"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Analog Mixed-Signal RF Circuits for Complex Signal Processing", "session_id": 5276357460049100, "user_id": 3399053105882287, "candidates": [{"corpus_id": 211059152, "title": "Analog Mixed Signal RF Circuits for Complex Signal Processing", "abstract": "This paper describes the research history of the authors' group in the area of analog/mixed signal circuits for complex or quadrature signal processing. Here the complex signal is composed of In phase and Quadrature phase signals (I, Q signals) the I signal represents its real part while the Q signal is its imaginary part. As complex signal processing circuits, the characteristics of the RC polyphaser filter, the complex active RC filter and the active Gm C filter are shown and also our data weighted averaging (DWA) algorithms for complex ADCs/DACs are introduced.", "venue": "2019 IEEE 13th International Conference on ASIC (ASICON)", "year": 2019.0, "author_names": ["Haruo Kobayashi", "Nene Kushita", "MinhTri Tran", "Koji Asami", "Hao San", "Anna Kuwana", "Akemi Hatta"], "n_citations": 9, "n_key_citations": 0, "score": 1}, {"corpus_id": 60684031, "title": "A compensability RF CMOS mixed signal interface for implantable system", "abstract": "The implantable microsystem requires the hybrid circuit technology for a brain machine interface. The paper described a compensability mixed signal implantable receiver including an analog front end and a digital processing circuit. The analog circuit consists of mainly an amplifier, an amplitude shift keying (ASK) demodulator, a clock extraction and a power recovery. 
In this paper, the amplifier and the ASK demodulator are described and provided without the capacitor and the resistor, fully integrated low power circuit. The processing circuit is designed with the digital technology, so that implementing the correct synchronous signal. The carrier frequency of the circuit is applied in the 10 MHz range; the data rates up to 1 M bit/s are supported, suitable for complex implants such as the brain neural stimulating and so on. The compensability low power and the high performance implantable interface using a CMOS technology has been designed, fabricated and verified. All of circuits were implemented in a standard 0.18 mm CMOS process.", "venue": "", "year": 2009.0, "author_names": ["Hongge Li"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 195881575, "title": "A hybrid approach to nonlinear macromodel generation for time varying analog circuits", "abstract": "Modeling frequency dependent nonlinear characteristics of complex analog blocks and subsystems is critical for enabling efficient verification of mixed signal system designs. Recent progress has been made for constructing such macromodels, however, their accuracy and/or efficiency can break down for certain problems, particularly those with high Q filtering. In this paper we explore a novel hybrid approach for generating accurate analog macromodels for time varying weakly nonlinear circuits. The combined benefits of nonlinear Pade approximations and pruning by exploitation of the system's internal structure allows us to construct nonlinear circuit models that are accurate for wide input frequency ranges, and thereby capable of modeling systems with sharp frequency selectivity. Such components are widely encountered in analog signal processing and RF applications. The efficacy of the proposed approach is demonstrated by the modeling of large time varying nonlinear circuits that are commonly found in these application areas.", "venue": "ICCAD 2003. International Conference on Computer Aided Design (IEEE Cat. No.03CH37486)", "year": 2003.0, "author_names": ["Peng Li", "Xin Li", "Yang Xu", "Lawrence T Pileggi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 23794246, "title": "An up to 36Gbps analog baseband equalizer and demodulator for mm wave wireless communication in 28nm CMOS", "abstract": "Future mm Wave wireless links with datarates of 20Gbps and more will result in prohibitive power consumption at the front end of the DSPs. The use of analog or mixed signal baseband processing, however, can significantly relax the receiver power budget. The most critical block for such a baseband is the decision feedback equalizer, that compensates for the line of sight multi path components and demodulates the signal. In this paper we present a complex DFE capable of handling 16QAM data at 9GHz RF bandwidth, aggregating all 4 channels of the 60GHz IEEE802.11ad band and resulting in a maximum datarate of 36Gbps. It is able to compensate for 0.7x cursor amplitude of inter symbol interference spread over 5 complex taps, while the minimum input SNR is 26dB. 
It consumes 138mW from a 0.9V supply, achieving 3.8mW/Gbps power efficiency including clock distribution.", "venue": "2017 IEEE Custom Integrated Circuits Conference (CICC)", "year": 2017.0, "author_names": ["Oscar Elisio Mattia", "Davide Guermandi", "Guy Torfs", "Piet Wambacq"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 3858513, "title": "EE4: Figures of merit on trial", "abstract": "Mixed signal/RF circuits are characterized by a wide variety of performance parameters and diverse functionality. Afigure of merit (FOM) provides a unique, simple and objective metric that allows normalizing and comparing circuits and systems of the same class. On the other hand, does the minimalistic simplicity of any single metric sacrifice more than it offers? Doesn't engineering practice intrinsically require designing and judging a far more complex reality than the monochromatic reductionism that an FOM can provide? For instance, in the case of analog to digital converters, the ability to drive the ADC's input, to clock it, to integrate it or interface it with other processing units, to supply power to it, are just a few real life examples of factors that can make or break a converter architecture and the signal chain embedding it. These factors are not considered in any FOM, with potentially catastrophic consequences. Enough already with the cult of FOMs? Open the doors to a new age of purely human subjective calls? You, the audience, be the judge. This panel will probe the weaknesses and strengths of popular analog FOMs in an entertaining and educational way: To this end, the room will become a tribunal with the moderator as judge. For each FOM on trial, two panelists will officiate, one becoming the defending advocate of the FOM, and the other the prosecutor, while the audience will become the jury, that will decide which of the two contestants will win.", "venue": "ISSCC", "year": 2018.0, "author_names": ["Kostas Doris", "Stefano Stanzione", "Paul F Ferguson"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 10337039, "title": "Integrated Biosensor and Interfacing Circuits", "abstract": "Driven by the demand of the bioelectronics market, many biosensors need to work in parallel or in a controllable way to achieve complicated biodetections, however the limited scale, speed, cost, complex signal processing, and bulky circuit routing problems prohibit the discrete biosensor solutions (Drummond et al. 2003) Nowaday biosensor are usually integrated on the same substrate to form biosensor array to improved the scale and efficiency, and solve the signal routing difficulties. CMOS technology emerges since the mid 1960s, and rapidly captured the IC market. The aggressive scaling of CMOS technology following the famous Moore's Law enables the realization of high speed digital circuits, analog and mixed signal circuits, as well as radiofrequency (RF) communication circuits. A single chip monotonically integrating all components of a complex electronics systems or laboratory systems which contain digital, analog, mixed signal, and RF communication, microelectromechanical systems (MEMS) and other experimental functions, i.e. lab on a chip (LOC) is avidly to be implemented to possess the capabilities of high efficiency characterization, high speed complex signal processing and communication, mass production, large scale, low cost, and low power as well. 
Fortunately, most of the fabrication processes of biosensors are compatible with the standard CMOS technology either directly or via the post CMOS processes, e.g. DNA sensors fabricated on Si nanowire (Li et al. 2004) and gold surface (Cheng et al. 2005) etc, which makes it possible to integrate the biosensor arrays and CMOS IC on a single chip as a CMOS integrated biosensing system (IBS) (Augustyniak et al. 2006; Prakash et al. 2006; Thewes et al. 2005; Han et al. 2007) The CMOS IBS usually composes of four parts in its system circuitry: integrated biosensor array, interfacing circuits, analog to digital (A/D) conversion, and digital signal processor (DSP) as shown in Fig. 1(a) In some system requiring feedback controlling during the characterization, digital to analog (D/A) converters are also included depending on the applications, as shown in Fig. 1(b) In the system architecture of CMOS IBS, the overall performance such as noise, bandwidth, sensitivity etc are mainly governed by the performances of interfacing circuits which controls the electrolyte potential and directly acquires signals from the integrated biosensor array. The three electrode system, as shown in Fig. 2, is the most popular electrode architecture of the integrated biosensor array in nowadays CMOS IBS. The system is composed of reference electrode, working electrode, and counter electrode (it is also called auxiliary electrode sometimes)", "venue": "", "year": 2010.0, "author_names": ["Lei Zhang", "Zhiping Yu", "Xiangqing He"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 43439410, "title": "Proceedings of the IEEE 2006 Custom Integrated Circuits Conference, CICC 2006, DoubleTree Hotel, San Jose, California, USA, September 10 13, 2006", "abstract": "Programmable devices provide a low cost, low risk path to complex analog, digital, and mixed signal implementations. This session highlights circuit and architectural techniques that make these devices possible. This session explores advanced circuits techniques that address timely issues such as working at extremely high frequencies and at very low voltage. New topologies for LANs, oscillators, and dividers will be presented. On chip signal digitization and capture at 70 GHz is presented in the first paper. Next is offered a jitter and link characterization tutorial which addresses standards such as PCI Express, Fibre Channel, and Giga Bit Ethernet. A novel bus probing technique based on electromagnetic couplers is presented in the third paper. The session closes with an innovative circuit design which allows a two order improvement in characterization accuracy of the frequency response of on chip continuous time filters. The first four papers present software assisted GSM radio RF processing, 802.11 WLAN integration, multiprocessing and integrated power management for wireless products. The next four papers present SoCs with 6.375 Gb/s SerDes I/Os, 22.5 Gb/s cross! current and a 10 Gb/s framer. A human body network processor with less than 30uW power, and a 15 dB SNR substrate noise reduction for wireline networks. 
The session starts with an Invited paper on low power design challenges, followed by advances In wireless transmitter building blocks, Including direct modulators, a wide band VCO, and high efficiency PA techniques.", "venue": "CICC", "year": 2006.0, "author_names": ["M De Dominlcis", "C Muccl", "Antonio Deledda", "F Campl", "Andrea Lodi", "Mario Toma", "Rishin Patel", "William Bereza"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 38304543, "title": "RF Test 101: Defining the Problem, Finding Solutions", "abstract": "Rapid growth in the wireless communication market has introduced many challenges in the test community. Today's wireless communication products are more complex and more integrated than their predecessors. To keep pace with the market, the test community must produce innovative test solutions for integrated circuits containing digital, mixed signal, and radiofrequency blocks. For example, a single chip in a handset cellular phone includes an I/Q modulator and demodulator, low noise amplifiers, filters, analog to digital and digital to analog converters, a gain controller, a phase locked loop, IF amplifiers, and a digital signal processing block. When testing these parts, engineers face the complexity of a system on a chip and the challenges of high frequency. The competitive market and low prices paid by the consumer for wireless phones have escalated the need for low cost radio frequency integrated circuits (RFICs)", "venue": "ITC", "year": 2003.0, "author_names": ["Mustapha Slamani"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 7476607, "title": "Post silicon validation of analog/mixed signal/RF circuits and systems: recent advances", "abstract": "Technology scaling along with unprecedented levels of device integration has led to increasing numbers of analog/mixed signal/RF design bugs escaping into silicon. Such bugs are manifested under specific system on chip (SoC) operating conditions and their effects are difficult to predict a priori. This paper describes recent advances in detecting and diagnosing such bugs using \"guided\" stochastic test stimulus generation algorithms. A key challenge is that unlike traditional test generation for manufacturing test that is predicated on known failure mechanisms, the nature of design bugs is generally unknown and must be discovered on the fly. Classes of design errors from undesired capacitive coupling and incorrect biasing conditions to incorrect guard banding of designs are considered. It is shown that high design bug coverage can be obtained over a range of test cases.", "venue": "2016 IEEE 21st International Mixed Signal Testing Workshop (IMSTW)", "year": 2016.0, "author_names": ["Abhijit Chatterjee", "Sabyasachi Deyati", "Barry John Muldrey"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 199590051, "title": "Built In Self Test solutions for high performance and reliable analog, mixed signal, and RF integrated circuits", "abstract": "The integration capabilities offered by current nanoscale CMOS technologies enable the fabrication of complete and very complex mixed signal systems. However, manufacturing processes are prone to imperfections that may degrade sometimes catastrophically the intended functionality of the fabricated circuits. Extensive production tests are then needed in order to separate these defective or unreliable parts from functionally correct devices. 
Unfortunately, the co integration of blocks of very distinct nature (analog, mixed signal, digital, RF, as well as the limited access to internal nodes in an integrated system make the test of these devices a very challenging and costly task. BIST techniques have been proposed as a way to overcome these issues. These techniques aim at including some of the ATE functionality into the Device Under Test, in such a way that each fabricated system becomes self testable. Applying BIST to the digital part of a complex integrated system is a common and standardized practice. Many test alternatives broadly proven in practice are available, all of them based on defect test and fault models. On the other hand, AMS RF BIST techniques are still lagging behind due to the strict requirements imposed by the analog circuitry. Since AMS RF circuits are usually tested by measuring their functional specifications, this means that each measurement has to comply with strict accuracy constraints to match the performance of the circuits under test. A promising solution to these issues is the combination of BIST strategies and machine learning based tests. Machine learning test strategies replace costly analog, mixed signal and RF performance measurements by a set of simpler measurements that can be performed on chip by low cost built in test circuitry. The core idea is to build a mapping model from a set of simple measurements to the set of functional specifications. However, this test strategy is not free of shortcomings either. My research has been focused on overcoming the limitations of current BIST and machine learning based test for complex AMS RF circuits, with the final goal of providing innovative state of the art test solutions for these complex systems", "venue": "", "year": 2019.0, "author_names": ["Manuel J Barragan"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Injury of inferior alveolar nerve", "session_id": 2189166315565860, "user_id": 804660659551221, "candidates": [{"corpus_id": 195796070, "title": "Comparative effects of photobiomodulation therapy at wavelengths of 660 and 808 nm on regeneration of inferior alveolar nerve in rats following crush injury", "abstract": "The aim of the present study was to investigate the therapeutic effects of 660 nm and 880 nm photobiomodulation therapy (PBMT) following inferior alveolar nerve (IAN) crush injury. Following the nerve crush injuries of IAN, 36 Wistar rats were randomly divided into three groups as follows: (1) control, (2) 660 nm PBMT, and (3) 808 nm PBMT (GaAlAs laser, 100 J/cm 2 70 mW, 0.028 cm 2 beam) PBMT was started immediately after surgery and performed once every 3 days during the postoperative period. At the end of the 30 day treatment period, histopathological and histomorphometric evaluations of tissue sections were made under a light and electron microscope. The ratio of the inner axonal diameter to the total outer axonal diameter g ratio) and the number of axons per square micrometer were evaluated. In the 808 nm PBMT group, the number of nerve fibers with suboptimal g ratio ranges of 0 0.49 p 0.001) is significantly lower than expected, which indicates better rate of myelinization in the 808 nm PBMT group. 
The number of axons per square micrometer was significantly higher in the 808 nm PBMT group when compared with the control p 0.001) and 660 nm PBMT group p 0.010) The data and the histopathological investigations suggest that the PBMT with the 808 nm wavelength along with its settings was able to enhance IAN regeneration after nerve crush injury.", "venue": "Lasers in Medical Science", "year": 2019.0, "author_names": ["Nurettin Diker", "Duygu Aytac", "Fatma Helvacioglu", "Yener Oguz"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 35907600, "title": "Predictive Value of Panoramic Radiography for Injury of Inferior Alveolar Nerve After Mandibular Third Molar Surgery.", "abstract": "PURPOSE The purpose of the present systematic review was to assess the added value of panoramic radiography in predicting postoperative injury of the inferior alveolar nerve (IAN) in the decision making before mandibular third molar (MM3) surgery. MATERIALS AND METHODS MEDLINE and EMBASE were searched electronically to identify the diagnostic accuracy of studies that had assessed the predictive value of 7 panoramic radiographic signs, including root related signs (darkening of the root, deflection of the root, narrowing of the root, and dark and bifid apex of the root) and canal related signs (interruption of the white line of the canal, diversion of the canal, and narrowing of the canal) for IAN injury after MM3 surgery. RESULTS A total of 8 studies qualified for the meta analysis. The pooled sensitivity and specificity of the 7 signs ranged from 0.06 to 0.49 and 0.81 to 0.97, respectively. The area under the summary area under the receiver operating characteristic curve ranged from 0.42 to 0.89. The pooled positive predictive value (PPV) and negative predictive value (NPV) ranged from 7.5 to 26.6% and 95.9 to 97.7% respectively. The added value of a positive sign for ruling in an IAN injury (PPV minus the prior probability) ranged from 3.4 to 22.2% The added value of a negative sign for ruling out an IAN injury (NPV minus [1 minus the prior probability] ranged from 0.1 to 2.2% CONCLUSIONS For all 7 signs, the added value of panoramic radiography is too low to consider it appropriate for ruling out postoperative IAN in the decision making before MM3 surgery. The added value of panoramic radiography for determining the presence of diversion of the canal, interruption of the white line of the canal, and darkening of the root can be considered sufficient for ruling in the risk of postoperative IAN injury in the decision making before MM3 surgery.", "venue": "Journal of oral and maxillofacial surgery official journal of the American Association of Oral and Maxillofacial Surgeons", "year": 2017.0, "author_names": ["Naichuan Su", "Arjen J van Wijk", "Erwin Berkhout", "Gerard C H Sanderink", "Jan de Lange", "Hang Wang", "Geert J M G van der Heijden"], "n_citations": 23, "n_key_citations": 1, "score": 0}, {"corpus_id": 202405841, "title": "Does the Lag Time Between Injury and Treatment Play a Role in Recovery of Inferior Alveolar Nerve Neurosensory Disturbances Following Mandibular Body Fracture?", "abstract": "BACKGROUND The lag time between injury and treatment (LTIT) plays an important role in reduction of complications in mandibular fractures. The aim of this study was to measure the effect of LTIT on recovery of the inferior alveolar nerve (IAN) neurosensory disturbances (NSDs) following surgical management of mandibular body fractures. METHODS This was a prospective cohort study. 
Patients who had a unilateral mandibular body fracture with paresthesia were studied. Paresthesia was evaluated by 2 point discrimination (TPD) test, brush stroke test and self reporting before and 6 months after the surgical procedure. RESULTS Forty five patients were studied. There was a correlation between LTIT and TPD test result and self reported paresthesia at 6 months, postoperatively (P 0.001) Fifteen patients (33.3% had complete improvement in NSD 6 months after treatments (group 1) and 30 patients (group 2) had hyposthesia (N 17, 37.77% and paresthesia (N 13, 28.88% There was a significant difference in LTIT between groups 1 and 2 at 6 months postoperatively (P 0.001) Cox regression model demonstrated the hazard ratio increased significantly for self reported NSD when treatment was done 10 days after trauma (P 0.001, confidence level 95% CONCLUSION It seems that conduction of open reduction with internal rigid fixation shortly after mandibular fracture may shorten the recovery time of NSDs of the IAN following mandibular body fractures.", "venue": "The Journal of craniofacial surgery", "year": 2019.0, "author_names": ["Reza Tabrizi", "Freydoun Pourdanesh", "Paniz Lesan Khoshnik", "Samir Aboul-Hosn Centenero"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 201805670, "title": "Determining the risk relationship associated with inferior alveolar nerve injury following removal of mandibular third molar teeth: a systematic review.", "abstract": "PURPOSE This study analyzes the risk factors associated with the incidences of inferior alveolar nerve (IAN) injury after surgical removal of impacted mandibular third molar (IMTM) and to evaluate the contribution of these risk factors to postoperative neurosensory deficits. MATERIALS AND METHODS An exhaustive literature search has been carried out in the COCHRANE library and PubMed electronic databases from January 1990 to March 2019 supplemented by manual searching to identify the related studies. 23 studies out of 693 articles from the initial search were finally included, which summed up a total of 26427 patients (44171 teeth) RESULTS Our results have been compared with other current available papers in the literature reviewed that obtained similar outcomes. Among 44171 IMTM extractions performed by various grades of operators, 1.20% developed transient IAN deficit and 0.28% developed permanent IAN deficit respectively. Depth of impaction (P<0.001) contact between mandibular canal (MC) and IMTM (P<0.001) surgical technique (P<0.001) intra operative nerve exposure (P<0.001) and surgeon's experience (P<0.001) were statistically significant as contributing risk factors of IAN deficits. 
CONCLUSION Radiographic findings, such as depth of impaction, proximity of the tooth to the mandibular canal, surgical technique, intra operative nerve exposure, and surgeon's experience were high risk factors of IAN deficit after surgical removal of IMTMs.", "venue": "Journal of stomatology, oral and maxillofacial surgery", "year": 2019.0, "author_names": ["Feiwu Kang", "Manoj Kumar Sah", "Guo Fei"], "n_citations": 9, "n_key_citations": 0, "score": 1}, {"corpus_id": 189927663, "title": "Risk stratification against inferior alveolar nerve injury after lower third molar extraction by scoring on cone beam computed tomography image", "abstract": "The study aimed to stratify the risk of inferior alveolar nerve injury (IANI) after lower third molar (LM3) surgery with a scoring system using identified predictive factors based on cone beam computed tomography (CBCT) images. In a case control study, the primary outcome was IANI occurrence. The control group included randomly selected patients without IANI. Predictor variables included patient demographics, surgical situations, Pell Gregory classification, and inferior alveolar canal (IAC) associated factors on CBCT. Study variables were analyzed using logistic regression models. Risk stratification was assessed by a scoring system that was constructed using independent predictors. The 858 patients who underwent LM3 surgery (1177 teeth) after CBCT scan were divided into case (25 patients, 2.9% 27 teeth) and control (235 patients, 300 teeth) groups. In the multivariate model, lingual/inter radicular position of IAC [odds ratio (OR) 7.21; P 0.001; assigned score, 2] multiple roots closed to the IAC with cortical perforation (OR 3.72; P 0.015; 1) and age 30 years (OR 4.99; P 0.008; 2) were associated with an increased IANI risk. The IANI risk scoring system could be stratified into low and high risk groups at a cutoff score of 3 (sensitivity, 68.0% specificity, 90.6% positive predictive value, 17.8% positive likelihood ratio, 7.23) In conclusion, the high risk group of IANI after LM3 surgery corresponded to individuals with multiple factors: lingual/inter radicular IAC position to LM3, multiple roots with perforated IAC, and increased age 30 years) Raising awareness of the higher probability for IANI is needed for patients with multiple aforementioned factors.", "venue": "Odontology", "year": 2019.0, "author_names": ["Seiko Kubota", "Tomoaki Imai", "Mitsuhiro Nakazawa", "Narikazu Uzawa"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 199000323, "title": "Evaluating the risk of post extraction inferior alveolar nerve injury through the relative position of the lower third molar root and inferior alveolar canal.", "abstract": "The aim of this study was to introduce a method to evaluate the risk of inferior alveolar nerve (IAN) injury following the extraction of impacted lower third molars. Two hundred impacted lower third molars adjacent to the IAN were evaluated. These were divided into four classification groups according to preoperative cone beam computed tomography (CBCT) findings: AR, apical region; LT, lateral region of the tapered root; LE, lateral region of the enlarged root; AE, adjacent to the enlarged root. All teeth were dislocated along the long axis or arc of the root by tooth sectioning technique and extracted by a single surgeon. The primary outcome variable was postoperative neurosensory impairment of the IAN. The kh2 test was used to evaluate differences in postoperative IAN injury between the classifications. 
Logistic regression analysis was used to evaluate the risk factors for postoperative IAN injury. The overall incidence of postoperative IAN injury was 7% Specifically, most injuries involved classification AE (AE 36% LE 8.6% LT 3.6% AR 0% and the difference was statistically significant (P< 0.05) Logistic regression showed that classification AE was the only risk factor for postoperative IAN injury (P< 0.001) According to preoperative CBCT, the risk of postoperative IAN injury is higher when the IAN is adjacent to the enlarged part of the root.", "venue": "International journal of oral and maxillofacial surgery", "year": 2019.0, "author_names": ["Wei Qi", "J Lei", "Y-N Liu", "J-N Li", "J Pan", "Gy Yu"], "n_citations": 10, "n_key_citations": 0, "score": 0}, {"corpus_id": 128189766, "title": "Is juxta apical radiolucency a reliable risk factor for injury to the inferior alveolar nerve during removal of lower third molars?", "abstract": "The aim of this study was to find out if juxta apical radiolucency (JAR) is a reliable risk factor for injury to the inferior alveolar nerve (IAN) during removal of lower third molars. We designed a cohort study of patients whose dental panoramic tomograms (DPT) had shown JAR before complete removal of lower wisdom teeth. The outcome variable was postoperative permanent neurosensory disturbance of the IAN. A total of 39 patients (50 lower third molars) were identified and screened for permanent neurosensory disturbance. None reported any permanently altered sensation 18 months after the operation. Based on our group, the presence of JAR does not seem to be a reliable predictor of the risk of permanent injury to the IAN during removal of lower third molars.", "venue": "The British journal of oral maxillofacial surgery", "year": 2019.0, "author_names": ["Ciro Gilvetti", "Sonam Haria", "Aakshay Gulati"], "n_citations": 4, "n_key_citations": 1, "score": 0}, {"corpus_id": 46868472, "title": "Risk of inferior alveolar nerve injury with coronectomy vs surgical extraction of mandibular third molars A comparison of two techniques and review of the literature", "abstract": "The removal of mandibular third molar teeth is one of the most common oral surgical procedures. In a significant number of patients, it carries a degree of associated morbidity, including damage to the inferior alveolar nerve (IAN) For this reason, practitioners desire the most up to date guidance on the most appropriate technique, informed by the best available evidence that will produce the lowest incidence of iatrogenic complications. The aim of this study was to perform a systematic review comparing the effect of coronectomy vs complete surgical extraction of mandibular third molar teeth on the risk of IAN injury and other complications in adults. Studies were identified through Embase (1980 2016) and Ovid MEDLINE (1946 2016) database searches. Search terms included coronectomy, partial root removal, deliberate vital root retention, odontectomy, surgical removal, surgical extraction, complete tooth extraction and extract. Limits of the study included humans, English language and randomised controlled trials (RCTs) Only RCTs comparing IAN damage associated with surgical extraction of mandibular third molars vs coronectomy were included. From our database searches, we identified two unique RCTs matching the inclusion criteria. Both evaluated patients who had specific radiographic signs of intimate relationships with the IAN. 
Upon detailed analysis, the studies were noted to exhibit a high risk of bias in many categories, thereby rendering their results inconclusive. Although evidence from two RCTs suggests that coronectomy can reduce the risk of IAN injury compared to surgical removal of high risk mandibular third molars, the quality of evidence is insufficient to provide definitive conclusions regarding the preferred technique.", "venue": "Journal of oral rehabilitation", "year": 2018.0, "author_names": ["A S Ali", "Jamie Benton", "Julian M Yates"], "n_citations": 18, "n_key_citations": 0, "score": 0}, {"corpus_id": 3994779, "title": "Does additional cone beam computed tomography decrease the risk of inferior alveolar nerve injury in high risk cases undergoing third molar surgery?Does CBCT decrease the risk of IAN injury?", "abstract": "The objectives of this study were to evaluate the efficacy of additional cone beam computed tomography (CBCT) imaging on decreasing the risk of inferior alveolar nerve (IAN) injury during third molar removal in patients at high risk and to assess the surgical outcomes. The study sample included patients considered at high risk for IAN injury based on panoramic radiography (PAN) evaluation. The primary predictor was the type of imaging method (PAN only or with additional CBCT) The other variables were demographic and anatomical/radiographic factors. The primary outcome variable was IAN injury. The secondary outcome variables were the preoperative surgical plan and surgical results including IAN exposure and duration of surgery. The sample comprised 122 patients (139 teeth) aged 18 48 years. Postoperative temporary IAN injury was present in three (4.2% cases in the CBCT group and 11 (16.4% in the PAN group at 7 days after surgery. However, none of the patients had a permanent IAN injury at the 6 month follow up. Additional CBCT imaging was not superior to PAN in reducing IAN injury after third molar surgery during long term follow up. Nonetheless, CBCT may decrease the prevalence of temporary IAN injury and improve the surgical outcomes in high risk patients.", "venue": "International journal of oral and maxillofacial surgery", "year": 2017.0, "author_names": ["Yavuz Tolga Korkmaz", "Saadettin Kayipmaz", "Figen Cizmeci Senel", "Kerem Turgut Atasoy", "Z Gumrukcu"], "n_citations": 27, "n_key_citations": 1, "score": 0}, {"corpus_id": 7217253, "title": "Inferior alveolar nerve injury: Correlation between indicators of risk on panoramic radiographs and the incidence of tooth and mandibular canal contact on cone beam computed tomography scans in a Western Australian population", "abstract": "AIM The aim of the present study was to assess risks prior to third molar removal. A 2 D panoramic radiograph or a 3 D cone beam computed tomography (CBCT) scan can be used to visualize the proximity of the third molar to the mandibular canal. We aimed to correlate panoramic indicators of risk with the incidence of contact between these two structures on CBCT scans. METHODS Patients were selected from a Western Australian population if they had a panoramic radiograph that illustrated signs of risk of inferior alveolar nerve injury and had a CBCT scan on file. Statistically significant relationships between the relative position and distance between the mandibular canal and third molar were investigated using kh2 test and Fisher's exact test in Stata version 13. 
RESULTS Within the Western Australian sample (N 100) of six possible panoramic indicators of risk, two were significantly associated with contact between the tooth and mandibular canal on CBCT: (a) interruption of the radiographic white line of the canal; and (b) darkening of the root(s) CONCLUSIONS Two panoramic radiograph risk signs are significantly more likely to indicate contact on the CBCT scans: interruption of the white line and darkening of the root(s) Further research is required to develop CBCT prescription guidelines for surgical planning.", "venue": "Journal of investigative and clinical dentistry", "year": 2018.0, "author_names": ["Kate L Winstanley", "Lisa M Otway", "Lionel Thompson", "Zoe H Brook", "Nigel M King", "Bernard Koong", "Michael O'halloran"], "n_citations": 8, "n_key_citations": 0, "score": 0}]} -{"query": "Experiments on Graph Clustering Algorithms", "session_id": 3879614538588419, "user_id": 672528985745092, "candidates": [{"corpus_id": 8609428, "title": "Experiments on Graph Clustering Algorithms", "abstract": "A promising approach to graph clustering is based on the intuitive notion of intra cluster density vs. inter cluster sparsity. While both formalizations and algorithms focusing on particular aspects of this rather vague concept have been proposed no conclusive argument on their appropriateness has been given. As a first step towards understanding the consequences of particular con ceptions, we conducted an experimental evaluation of graph clustering approaches. By combining proven techniques from graph partitioning and geometric clustering, we also introduce a new approach that compares favorably.", "venue": "ESA", "year": 2003.0, "author_names": ["Ulrik Brandes", "Marco Gaertler", "Dorothea Wagner"], "n_citations": 320, "n_key_citations": 23, "score": 1}, {"corpus_id": 11240963, "title": "Multilayer Spectral Graph Clustering via Convex Layer Aggregation: Theory and Algorithms", "abstract": "Multilayer graphs are commonly used for representing different relations between entities and handling heterogeneous data processing tasks. Nonstandard multilayer graph clustering methods are needed for assigning clusters to a common multilayer node set and for combining information from each layer. This paper presents a multilayer spectral graph clustering (SGC) framework that performs convex layer aggregation. Under a multilayer signal plus noise model, we provide a phase transition analysis of clustering reliability. Moreover, we use the phase transition criterion to propose a multilayer iterative model order selection algorithm (MIMOSA) for multilayer SGC, which features automated cluster assignment and layer weight adaptation, and provides statistical clustering reliability guarantees. Numerical simulations on synthetic multilayer graphs verify the phase transition analysis, and experiments on real world multilayer graphs show that MIMOSA is competitive or better than other clustering methods.", "venue": "IEEE Transactions on Signal and Information Processing over Networks", "year": 2017.0, "author_names": ["Pin-Yu Chen", "Alfred O Hero"], "n_citations": 35, "n_key_citations": 2, "score": 0}, {"corpus_id": 2977323, "title": "Experiments on Density Constrained Graph Clustering", "abstract": "Clustering a graph means identifying internally dense subgraphs that are only sparsely interconnected. Formalizations of this notion lead to measures that quantify the quality of a clustering and to algorithms that actually find clusterings. 
Since, most generally, corresponding optimization problems are hard, heuristic clustering algorithms are used in practice, or other approaches that are not based on an objective function. In this work, we conduct a comprehensive experimental evaluation of the qualitative behavior of greedy bottom up heuristics driven by cut based objectives and constrained by intracluster density, using both real world data and artificial instances. Our study documents that a greedy strategy based on local movement is superior to one based on merging. We further reveal that the former approach generally outperforms alternative setups and reference algorithms from the literature in terms of its own objective, while a modularity based algorithm competes surprisingly well. Finally, we exhibit which combinations of cut based inter and intracluster measures are suitable for identifying a hidden reference clustering in synthetic random graphs and discuss the skewness of the resulting cluster size distributions. Our results serve as a guideline to the usage of bicriterial, cut based measures for graph clusterings.", "venue": "ACM J. Exp. Algorithmics", "year": 2014.0, "author_names": ["Robert Gorke", "Andreas Schumm", "Dorothea Wagner"], "n_citations": 13, "n_key_citations": 2, "score": 0}, {"corpus_id": 21894316, "title": "GEMSEC: Graph Embedding with Self Clustering", "abstract": "Modern graph embedding procedures can efficiently process graphs with millions of nodes. In this paper, we propose GEMSEC a graph embedding algorithm which learns a clustering of the nodes simultaneously with computing their embedding. GEMSEC is a general extension of earlier work in the domain of sequence based graph embedding. GEMSEC places nodes in an abstract feature space where the vertex features minimize the negative log likelihood of preserving sampled vertex neighborhoods, and it incorporates known social network properties through a machine learning regularization. We present two new social network datasets and show that by simultaneously considering the embedding and clustering problems with respect to social properties, GEMSEC extracts high quality clusters competitive with or superior to other community detection algorithms. In experiments, the method is found to be computationally efficient and robust to the choice of hyperparameters.", "venue": "2019 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)", "year": 2019.0, "author_names": ["Benedek Rozemberczki", "Ryan Davies", "Rik Sarkar", "Charles A Sutton"], "n_citations": 106, "n_key_citations": 10, "score": 0}, {"corpus_id": 51953378, "title": "ScaleSCAN: Scalable Density Based Graph Clustering", "abstract": "How can we efficiently find clusters (a.k.a. communities) included in a graph with millions or even billions of edges? Density based graph clustering SCAN is one of the fundamental graph clustering algorithms that can find densely connected nodes as clusters. Although SCAN is used in many applications due to its effectiveness, it is computationally expensive to apply SCAN to large scale graphs since SCAN needs to compute all nodes and edges. In this paper, we propose a novel density based graph clustering algorithm named ScaleSCAN for tackling this problem on a multicore CPU. Towards the problem, ScaleSCAN integrates efficient node pruning methods and parallel computation schemes on the multicore CPU for avoiding the exhaustive nodes and edges computations. 
As a result, ScaleSCAN detects exactly same clusters as those of SCAN with much shorter computation time. Extensive experiments on both real world and synthetic graphs demonstrate that the performance superiority of ScaleSCAN over the state of the art methods.", "venue": "DEXA", "year": 2018.0, "author_names": ["Hiroaki Shiokawa", "Tomokatsu Takahashi", "Hiroyuki Kitagawa"], "n_citations": 16, "n_key_citations": 0, "score": 0}, {"corpus_id": 199488433, "title": "Multi view Clustering via Simultaneously Learning Graph Regularized Low Rank Tensor Representation and Affinity Matrix", "abstract": "Low rank tensor representation based multi view clustering has become an efficient method for data clustering due to the robustness to noise and the preservation of the high order correlation. However, existing algorithms may suffer from two common problems: (1) the local view specific geometrical structures and the various importance of features in different views are neglected; (2) the low rank representation tensor and the affinity matrix are learned separately. To address these issues, we propose a novel framework to learn the Graph regularized Low rank Tensor representation and the Affinity matrix (GLTA) in a unified manner. Besides, the manifold regularization is exploited to preserve the view specific geometrical structures, and the various importance of different features is automatically calculated when constructing the final affinity matrix. An efficient algorithm is designed to solve GLTA using the augmented Lagrangian multiplier. Extensive experiments on six real datasets demonstrate the superiority of GLTA over the state of the arts.", "venue": "2019 IEEE International Conference on Multimedia and Expo (ICME)", "year": 2019.0, "author_names": ["Yongyong Chen", "Xiaolin Xiao", "Yicong Zhou"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 201305300, "title": "Multi level Graph Drawing using Infomap Clustering", "abstract": "Infomap clustering finds the community structures that minimize the expected description length of a random walk trajectory; algorithms for infomap clustering run fast in practice for large graphs. In this paper we leverage the effectiveness of Infomap clustering combined with the multi level graph drawing paradigm. Experiments show that our new Infomap based multi level algorithm produces good visualization of large and complex networks, with significant improvement in quality metrics.", "venue": "Graph Drawing", "year": 2019.0, "author_names": ["Seok-Hee Hong", "Peter Eades", "Marnijati Torkel", "Ziyang Wang", "David Chae", "Sungpack Hong", "Daniel Langerenken", "Hassan Chafi"], "n_citations": 4, "n_key_citations": 1, "score": 0}, {"corpus_id": 10009915, "title": "(Semi )External Algorithms for Graph Partitioning and Clustering", "abstract": "In this paper, we develop semi external and external memory algorithms for graph partitioning and clustering problems. Graph partitioning and clustering are key tools for processing and analyzing large complex networks. We address both problems in the (semi )external model by adapting the size constrained label propagation technique. Our (semi )external size constrained label propagation algorithm can be used to compute graph clusterings and is a prerequisite for the (semi )external graph partitioning algorithm. The algorithm is then used for both the coarsening and the refinement phase of a multilevel algorithm to compute graph partitions. 
Our algorithm is able to partition and cluster huge complex networks with billions of edges on cheap commodity machines. Experiments demonstrate that the semi external graph partitioning algorithm is scalable and can compute high quality partitions in time that is comparable to the running time of an efficient internal memory implementation. A parallelization of the algorithm in the semi external model further reduces running time.", "venue": "ALENEX", "year": 2015.0, "author_names": ["Yaroslav Akhremtsev", "Peter Sanders", "Christian Schulz"], "n_citations": 15, "n_key_citations": 1, "score": 0}, {"corpus_id": 17767008, "title": "SymNMF: nonnegative low rank approximation of a similarity matrix for graph clustering", "abstract": "Nonnegative matrix factorization (NMF) provides a lower rank approximation of a matrix by a product of two nonnegative factors. NMF has been shown to produce clustering results that are often superior to those by other methods such as K means. In this paper, we provide further interpretation of NMF as a clustering method and study an extended formulation for graph clustering called Symmetric NMF (SymNMF) In contrast to NMF that takes a data matrix as an input, SymNMF takes a nonnegative similarity matrix as an input, and a symmetric nonnegative lower rank approximation is computed. We show that SymNMF is related to spectral clustering, justify SymNMF as a general graph clustering method, and discuss the strengths and shortcomings of SymNMF and spectral clustering. We propose two optimization algorithms for SymNMF and discuss their convergence properties and computational efficiencies. Our experiments on document clustering, image clustering, and image segmentation support SymNMF as a graph clustering method that captures latent linear and nonlinear relationships in the data.", "venue": "J. Glob. Optim.", "year": 2015.0, "author_names": ["Da Kuang", "Sangwoon Yun", "Haesun Park"], "n_citations": 136, "n_key_citations": 27, "score": 0}, {"corpus_id": 5472503, "title": "A Streaming Algorithm for Graph Clustering", "abstract": "We introduce a novel algorithm to perform graph clustering in the edge streaming setting. In this model, the graph is presented as a sequence of edges that can be processed strictly once. Our streaming algorithm has an extremely low memory footprint as it stores only three integers per node and does not keep any edge in memory. We provide a theoretical justification of the design of the algorithm based on the modularity function, which is a usual metric to evaluate the quality of a graph partition. 
We perform experiments on massive real life graphs ranging from one million to more than one billion edges and we show that this new algorithm runs more than ten times faster than existing algorithms and leads to similar or better detection scores on the largest graphs.", "venue": "NIPS 2017", "year": 2017.0, "author_names": ["Alexandre Hollocou", "Julien Maudet", "Thomas Bonald", "Marc Lelarge"], "n_citations": 5, "n_key_citations": 0, "score": 0}]} -{"query": "Yorkshire type 1 diabetes", "session_id": 2216657221404684, "user_id": 6204609457116093, "candidates": [{"corpus_id": 24850294, "title": "Mortality and acute complications in children and young adults diagnosed with Type 1 diabetes in Yorkshire, UK: a cohort study", "abstract": "To examine all cause and cause specific mortality in a population based cohort of people with early and late onset of Type 1 diabetes.", "venue": "Diabetic medicine a journal of the British Diabetic Association", "year": 2018.0, "author_names": ["T C Evans-Cheung", "H Jonathan Bodansky", "R C Parslow", "Richard G Feltbower"], "n_citations": 11, "n_key_citations": 0, "score": 0}, {"corpus_id": 79753282, "title": "Mortality due to diabetic ketoacidosis: population based findings from the Yorkshire register of type 1 diabetes in children and young people", "abstract": "", "venue": "", "year": 2017.0, "author_names": ["T C Evans-Cheung", "H Jonathan Bodansky", "Richard G Feltbower", "Roger Parslow"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 74814186, "title": "In Patient Care for Children with Type 1 Diabetes A Regional Audit in the Yorkshire and Humber Region in the North of England", "abstract": "", "venue": "", "year": 2015.0, "author_names": ["Suma Uday", "Nadia Laila Amin", "Fiona M Campbell", "James Yong"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 2264930, "title": "Incidence rate trends in childhood Type 1 diabetes in Yorkshire, UK 1978 2007: effects of deprivation and age at diagnosis in the south Asian and non south Asian populations", "abstract": "Diabet. Med. 28, 1508 1513 (2011)", "venue": "Diabetic medicine a journal of the British Diabetic Association", "year": 2011.0, "author_names": ["Katie L Harron", "Patricia A McKinney", "Richard G Feltbower", "H Jonathan Bodansky", "Paul D Norman", "Fiona M Campbell", "Roger Parslow"], "n_citations": 19, "n_key_citations": 0, "score": 1}, {"corpus_id": 57983757, "title": "In patient care for children with type 1 diabetes across hospitals in the Yorkshire and Humber region in the north of England", "abstract": "This audit demonstrates on going difficulties achieving current standards of in patient care for children and young people with diabetes. There is a lack of 24 hour on call service in majority of the paediatric diabetes units. There needs to be standardisation across the region and feasibility of implementation needs to be explored. Sixty three per cent of the units, consisting of 2 tertiary and 8 secondary care units, responded. Paediatric wards and EDs in all units had protocols for management of new diagnosis of diabetes, diabetic ketoacidosis (DKA) hypoglycaemia and surgery. The following table illustrates availability of protocols. An important part of diabetes management is maintaining high standards of in patient care. The first inpatient standards for management of children with diabetes were set and audited in the South of England in 20111. 
Deficiencies highlighted were: lack of dietetic advice on wards, lack of education sessions for ED and ward staff and lack of contact with diabetes team especially for overnight admissions Nine out of 10 units had paediatric nurses in areas where children were cared for, but only the tertiary centres had a trained paediatric nurse in the emergency department (ED) on every shift. A 24 hour on call service was only provided by 40% of the units. The diabetes team was usually contacted within 2 hours of an admission in tertiary centres and within 24 hours in secondary care units. Paediatric diabetes specialist nurses (DSN) had an active role in in patient management in all units.", "venue": "", "year": 2014.0, "author_names": ["Nadia Laila Amin", "Suma Uday", "Fiona M Campbell", "James Yong"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 23549014, "title": "Early deaths from ischaemic heart disease in childhood onset type 1 diabetes", "abstract": "Aims The risk of ischaemic heart disease (IHD) death in early type 1 diabetes onset was assessed using death certification data. Methods The Yorkshire Register of type 1 Diabetes in Children and Young People was linked to clinically validated death certification data for those diagnosed under 15 years. Standardised mortality ratios (SMRs) were calculated using the England and Wales population and IHD death rates between 1978 and 2014 by 5 year age group and sex. Results The cohort included 4382 individuals (83aEUR%0097 person years) Of 156 deaths, nine were classed as IHD deaths before clinical validation. After clinical validation, 14 IHD deaths were classified, with an SMR of 13.8 (95% CI 8.2 to 23.3) and median age at death of 35.1 years (range 21.9aEUR\"47.9 years) Conclusions There is an early emergence of death from IHD in early onset type 1 diabetes. Underascertainment of IHD deaths was present without clinical validation of death certification.", "venue": "Archives of Disease in Childhood", "year": 2018.0, "author_names": ["T C Evans-Cheung", "H Jonathan Bodansky", "Roger Parslow", "Richard G Feltbower"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 29662925, "title": "Learning Through Chain Event Graphs: The Role of Maternal Factors in Childhood Type 1 Diabetes", "abstract": "Chain event graphs (CEGs) are a graphical representation of a statistical model derived from event trees. They have previously been applied to cohort studies but not to case control studies. In this paper, we apply the CEG framework to a Yorkshire, United Kingdom, case control study of childhood type 1 diabetes (1993 1994) in order to examine 4 exposure variables associated with the mother, 3 of which are fully observed (her school leaving age, amniocenteses during pregnancy, and delivery type) and 1 with missing values (her rhesus factor) while incorporating previous type 1 diabetes knowledge. We conclude that the unknown rhesus factor values were likely to be missing not at random and were mainly rhesus positive. The mother's school leaving age and rhesus factor were not associated with the diabetes status of the child, whereas having at least 1 amniocentesis procedure and, to a lesser extent, birth by cesarean delivery were associated; the combination of both procedures further increased the probability of diabetes. This application of CEGs to case control data allows for the inclusion of missing data and prior knowledge, while investigating associations in the data. 
Communication of the analysis with the clinical expert is more straightforward than with traditional modeling, and this approach can be applied retrospectively or when assumptions for traditional analyses are not held.", "venue": "American journal of epidemiology", "year": 2017.0, "author_names": ["Claire Keeble", "Peter Adam Thwaites", "Paul D Baxter", "Stuart Barber", "Roger Parslow", "Graham R Law"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 73327588, "title": "020 Incidence rate trends in childhood type 1 diabetes in Yorkshire, 1978 2007: effects of ethnicity and age at diagnosis", "abstract": "Objective To examine incidence rates and trends of childhood Type 1 diabetes in Yorkshire from 1978 to 2007. Methods Data from the population based Yorkshire Register of Diabetes in Children and Young People was used to analyse the incidence of Type 1 diabetes in children aged <15 years diagnosed in the former Yorkshire Regional Health Authority. Incidence rates (per 100 000 per year) were estimated using mid year population estimates stratified by sex, age and ethnicity: south Asian (Indian, Pakistani, Bangladeshi) or non south Asian (all other ethnicities) Ethnicity was assigned using two name recognition programs (Nam Pehchan and SANGRA) and a local expert. Age sex standardised rates were calculated between 1978 and 2007 and by ethnic group between 1990 and 2007. Poisson regression was used to assess incidence trends and estimate predicted rates up to 2020. Goodness of fit, AIC and likelihood ratio tests were used to assess model fit. Results 3896 children were diagnosed in Yorkshire between 1978 and 2007. Overall incidence was 18.1 (95% CI 17.5 to to 18.6) increasing from 13.3 (1978 to 1987) to 16.9 (1988 to 1997) to 24.1 (1998 to 2007) Incidence increased significantly over time: average annual percentage change (AAPC) was 2.8% (1.8 to 3.8) The inclusion of an age sex interaction term provided evidence for differences in trends between sexes depending on age, with females having higher incidence and AAPC than males for those aged 5 9. Overall incidence for non south Asians (21.4; 20.6 to 22.3) was significantly higher than that of south Asians (14.6; 12.3 to 17.0) over the entire study period. A significant increasing trend in incidence was observed for non south Asians of 3.3% (1.3 to 5.2) compared to a non significant trend seen in south Asians (1.9% 0.4 to 4.3) Overall forecasted incidence for 2020 is 38.3 per 100 000. Conclusions Type 1 diabetes incidence rates have risen almost uniformly for non south Asians of all ages but not for south Asians, contrary to findings in the Bradford area of Yorkshire between 1978 and 1998. Overall incidence increased most quickly in the 5 9 age group. Incidence doubled from 12.5 to 25.2 between 1978 and 2007. 
If current trends continue, rates will rise by 52% to 38.3 between 2007 and 2020.", "venue": "Journal of Epidemiology Community Health", "year": 2010.0, "author_names": ["Katie L Harron", "Patricia A McKinney", "Richard G Feltbower", "C R Stephenson", "H Jonathan Bodansky", "PD Norman", "Roger Parslow"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 63353130, "title": "Ethnic differences in incidence rates of childhood type 1 diabetes in Yorkshire 1978 2007", "abstract": "", "venue": "", "year": 2010.0, "author_names": ["Katie L Harron", "Patricia A McKinney", "Richard G Feltbower", "C R Stephenson", "Paul D Norman", "H Jonathan Bodansky", "Gurjit Chhokar", "Roger Parslow"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 38386687, "title": "The relationship between inflammatory bowel disease and type 1 diabetes mellitus: a study of relative prevalence in comparison with population controls.", "abstract": "Genome wide association studies have identified that an overlap exists in the genetic architecture underpinning inflammatory bowel disease (IBD) and other immunemediated inflammatory diseases [1] Epidemiological studies have established that IBD patients have a higher prevalence of asthma, psoriasis, rheumatoid arthritis and multiple sclerosis, than persons without IBD [2, 3] However, data remains unclear regarding the association between IBD and type 1 diabetes mellitus (T1DM) We have examined the prevalence of IBD in T1DM and T1DM in IBD and assessed the effect of concurrent IBD in T1DM patients on glycaemic control and quality of life (QoL) Type 1 diabetes mellitus (n= 662) and IBD (n= 622) patients were recruited during attendance at outpatient clinics. Nondiabetic controls (n= 602) were recruited from general practices within the South Yorkshire region. Demographic information was recorded from patient case notes, alongside stated diagnoses of T1DM and/or histology confirmed IBD. Diabetic controls were selected from the diabetes cohort matched for age and sex in a 2:1 ratio for comparison of QoL and glycaemic control. Glycaemic control was assessed using HbA1c values and QoL using the Short Form 36 Version 2 (SF 36) questionnaire. We found that the prevalence of IBD was 12/662 (1.5% in those with T1DM and 2/602 (0.3% in controls (OR 5.5, 1.224.9; p=0.03) The prevalence of T1DM in IBD patients was 4/662 (0.6% which is comparable with the UK adult population prevalence of T1DM (0.4% [4] OR 1.5, 0.38 6.07; p=0.56) In T1DM IBD patients, QoL scores were significantly lower in the general health and vitality domains compared to T1DM only patients (p=0.004 and p=0.041, respectively; Fig. 1) Adverse QoL was not explained by changes in the glycaemic control (Fig. 2) In conclusion, the prevalence of IBD in T1DM was increased six fold compared with that in the control population. 
However, LETTERS TO THE EDITOR Available from: URL: http:/www.jgld.ro/2015/1/22.html", "venue": "Journal of gastrointestinal and liver diseases JGLD", "year": 2015.0, "author_names": ["Hugo A Penny", "John S Leeds", "Matthew Kurien", "Anastasios Averginos", "Andrew D Hopper", "Marios Hadjivassiliou", "Solomon H Tesfaye", "David S Sanders"], "n_citations": 9, "n_key_citations": 0, "score": 0}]} -{"query": "vocabulary Learning Strategies of english teacher", "session_id": 6158514498257635, "user_id": 4386395818129326, "candidates": [{"corpus_id": 158137449, "title": "Vocabulary Learning Strategies of English for Specific Purposes Students at Agricultural University of Georgia", "abstract": "The importance of teaching ESP to students not majoring in English is discussed. The role of vocabulary learning in ESP is emphasized. The article attempts to add an insight to Georgian experience of teaching English to students of Agriculture to the existing studies on the use of vocabulary learning strategies in ESP. The students should become aware of the importance of language learning strategies and get trained to use them appropriately. The purpose of this study was to investigate the attitude of students towards vocabulary learning methods offered by the textbook and the teacher, as well as the awareness of and the preferred vocabulary teaching /learning strategies among Agriculture University students while they were taking an English for Specific Purpose (ESP) course. Respondents comprised 107 students at Agriculture University of Georgia students. An ESP vocabulary learning questionnaire was administered to the randomly selected students who enrolled in the English for Agriculture as a requirement. It revealed that students are not sufficiently satisfied with the existing state of teaching ESP vocabulary. A conclusion has been made that vocabulary learning strategies have to be purposefully taught, to improve the existing situation.", "venue": "", "year": 2016.0, "author_names": ["Tamar Tskhvitava"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 234450962, "title": "THE KNOWLEDGE OF VOCABULARY LEARNING STRATEGIES PRACTICE FOR ENGLISH AS A SECOND LANGUAGE LEARNERS", "abstract": "Vocabulary is the first step that must be memorize by learner in speaking English as a foreign language. There are two kinds of vocabulary learning strategies which are breadth vocabulary (recognition) and depth vocabulary (recall) In enhancing L2 vocabulary knowledge students need the awareness of learning strategies used. The active students use learning strategies can increase language skills, self confidence and learning motivation in learning activity process. Learning strategies instruct students to learn independently, responsibly, and actively. Thus, students tend to be more confident and motivated when applied learning strategies in the daily learning process. Besides, teacher ought to teach and motivate students to apply students' some learning strategies accordingly. 
Finally after implementing learning strategies, students and teachers need to evaluate the progress students' vocabulary regularly.", "venue": "", "year": 2020.0, "author_names": ["Asti Gumartifa", "Kurnia Saputri", "Sri Yuliani"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 152098391, "title": "Vocabulary learning strategies in internship program at English Teacher Education department of Sunan Ampel State Islamic University Surabaya", "abstract": "Dalam pengajaran dan pembelajaran bahasa, mengenalkan dan menginstruksikan strategi pembelajaran kosakata (vocabulary learning strategies) kepada siswa itu penting. Strategi pembelajaran kosakata membantu pelajar untuk bertanggung jawab untuk pembelajaran mereka sendiri. Oleh karena itu, penelitian mengenai strategi pembelajaran kosakata bisa membantu guru untuk menyadari mengenai hal itu. Terdapat banyak penelitian terkait dengan strategi pembelajaran kosakata. Selain itu, penelitian sebelumnya banyak menggunakan guru profesional atau siswa sebagai partisipan. Namun, penelitian ini berfokus pada mahasiswa yang telah mengambil Program Pengalaman Lapangan (PPL 2) Tujuan dari penelitian ini adalah untuk mengidentifikasi jenis strategi pembelajaran kosakata yang mahasiswa PPL 2 pertimbangkan untuk digunakan siswa dan strategi pembelajaran kosakata yang mahasiswa PPL 2 sering gunakan untuk siswa mereka dalam PPL 2. Sumber data penelitian ini dikumpulkan dari 63 mahasiswa PPL 2 Jurusan Pendidikan Bahasa Inggris. Penelitian ini menggunakan deskriptif kualitatif. Untuk menggumpulkan data, peneliti menggunakan kuesioner. Kuesioner ini terdiri dari tiga puluh strategi pembelajaran kosakata berdasarkan taksonomi Schmitt. Hasil penelitian ini menunjukkan bahwa dari tiga puluh strategi pembelajaran kosakata yang terdapat pada questionnaire, terdapat satu strategi yang dianggap sebagai strategi yang sangat baik, dua lima strategi yang dianggap sebagai strategi yang baik, dan empat strategi yang dianggap sebagai strategi yang buruk untuk digunakan siswa. Dalam praktik mereka, mahasiswa PPL 2 menggunakan semua strategi pembelajaran kosakata yang terdapat pada questionnaire. Khususnya, mahasiswa PPL 2 sering menggunakan empat belas strategi yang terdapat pada kuesioner. Selain itu, hasil penelitian ini menunjukkan bahwa strategi pembelajaran kata menggunakan gambar adalah strategi yang mahasiswa PPL 2 pertimbangkan sebagai strategi yang sangat baik untuk digunakan siswa dan strategi ini paling sering digunakan mahasiswa PPL 2. Selain itu, penelitian ini juga menemukan bahwa semakin berguna suatu strategi yang dianggap oleh mahasiswa PPL 2, frekuensi menggunakan strategi tersebut juga meningkat. Oleh karena itu, dengan melakukan penelitian ini, peneliti berharap bahwa mahasiswa PPL 2 akan mengetahui mengenai strategi pembelajaran kosakata. Selain itu, mereka juga perlu untuk memperluas pengetahuan mereka mengenai jenis strategi pembelajaran kosakata dan bagaimana cara menggunakan dan menginstruksikan strategi pembelajaran kosakata tersebut kepada siswa.", "venue": "", "year": 2016.0, "author_names": ["Frida Ayu Lestari"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213268601, "title": "The Mapping of the Primary School English Vocabulary Learning Strategies: VOLSQUES Questionnaire", "abstract": "Based on the Ministry of National Education policy, English is not a compulsory subject in primary school level due to the Bahasa learning priority. For supporting this policy, Bahasa is taught longer in the classroom. 
Though there is limited support from the government, many primary schools are still involving English in their school curriculum as a local content subject or as an extracurricular. Bogem 2 Primary School in Yogyakarta is one of those which set English as a local content subject. In this school, English was taught between 2015 and 2017. Unfortunately, English is omitted now as there is no more English teacher in this school. However, Grade 5 students who experienced learning English on their grade 1 3 are still learning English from foreign tourists who are frequently visit their classroom. This unique situation becomes the main reason of conducting the present research which focuses on the vocabulary learning strategy mapping used by the Grade 5 students of Bogem 2 Primary School. The researcher applied a descriptive quantitative research method with VOLSQUES questionnaire using 3 scales. The results show that even though the respondents got frequent contacts with the English native speakers, they mostly gained their vocabulary inventory from cartoon movies on televisions.", "venue": "", "year": 2019.0, "author_names": ["Ima Widyastuti", "Adhi Kusuma"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 230703470, "title": "The strategies used by an English teacher in teaching vocabulary to grade three students", "abstract": "Vocabulary is the most important aspects in learning language as without understanding the vocabulary, the learners will not know the meaning at all. In school, it is the duty of the teachers to teach vocabulary to the students. The teacher plays an important role to make the students understand the language well, especially in teaching young learners. The teacher needs to find a suitable strategy in order to make the students understand the material well. This study therefore was aimed to identify the strategies used by an elementary teacher in teaching vocabulary to the third grade students of elementary school and why the teacher used the strategies in teaching vocabulary to grade three students. The writer conducted a qualitative study. The writer observed the English teacher of the third graders of private Elementary School in Surabaya as the subject of the study. The writer focused on the strategies used by the English teacher in an elementary school. It described the techniques and the methods used by the English teacher in teaching vocabulary to the third grade of elementary school. The writer would take the primary data from observing the classroom. Additional supporting data were taken by interviewing the teacher and researcher's reflective journals and notes. The data were analyzed in terms of its frequency of occurance to see the prominent strategies used by the teacher. The result shows that the English teacher used strategies from both visual and verbal. The writer used the theory of Sanusi which separate the teaching strategies into two, visual and verbal technique. From visual strategy, the teacher used mime, gesture, action, visual aids, realia, and picture. From verbal strategy, the teacher used explanation, translation and also using commands. The teacher used those strategies mostly because the teacher followed the teacher's handbook which already provided video for each Unit, students' activity, small quizzes, and more. 
By following the teacher's handbook, the teacher could prepare the learning material to be more organized and the teacher could also predict when the students finished the material of each unit.", "venue": "", "year": 2020.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 63825630, "title": "AN INVESTIGATION OF ENGLISH TEACHING STRATEGIES IN ENHANCING STUDENTS' VOCABULARY IMPLEMENTED BY A PRE SERVICE ENGLISH TEACHER", "abstract": "The need to teach English language appropriately in particular is a challenge for all the teachers in Indonesia. Today, it has become obligatory for the teachers to rethink and renovate their teaching strategies with the changing times. Since there has been a continuous transformation in the teaching methods and techniques all over the world in every subject, methods and techniques for teaching vocabulary need desirable innovation to fulfill students' need of vocabulary learning. This research was aimed to investigate the strategies used by a pre service teacher to improve students vocabulary and students' responses toward the teacher's strategies A qualitative approach with case study design with a pre service English teacher was used and 35 students in one junior high school participated in this study. The data were collected through observations, interview, and questionnaire. The results showed that the teacher used varieties of techniques in numerous methods as his teaching strategies, such as Contextual Teaching and Learning with neighborhood walk, Silent way with pictures and crosswords puzzle, and Total Physical Response with gamification, to enhance students' vocabulary The srategies are proven to be appropriate to be implemented in the classroom and leads to the vocabulary collection improvement for most students in the classroom", "venue": "", "year": 2016.0, "author_names": ["Fatah Huda"], "n_citations": 3, "n_key_citations": 0, "score": 1}, {"corpus_id": 210554273, "title": "Contextualised teacher designed workshops based on cognitive strategies for vocabulary learning", "abstract": "This qualitative action research study analyses what is informed about the use of contextualised teacher designed workshops (built up of five lessons each, addressing the English language skills) based on explicit cognitive strategies in regard to vocabulary learning among tenth graders in E.I Santa Ana. The participants of the study were twenty students and the teacher who performed three roles: as a language teacher, a researcher and a materials developer. The instruments used to collect data were students' artefacts, teacher's field notes and think aloud protocol. The findings evinced that the parameters of particularity, practicality and possibility underlying the contextualized workshops for vocabulary learning designed by the teacher, generated suggestive and thought provoking activities in which the students' real life experiences were reflected and stimulated thinking. Likewise, the conscious application of cognitive strategies as a key reflection process for vocabulary learning involved the association of images, prior knowledge activation, classifying, using skimming, scanning and making predictions for learning new words joyfully, promoting in this way, the students' participation. Aditionally, an improvement in vocabulary learning was displayed since activities integrated conceptual, grammar, phonological and orthographical features of words, although spelling is in the initial phase. 
Finally, the workshops were adapted, articulated to the school project on sexual education and then, implemented with 1680 students from seventh to ninth grades.", "venue": "", "year": 2019.0, "author_names": ["Mauricio Tapias Cadena"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 143468581, "title": "The Most and Least Frequent Vocabulary Learning Strategies of High School English Language Learners", "abstract": "This study emphasized the most and least frequent vocabulary learning strategies that English language teachers encourage students to use, and the strategies that students actually use to build their vocabulary. Finding out whether the students' most used strategies were teacher encouraged or independently learned was another point of interest. The participants included 20 male and 23 female learners of English of ages 18 to 22, all of them students in the Arts program at a Southern Congolese high school. They completed a Likert scale questionnaire of 34 statements and four short answer questions. Statistical and content analysis methods were employed. The study revealed contextual guessing and dictionary use to be the most frequently encouraged and used strategies, whereas pronunciation and flashcards were the least frequently encouraged and used. These strategies showed no significant difference between the teacher encouraged and the student used strategies, which provided evidence about the important role that language teachers play in students' learning in general, and in strategy in particular. Furthermore, the majority of participants attributed their frequently used strategies to their teachers' practices and advice. Further discussion stresses the potential reasons why pronunciation receives less attention.", "venue": "", "year": 2014.0, "author_names": ["Jean Kaya", "Krassimira D Charkova"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 216976620, "title": "The Strategies Used By English Teacher to Teach Vocabulary (A Study at Several MAS in Aceh Besar)", "abstract": "ABSTRACT This study is entitled \"The Strategies used by English Teacher to Teach Vocabulary (A Study at Several MAS in Aceh Besar)\" Its aims were to find out the teachers' strategies in teaching English vocabulary and to identify the obstacles faced by the teachers in implementing the strategies. This research was qualitative research. In collecting the data, observation and interview were applied. The data were analyzed by using descriptive analysis. The findings of the analyses suggest that the English teachers at the three schools used their own strategies which are the combination of several strategies proposed by experts such as Word Map Strategy, Scavenger Hunt Strategy and so on. Some obstacles faced by the teachers in implementing the strategies were that they had limited time for focusing on vocabulary and less supporting facilities. Another difficulty is related to the students' low ability to master the vocabulary words. However, the students seem to enthusiastically engage in learning the vocabulary words.", "venue": "", "year": 2018.0, "author_names": ["Karuni Humaira Arta"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 216917370, "title": "VOCABULARY LEARNING STRATEGY USED BY FEMALE STUDENTS OF IAIN SALATIGA MAJORED IN ENGLISH EDUCATION AND TEACHER TRAINING FACULTY: A GENDER PERSPECTIVE", "abstract": "Strategy in learning a language has its' classification on its target skill that will be achieved. 
One of the target that could be noticed is when language learner intended to posess a varied vocabulary in their mind and to express it appropriately. This study was projected to explore the vocabulary learning strategy applied by the female undergraduate students in an Islamic university. 22 female students involved in the study and they filled up the open ended questionnaire and engaged in interview to collect the Vocabulary learning strategies. The questionnaire items were adapted from Schmitt taxonomy (2000) which is in English version. The students were facing the fourth semester when the VLS study was conducted. The finding indicated that the variation gender result in the strategy described the general picture of the VLS application in IAIN Salatiga. Each student actually possessed the different characteristics in implementing the VLS and it was not only tied to the gender difference.", "venue": "", "year": 2018.0, "author_names": ["Dwi Erna Susanti"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Monte carlo tree search for scheduling activity recognition", "session_id": 2710538745736922, "user_id": 955144070587600, "candidates": [{"corpus_id": 302380, "title": "Monte Carlo Tree Search for Scheduling Activity Recognition", "abstract": "This paper addresses recognition of human activities with stochastic structure, characterized by variable space time arrangements of primitive actions, and conducted by a variable number of actors. Our approach classifies the activity of interest as well as identifies the relevant foreground in the video. Each activity representation is considered as a mixture distribution of BoWs captured by a Sum Product Network (SPN) In our approach, SPN represents a linear mixture of many bags of words (BoWs) where each BoW represents an important foreground part of the activity. This mixture distribution is efficiently computed by organizing the BoWs in a hierarchy, where children BoWs are nested within parent BoWs. SPN allows us to model this mixture since it consists of terminal nodes representing BoWs, product nodes, and sum nodes organized in a number of layers. The products are aimed at encoding particular configurations of primitive actions, and the sums serve to capture their alternative configurations. SPN inference amounts to parsing the SPN graph, which yields the most probable explanation (MPE) of the video foreground. SPN inference has linear complexity in the number of nodes, under fairly general conditions, enabling fast and scalable recognition. The connectivity of SPN and the parameters of BoW distributions are learned under weak supervision using a variational EM algorithm. For our evaluation, we have compiled and annotated a new Volleyball dataset. 
Our classification accuracy and localization results are superior to those of the state of the art on current benchmarks as well as our Volleyball datasets.", "venue": "2013 IEEE International Conference on Computer Vision", "year": 2013.0, "author_names": ["Mohamed R Amer", "Sinisa Todorovic", "Alan Fern", "Song-Chun Zhu"], "n_citations": 45, "n_key_citations": 4, "score": 1}, {"corpus_id": 5703351, "title": "Combining Monte Carlo and hyper heuristic methods for the multi mode resource constrained multi project scheduling problem", "abstract": "Investigates the novel solution structures arising in multi project scheduling.Presents specific algorithm components for scheduling of multiple projects.Combines all the algorithm components with a hyper heuristic and memetic algorithm.Significantly outperforms other methods on a set of \"hidden\" instances.Produces new best solutions for some long standing multi mode PSPLIB instances. Multi mode resource and precedence constrained project scheduling is a well known challenging real world optimisation problem. An important variant of the problem requires scheduling of activities for multiple projects considering availability of local and global resources while respecting a range of constraints. A critical aspect of the benchmarks addressed in this paper is that the primary objective is to minimise the sum of the project completion times, with the usual makespan minimisation as a secondary objective. We observe that this leads to an expected different overall structure of good solutions and discuss the effects this has on the algorithm design. This paper presents a carefully designed hybrid of Monte Carlo tree search, novel neighbourhood moves, memetic algorithms, and hyper heuristic methods. The implementation is also engineered to increase the speed with which iterations are performed, and to exploit the computing power of multicore machines. Empirical evaluation shows that the resulting information sharing multi component algorithm significantly outperforms other solvers on a set of \"hidden\" instances, i.e. instances not available at the algorithm design phase.", "venue": "Inf. Sci.", "year": 2016.0, "author_names": ["Shahriar Asta", "Daniel Karapetyan", "Ahmed Kheiri", "Ender Ozcan", "Andrew J Parkes"], "n_citations": 53, "n_key_citations": 10, "score": 0}, {"corpus_id": 69525759, "title": "Guest Editorial Special Issue on Deep/Reinforcement Learning and Games", "abstract": "Deep learning (DL) and reinforcement learning (RL) have been applied with great success to many games, including Go and Atari 2600 games. Monte Carlo Tree Search (MCTS) developed in 2006, can be viewed as a kind of online RL. This technique has greatly improved the level of Go playing programs. MCTS has since become the state of the art for many other games including Hex, Havannah, and general game playing, and has found much success in applications as diverse as scheduling, unit commitment problems, and probabilistic planning. DL has transformed fields such as image and video recognition and speech understanding. In computer games, DL started making its mark in 2014, when teams from the University of Edinburgh and Google DeepMind independently applied deep convolutional neural networks (DCNNs) to the problem of expertmove prediction in Go.Clark and Storkey's DCNN achieved a move prediction rate of 44% exceeding all previously published results. 
DeepMind's publication followed soon after, with a DCNN that reached 55% The combination of DL and RL led to great advances in Atari 2600 game playing, and to the ultimate breakthrough in computer Go. In 2017, DeepMind proposed a new deep reinforcement learning (DRL) algorithm and developed AlphaGo Zero, which is significant for not requiring any human knowledge of Go. By removing the requirement for domain knowledge, DRL is also flexible in that the method can be applied to a wide range of games and problems, ushering in a variety of new research opportunities. In this special issue, we are delighted to bring you eight articles on applying DL/RL related techniques to games research.", "venue": "IEEE Trans. Games", "year": 2018.0, "author_names": ["I-Chen Wu", "Chang-Shing Lee", "Y Tian", "Martin Muller"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 21387891, "title": "An architecture algorithm co design of artificial intelligence for Trax player", "abstract": "Trax is a two player game of simple rules but strategic depth. This article proposes an FPGA based artificial intelligence for its endless version called Supertrax. An implementable algorithm is developed by combining several strategies and techniques functioning at various levels of software and hardware. These methods are developed using heuristics, multi level pattern recognition, Monte Carlo Tree Search, and path based scheduling. A specific architecture has also been described to accommodate this algorithm. The proposal contributes a novel idea on this subject and its performance will be shown in the design competition to be held in FPT'15.", "venue": "2015 International Conference on Field Programmable Technology (FPT)", "year": 2015.0, "author_names": ["Qing Lu", "Chiu-Wing Sham", "Francis Chung-Ming Lau"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 42026901, "title": "Advances in Computer Science and Information Technology, AST/UCMA/ISA/ACN 2010 Conferences, Miyazaki, Japan, June 23 25, 2010. Joint Proceedings", "abstract": "Information Security and Assurance. Fuzzy Based Threat Analysis in Total Hospital Information System. An ID Based Anonymous Signcryption Scheme for Multiple Receivers Secure in the Standard Model. A Supervised Locality Preserving Projections Based Local Matching Algorithm for Face Recognition. Information Systems Security Criticality and Assurance Evaluation. Security Analysis of 'Two Factor User Authentication in Wireless Sensor Networks' Directed Graph Pattern Synthesis in LSB Technique on Video Steganography. Feature Level Fusion of Face and Palmprint Biometrics by Isomorphic Graph Based Improved K Medoids Partitioning. Post quantum Cryptography: Code Based Signatures. Security Analysis of the Proposed Practical Security Mechanisms for High Speed Data Transfer Protocol. A Fuzzy Based Dynamic Provision Approach for Virtualized Network Intrusion Detection Systems. An Active Intrusion Detection System for LAN Specific Attacks. Analysis on the Improved SVD Based Watermarking Scheme. Advanced Communication and Networking. Applications of Adaptive Belief Propagation Decoding for Long Reed Solomon Codes. Dynamic Routing for Mitigating the Energy Hole Based on Heuristic Mobile Sink in Wireless Sensor Networks. Grammar Encoding in DNA Like Secret Sharing Infrastructure. HATS: High Accuracy Timestamping System Based on NetFPGA. A Roadside Unit Placement Scheme for Vehicular Telematics Networks. Concurrent Covert Communication Channels. 
Energy Efficiency of Collaborative Communication with Imperfect Frequency Synchronization in Wireless Sensor Networks. High Performance MAC Architecture for 3GPP Modem. Modified Structures of Viterbi Alogrithm for Forced State Method in Concatenated Coding System of ISDB T. A New Cross Layer Unstructured P2P File Sharing Protocol over Mobile Ad Hoc Network. A Model for Interference on Links in Inter working Multi hop Wireless Networks. An Optimum ICA Based Multiuser Data Separation for Short Message Service. Advanced Computer Science and Information Technology. Multiple Asynchronous Requests on a Client Based Mashup Page. Using an Integrated Ontology Database to Categorize Web Pages. Topic Detection by Topic Model Induced Distance Using Biased Initiation. Mining Significant Least Association Rules Using Fast SLP Growth Algorithm. Maximized Posteriori Attributes Selection from Facial Salient Landmarks for Face Recognition. Agent Based Approach to Regression Testing. A Numerical Study on B&B Algorithms for Solving Sum Of Ratios Problem. Development of a Digital Textbook Standard Format Based on XML. A Pattern Based Representation Approach for Online Discourses. A Fault Tolerant Architecture for Transportation Information Services of E Government. Design and Implementation of Binary Tree Based Proactive Routing Protocols for Large MANETS. Extract Semantic Information from WordNet to Improve Text Classification Performance. Managing Ubiquitous Scientific Knowledge on Semantic Web. A Semantic Pattern Approach to Managing Scientific Publications. A Bootstrap Software Reliability Assessment Method to Squeeze Out Remaining Faults. Markov Chain Monte Carlo Random Testing. An Integrated Approach to Detect Fault Prone Modules Using Complexity and Text Feature Metrics. Ubiquitous Computing and Multimedia Applications. An Effective Video Steganography Method for Biometric Identification. A Video Coding Technique Using Octagonal Motion Search and BTC PF Method for Fast Reconstruction. Rough Set Approach in Ultrasound Biomicroscopy Glaucoma Analysis. Video Copy Detection: Sequence Matching Using Hypothesis Test. An XML Based Digital Textbook and Its Educational Effectiveness. SIMACT: A 3D Open Source Smart Home Simulator for Activity Recognition. Design of an Efficient Message Collecting Scheme for the Slot Based Wireless Mesh Network. A Novel Approach Based on Fault Tolerance and Recursive Segmentation to Query by Humming. Chinese Prosody Generation Based on C ToBI Representation for Text To Speech. CAS4UA: A Context Aware Service System Based on Workflow Model for Ubiquitous Agriculture. A Power Control Scheme for an Energy Efficient MAC Protocol. Towards the Designing of a Robust Intrusion Detection System through an Optimized Advancement of Neural Networks.", "venue": "AST/UCMA/ISA/ACN", "year": 2010.0, "author_names": ["", "Tai-Hoon Kim", "Hojjat Adeli"], "n_citations": 59, "n_key_citations": 0, "score": 0}, {"corpus_id": 39022843, "title": "RoboCup 2004: Robot Soccer World Cup VIII", "abstract": "RoboCup 2004 Overview. RoboCup 2004 Overview. Award Winner Papers. Map Based Multiple Model Tracking of a Moving Object. UCHILSIM: A Dynamically and Visually Realistic Simulator for the RoboCup Four Legged League. Full Papers. CommLang: Communication for Coachable Agents. Turning Segways into Robust Human Scale Dynamically Balanced Soccer Robots. A Constructive Feature Detection Approach for Robotic Vision. Illumination Insensitive Robot Self Localization Using Panoramic Eigenspaces. 
A New Omnidirectional Vision Sensor for Monte Carlo Localization. Fuzzy Self Localization Using Natural Features in the Four Legged League. A Behavior Architecture for Autonomous Mobile Robots Based on Potential Fields. An Egocentric Qualitative Spatial Knowledge Representation Based on Ordering Information for Physical Robot Navigation. Sensor Actuator Comparison as a Basis for Collision Detection for a Quadruped Robot. Learning to Drive and Simulate Autonomous Mobile Robots. RoboCupJunior Four Years Later. Evolution of Computer Vision Subsystems in Robot Navigation and Image Classification Tasks. Towards Illumination Invariance in the Legged League. Using Layered Color Precision for a Self Calibrating Vision System. Getting the Most from Your Color Camera in a Color Coded World. Combining Exploration and Ad Hoc Networking in RoboCup Rescue. Robust Multi robot Object Localization Using Fuzzy Logic. Visual Robot Detection in RoboCup Using Neural Networks. Extensions to Object Recognition in the Four Legged League. Predicting Opponent Actions by Observation. A Model Based Approach to Robot Joint Control. Evolutionary Gait Optimization Using a Fitness Function Based on Proprioception. Optic Flow Based Skill Learning for a Humanoid to Trap, Approach to, and Pass a Ball. Learning to Kick the Ball Using Back to Reality. Cerebellar Augmented Joint Control for a Humanoid Robot. Dynamically Stable Walking and Kicking Gait Planning for Humanoid Soccer Robots. An Algorithm That Recognizes and Reproduces Distinct Types of Humanoid Motion Based on Periodically Constrained Nonlinear PCA. Three Dimensional Smooth Trajectory Planning Using Realistic Simulation. Plug and Play: Fast Automatic Geometry and Color Calibration for Cameras Tracking Robots. Real Time Adaptive Colour Segmentation for the RoboCup Middle Size League. Visual Tracking and Localization of a Small Domestic Robot. A Vision Based System for Goal Directed Obstacle Avoidance. Object Tracking Using Multiple Neuromorphic Vision Sensors. Interpolation Methods for Global Vision Systems. A Method of Pseudo Stereo Vision from Images of Cameras Shutter Timing Adjusted. Automatic Distance Measurement and Material Characterization with Infrared Sensors. Posters. A Novel Search Strategy for Autonomous Search and Rescue Robots. World Modeling in Disaster Environments with Constructive Self Organizing Maps for Autonomous Search and Rescue Robots. Approaching Urban Disaster Reality: The ResQ Firesimulator. Stochastic Map Merging in Rescue Environments. Orpheus Universal Reconnaissance Teleoperated Robot. Navigation Controllability of a Mobile Robot Population. Sharing Belief in Teams of Heterogeneous Robots. Formulation and Implementation of Relational Behaviours for Multi robot Cooperative Systems. Cooperative Planning and Plan Execution in Partially Observable Dynamic Domains. Exploring Auction Mechanisms for Role Assignment in Teams of Autonomous Robots. A Descriptive Language for Flexible and Robust Object Recognition. Modular Learning System and Scheduling for Behavior Acquisition in Multi agent Environment. Realtime Object Recognition Using Decision Tree Learning. Optimizing Precision of Self Localization in the Simulated Robotics Soccer. Path Optimisation Considering Dynamic Constraints. Analysis by Synthesis, a Novel Method in Mobile Robot Self Localization. Robots from Nowhere. Design and Implementation of Live Commentary System in Soccer Simulation Environment. Towards a League Independent Qualitative Soccer Theory for RoboCup. 
Motion Detection and Tracking for an AIBO Robot Using Camera Motion Compensation and Kalman Filtering. The Use of Gyroscope Feedback in the Control of the Walking Gaits for a Small Humanoid Robot. The UT Austin Villa 2003 Champion Simulator Coach: A Machine Learning Approach. ITAS and the Reverse RoboCup Challenge. SPQR RDK: A Modular Framework for Programming Mobile Robots. Mobile Autonomous Robots Play Soccer An Intercultural Comparison of Different Approaches Due to Different Prerequisites. From Games to Applications: Component Reuse in Rescue Robots.", "venue": "RoboCup", "year": 2005.0, "author_names": ["Daniele Nardi", "Martin A Riedmiller", "Claude Sammut", "Jose Santos-Victor"], "n_citations": 43, "n_key_citations": 0, "score": 0}, {"corpus_id": 21281289, "title": "PRICAI 2008: Trends in Artificial Intelligence, 10th Pacific Rim International Conference on Artificial Intelligence, Hanoi, Vietnam, December 15 19, 2008. Proceedings", "abstract": "Keynotes. What Shall We Do Next? The Challenges of AI Midway through Its First Century. Exposing the Causal Structure of Processes by Learning CP Logic Programs. Building Structured Web Community Portals Via Extraction, Integration, and Mass Collaboration. Large Scale Corpus Analysis and Recent Applications. On the Computability and Complexity Issues of Extended RDF. Toward Formalizing Common Sense Psychology: An Analysis of the False Belief Task. Computing Stable Skeletons with Particle Filters. Using Semantic Web Technologies for the Assessment of Open Questions. Quantifying Commitment. Temporal Data Mining for Educational Applications. Dual Properties of the Relative Belief of Singletons. Alternative Formulations of the Theory of Evidence Based on Basic Plausibility and Commonality Assignments. Non negative Sparse Principal Component Analysis for Multidimensional Constrained Optimization. Sentence Compression by Removing Recursive Structure from Parse Tree. An ATP of a Relational Proof System for Order of Magnitude Reasoning with Negligibility, Non closeness and Distance. A Heuristic Data Reduction Approach for Associative Classification Rule Hiding. Evolutionary Computation Using Interaction among Genetic Evolution, Individual Learning and Social Learning. Behavior Learning Based on a Policy Gradient Method: Separation of Environmental Dynamics and State Values in Policies. Developing Evaluation Model of Topical Term for Document Level Sentiment Classification. Learning to Identify Comparative Sentences in Chinese Text. Efficient Exhaustive Generation of Functional Programs Using Monte Carlo Search with Iterative Deepening. Identification of Subject Shareness for Korean English Machine Translation. Agent for Predicting Online Auction Closing Price in a Simulated Auction Environment. Feature Selection Using Mutual Information: An Experimental Study. Finding Orthogonal Arrays Using Satisfiability Checkers and Symmetry Breaking Constraints. Statistical Model for Japanese Abbreviations. A Novel Heuristic Algorithm for Privacy Preserving of Associative Classification. Time Frequency Analysis of Vietnamese Speech Inspired on Chirp Auditory Selectivity. Meta level Control of Multiagent Learning in Dynamic Repeated Resource Sharing Problems. Ontology Based Natural Query Retrieval Using Conceptual Graphs. Optimal Multi issue Negotiation in Open and Dynamic Environments. The Density Based Agglomerative Information Bottleneck. State Based Regression with Sensing and Knowledge. Some Results on the Completeness of Approximation Based Reasoning. 
KT and S4 Satisfiability in a Constraint Logic Environment. Clustering with Feature Order Preferences. Distributed Memory Bounded Path Search Algorithms for Pervasive Computing Environments. Using Cost Distributions to Guide Weight Decay in Local Search for SAT. Fault Resolution in Case Based Reasoning. Constrained Sequence Classification for Lexical Disambiguation. Map Building by Sequential Estimation of Inter feature Distances. Document Based HITS Model for Multi document Summarization. External Force for Active Contours: Gradient Vector Convolution. Representation Grounded Information. Learning from the Past with Experiment Databases. An Argumentation Framework Based on Conditional Priorities. Knowledge Supervised Text Classification with No Labeled Documents. Constrained Local Regularized Transducer for Multi Component Category Classification. Low Resolution Gait Recognition with High Frequency Super Resolution. NIIA: Nonparametric Iterative Imputation Algorithm. Mining Multidimensional Data through Element Oriented Analysis. Evolutionary Feature Selections for Face Detection System. A Probabilistic Approach to the Interpretation of Spoken Utterances. Regular Papers. Towards Autonomous Robot Operation: Path Map Generation of an Unknown Area by a New Trapezoidal Approximation Method Using a Self Guided Vehicle and Shortest Path Calculation by a Proposed SRS Algorithm. Exploring Combinations of Ontological Features and Keywords for Text Retrieval. Instance Management Problems in the Role Model of Hozo. Advancing Topic Ontology Learning through Term Extraction. Handling Unknown and Imprecise Attribute Values in Propositional Rule Learning: A Feature Based Approach. Fuzzy Knowledge Discovery from Time Series Data for Events Prediction. Evolution of Migration Behavior with Multi agent Simulation. Constraint Relaxation Approach for Over Constrained Agent Interaction. Structure Extraction from Presentation Slide Information. Combining Local and Global Resources for Constructing an Error Minimized Opinion Word Dictionary. An Improvement of PAA for Dimensionality Reduction in Large Time Series Databases. Stability Margin for Linear Systems with Fuzzy Parametric Uncertainty. An Imperative Account of Actions. Natural Language Interface Construction Using Semantic Grammars. Exploiting the Role of Named Entities in Query Oriented Document Summarization. A Probabilistic Model for Understanding Composite Spoken Descriptions. Fuzzy Communication Reaching Consensus under Acyclic Condition. Probabilistic Nogood Store as a Heuristic. Semantic Filtering for DDL Based Service Composition. Prediction of Protein Functions from Protein Interaction Networks: A Naive Bayes Approach. Multi class Support Vector Machine Simplification. A Syntactic based Word Re ordering for English Vietnamese Statistical Machine Translation System. A Multi modal Particle Filter Based Motorcycle Tracking System. Bayesian Inference on Hidden Knowledge in High Throughput Molecular Biology Data. Personalized Search Using ODP based User Profiles Created from User Bookmark. Domain Driven Local Exceptional Pattern Mining for Detecting Stock Price Manipulation. A Graph Based Method for Combining Collaborative and Content Based Filtering. Hierarchical Differential Evolution for Parameter Estimation in Chemical Kinetics. Differential Evolution Based on Improved Learning Strategy. SalienceGraph: Visualizing Salience Dynamics of Written Discourse by Using Reference Probability and PLSA. 
Learning Discriminative Sequence Models from Partially Labelled Data for Activity Recognition. Feature Selection for Clustering on High Dimensional Data. Availability of Web Information for Intercultural Communication. Short Papers. Mining Weighted Frequent Patterns in Incremental Databases. Revision of Spatial Information by Containment. Joint Power Control and Subcarrier Allocation in MC CDMA Systems An Intelligent Search Approach. Domain Independent Error Based Simulation for Error Awareness and Its Preliminary Evaluation. A Characterization of Sensitivity Communication Robots Based on Mood Transition. Recommendation Algorithm for Learning Materials That Maximizes Expected Test Scores. A Hybrid Kansei Design Expert System Using Artificial Intelligence. Solving the Contamination Minimization Problem on Networks for the Linear Threshold Model. A Data Driven Approach for Finding the Threshold Relevant to the Temporal Data Context of an Alarm of Interest. Branch and Bound Algorithms to Solve Semiring Constraint Satisfaction Problems. Image Analysis of the Relationship between Changes of Cornea and Postmortem Interval. Context Based Term Frequency Assessment for Text Classification. Outlier Mining on Multiple Time Series Data in Stock Market. Generating Interactive Facial Expression of Communication Robots Using Simple Recurrent Network. Effects of Repair Support Agent for Accurate Multilingual Communication. Towards Adapting XCS for Imbalance Problems. Personalized Summarization Agent Using Non negative Matrix Factorization. Interactive Knowledge Acquisition and Scenario Authoring. Reconstructing Hard Problems in a Human Readable and Machine Processable Way. Evolving Intrusion Detection Rules on Mobile Ad Hoc Networks. On the Usefulness of Interactive Computer Game Logs for Agent Modelling. An Empirical Study on the Effect of Different Similarity Measures on User Based Collaborative Filtering Algorithms. Using Self Organizing Maps with Learning Classifier System for Intrusion Detection. New Particle Swarm Optimization Algorithm for Solving Degree Constrained Minimum Spanning Tree Problem. Continuous Pitch Contour as an Improvement Feature for Music Information Retrieval by Humming/Singing. Classification Using Improved Hybrid Wavelet Neural Networks. Online Classifier Considering the Importance of Attributes. An Improved Tabu Search Algorithm for 3D Protein Folding Problem. Transferring Knowledge from Another Domain for Learning Action Models. Texture and Target Orientation Estimation from Phase Congruency. Query Classification and Expansion for Translation Mining Via Search Engines.", "venue": "PRICAI", "year": 2008.0, "author_names": ["Tu Bao Ho", "Zhi-Hua Zhou"], "n_citations": 48, "n_key_citations": 0, "score": 0}, {"corpus_id": 56487980, "title": "AI*IA 2001: advances in artificial intelligence 7th Congress of the Italian Association for Artificial Intelligence, Bari, Italy, September 25 28, 2001 proceedings", "abstract": "Machine Learning. A Monte Carlo Approach to Hard Relational Learning Problems. Boosting as a Monte Carlo Algorithm. Stepwise Induction of Model Trees. Evaluation Methods for Focused Crawling. A Knowledge Based Neurocomputing Approach to Extract Refined Linguistic Rules from Data. RBF Networks Exploiting Supervised Data in the Adaptation of Hidden Neuron Parameters. A New Machine Learning Approach to Fingerprint Classification. An Automatic Accompanist Based on Hidden Markov Models. Resampling vs Reweighting in Boosting a Relational Weak Learner. 
Learning Logic Models for Automated Text Categorization. User Profiling in an Application of Electronic Commerce. Automated Reasoning. Computing Spatial Similarity by Games. An Analysis of Backjumping and Trivial Truth in Quantified Boolean Formulas Satisfiability. Abduction with Penalization in Logic Programming. Causal Simulation and Diagnosis of Dynamic Systems. Critical Parallelization of Local Search for MAX SAT. Product Design as Product Revise: The Case of Chemical Compounds. Knowledge Representation. Belief Revision and the Ramsey Test: A Solution. Supporting Product Configuration in a Virtual Store. Characterising Concept's Properties in Ontologies. Multi agent Systems. Reasoning about Dynamic Scenes Using Autonomous Agents. Tuning the Collaboration Level with Autonomous Agents: A Principled Theory. The Definition of Legal Relations in a BDI Multiagent Framework. Reasoning about Actions in a Multiagent Domain. L*MASS: A Language for Situated Multi agent Systems. An Interactive System for Generating Arguments in Deceptive Communication. An Agent Based Approach to Virtual Market Place Simulation. Natural Language Processing. Open Domain Question/Answering on the WEB. User Adapted Image Descriptions from Annotated Knowledge Sources. Wide Coverage Incremental Parsing by Learning Attachment Preferences. Flexible Parsing Architectures for NLP Applications. Information Presentation Adapted to the \"User in Context\" A Hybrid Approach to Optimize Feature Selection Process in Text Classification. Perception, Vision, and Robotics. Concepts for Anchoring in Robotics. Symbolic and Conceptual Representation of Dynamic Scenes: Interpreting Situation Calculus on Conceptual Spaces. A Non traditional Omnidirectional Vision System with Stereo Capabilities for Autonomous Robots. Architectural Scenes Reconstruction from Uncalibrated Photos and Map Based Model Knowledge. A SOM/ARSOM Hierarchy for the Description of Dynamic Scenes. Planning and Scheduling. A Constraint Based Architecture for Flexible Support to Activity Scheduling. Planning and Execution in Dynamic Environments. An Agent Architecture for Planning in a Dynamic Environment.", "venue": "", "year": 2001.0, "author_names": ["Associazione italiana per l'intelligenza artificiale Congress", "Floriana Esposito"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 106972824, "title": "Advances in artificial intelligence IBERAMIA 2002 8th Ibero American Conference on AI, Seville, Spain, November 12 15, 2002 proceedings", "abstract": "Knowledge Representation and Reasoning. Improving Naive Bayes Using Class Conditional ICA. Detecting Events and Topics by Using Temporal References. Asymmetric Neighbourhood Selection and Support Aggregation for Effective Classification. Incremental Learning of Tree Augmented Naive Bayes Classifiers. A Comparison of PCA and GA Selected Features for Cloud Field Classification. Filtering Noisy Continuous Labeled Examples. Designing Adaptive Hypermedia for Internet Portals: A Personalization Strategy Featuring Case Base Reasoning with Compositional Adaptation. Improving Classification Accuracy of Large Test Sets Using the Ordered Classification Algorithm. A Comparative Study of Some Issues Concerning Algorithm Recommendation Using Ranking Methods. Properties and Complexity in Feasible Logic Based Argumentation for Electronic Commerce. A Hybrid CBR Model for Forecasting in Complex Domains. Generalized Modifiers as an Interval Scale: Towards Adaptive Colorimetric Alterations. 
A Conceptual Graph and RDF(S) Approach for Representing and Querying Document Content. Automatic Optimization of Multi paradigm Declarative Programs. SLFD Logic: Elimination of Data Redundancy in Knowledge Representation. Indeed: Interactive Deduction on Horn Clause Theories. Restricted Trees and Reduction Theorems in Multiple Valued Logics. Max CSP Approach for Software Diagnosis. Machine Learning. Local Search Methods for Learning Bayesian Networks Using a Modified Neighborhood in the Space of DAGs. A Quasi Metric for Machine Learning. Shared Ensemble Learning Using Multi trees. A GRASP Algorithm for Clustering. An Analysis of the Pheromone Q Learning Algorithm. SOAP: Efficient Feature Selection of Numeric Attributes. Uncertainty and Fuzzy Systems. Designing Fuzzy Relations in Orthogonal Persistence Object Oriented Database Engines. An Interactive Framework for Open Queries in Decision Support Systems. Integration of Fault Detection and Diagnosis in a Probabilistic Logic Framework. Series Parallel and Tree Decomposition Approaches for Fuzzy Constraint Networks. Change Detection Using Contextual Information and Fuzzy Entropy Principle. Improving Simple Linguistic Fuzzy Models by Means of the Weighted COR Methodology. A Semiquantitative Approach to Study Semiqualitative Systems. Genetic Algorithms. Multi objective Optimization Evolutionary Algorithms Applied to Paroxysmal Atrial Fibrillation Diagnosis Based on the k Nearest Neighbours Classifier. Population Studies for the Gate Matrix Layout Problem. New Generic Hybrids Based upon Genetic Algorithms. Genetic Algorithms and Biological Images Restoration: Preliminary Report. Evolution of Multi adaptive Discretization Intervals for a Rule Based Genetic Learning System. An Immunological Approach to Combinatorial Optimization Problems. A Genetic Algorithm for Solving a Production and Delivery Scheduling Problem with Time Windows. A Prediction System for Cardiovascularity Diseases Using Genetic Fuzzy Rule Based Systems. Multiple Crossover per Couple with Selection of the Two Best Offspring: An Experimental study with the BLX Crossover Operator for Real Coded Genetic Algorithms. Neural Nets. Adaptive Random Fuzzy Cognitive Maps. Convex Hull in Feature Space for Support Vector Machines. Applying Neural Networks and Genetic Algorithms to the Separation of Sources. A Neural Associative Pattern Classifier. Rule Extraction from Radial Basis Function Networks by Using Support Vectors. Empirical Performance Assessment of Nonlinear Model Selection Techniques. An Efficient Neural Network Algorithm for the p Median Problem. Improving Cellular Nonlinear Network Computational Capabilities. Learning to Assess from Pair Wise Comparisons. Forecasting Time Series Combining Machine Learning and Box Jenkins Time Series. Gaussian Synapse Networks for Hyperspectral Image Segmentation. An Associative Multivalued Recurrent Network. Machine Learning Models for Online Dynamic Security Assessment of Electric Power Systems. On Endmember Detection in Hyperspectral Images with Morphological Associative Memories. Application of Learning Machine Methods to 3D Object Modeling. Distributed Artificial Intelligence and Multi agent Systems. Enriching Information Agents' Knowledge by Ontology Comparison: A Case Study. Negotiation among Autonomous Computational Agents. Interface Agents Development in MASA for Human Integration in Multiagent Systems. Comparing Distributed Reinforcement Learning Approaches to Learn Agent Coordination. Empowered Situations of Autonomous Agents. 
Distributed Agenda Management through Decentralised Vowels Co ordination Approach. Meta modelling in Agent Oriented Software Engineering. Multi agent Systems and Network Management A Positive Experience on Unix Environments. Formal Specification of Opinions Applied to the Consensus Problem. Natural Language Processing. Practical NLP Based Text Indexing. Definite Description Resolution Enrichment with WordNet Domain Labels. A Hidden Markov Model Approach to Word Sense Disambiguation. A Simple Connectionist Approach to Language Understanding in a Dialogue System. Wide Coverage Spanish Named Entity Extraction. Terminology Retrieval: Towards a Synergy between Thesaurus and Free Text Searching. Mixed Parsing of Tree Insertion and Tree Adjoining Grammars. Task Oriented Dialogue Processing Using Multiagents Theory. Automatic Adaptation of a Natural Language Interface to a Robotic System. Intelligent Tutoring Systems. Semantic Comparison of Texts for Learning Environments. I PETER: Modelling Personalised Diagnosis and Material Selection for an Online English Course. An Agent Based System for Supporting Learning from Case Studies. Emergent Diagnosis via Coalition Formation. Adaptive Bayes. Control and Real Time. A Dynamic Scheduling Algorithm for Real Time Expert Systems. A Process Knowledge Based Controller for Maneuverability Improvement of a Nonlinear Industrial Process. An Architecture for Online Diagnosis of Gas Turbines. STeLLa v2.0: Planning with Intermediate Goals. Scheduling as Heuristic Search with State Space Reduction. Domain Independent Online Planning for STRIPS Domains. A Pomset Based Model for Estimating Workcells' Setups in Assembly Sequence Planning. New Methodology for Structure Identification of Fuzzy Corollers in Real Time. Robotics. Comparing a Voting Based Policy with Winner Takes All to Perform Action Selection in Motivational Agents. Bayesian Approach Based on Geometrical Features for Validation and Tuning of Solution in Deformable Models. Recognition of Continuous Activities. Kinematic Control System for Car Like Vehicles. Habituation Based on Spectrogram Analysis. Dynamic Schema Hierarchies for an Autonomous Robot. Computer Vision. Monte Carlo Localization in 3D Maps Using Stereo Vision. 3D Complex Scenes Segmentation from a Single Range Image Using Virtual Exploration. Recognizing Indoor Images with Unsupervised Segmentation and Graph Matching. Vision Based System for the Safe Operation of a Solar Power Tower Plan.", "venue": "", "year": 2002.0, "author_names": ["Francisco J Garijo", "Jose C Riquelme", "Miguel Toro"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 61114405, "title": "EurAsia ICT 2002: Information and Communication Technology First EurAsian Conference Shiraz, Iran, October 29 31, 2002 Proceedings", "abstract": "Artificial Intelligence I. Speaker Model and Decision Threshold Updating in Speaker Verification. Application of Constraint Hierarchy to Timetabling Problems. An Intelligent System for Therapy Control in a Distributed Organization. Data Mining. Discovering Local Patterns from Multiple Temporal Sequences. The Geometric Framework for Exact and Similarity Querying XML Data. A Mobile System for Extracting and Visualizing Protein Protein Interactions. Discovering Temporal Relation Rules Mining from Interval Data. Multimedia I. An Abstract Image Representation Based on Edge Pixel Neighborhood Information (EPNI) Motion Estimation Based on Temporal Correlations. A New Boundary Matching Algorithm Based on Edge Detection. 
Lookmark: A 2.5D Web Information Visualization System. Artificial Intelligence II. Different Local Search Algorithms in STAGE for Solving Bin Packing Problem. A Prototype for Functionality Based Network Management System. Coaching a Soccer Simulation Team in RoboCup Environment. Security I. Improving Information Retrieval System Security via an Optimal Maximal Coding Scheme. A New Scheme Based on Semiconductor Lasers with Phase Conjugate Feedback for Cryptographic Communications. Parallel Algorithm and Architecture for Public Key Cryptosystem. Specification and Verification of Security Policies in Firewalls. Multimedia II. Image Segmentation Based on Shape Space Modeling. HERMES: File System Support for Multimedia Streaming in Information Home Appliance. Motion Vector Recovery for Error Concealment Based on Macroblock Distortion Modeling. A Memory Copy Reduction Scheme for Networked Multimedia Service in Linux Kernel. Neural Network. Hidden Markov Model and Neural Network Hybrid. Neural Network Based Algorithms for IP Lookup and Packet Classification. Non linear Prediction of Speech Signal Using Artificial Neural Nets. Security II. Web Document Access Control Using Two Layered Storage Structures with RBAC Server. Development of UML Descriptions with USE. FPGA Implementation of Digital Chaotic Cryptography. Multimedia III. Stereo for Recovering Sharp Object Boundaries. Priority Vantage Points Structures for Similarity Queries in Metric Spaces. A High Performance Image Coding Using Uniform Morphological Sampling, Residues Classifying, and Vector Quantization. A Genetic Algorithm for Steiner Tree Optimization with Multiple Constraints Using Prufer Number. Data and Knowledge Engineering I. A New Technique for Participation of Non CORBA Independent Persistent Objects in OTS Transactions. Compositional Modelling of Workflow Processes. EDMIS: Metadata Interchange System for OLAP. An Efficient Method for Controlling Access in Object Oriented Databases. XML I. Extracting Information from XML Documents by Reverse Generating a DTD. Mapping XML Schema to Relational Schema. Flexible Modification of Relational Schema by X2RMap in Storing XML into Relations. B2B Integration Aligning ebXML and Ontology Approaches. Mobile Communication I. An Agent Based Service Discovery Architecture for Mobile Environments. Location Management Using Multicasting HLR in Mobile Networks. Packet Error Probability of Multi carrier CDMA System in Fast/Slow Correlated Fading Plus Interference Channel. Computer Graphics. A Distributed Low Cost Dynamic Multicast Routing Algorithm with Delay Constraints. A New Bandwidth Reduction Method for Distributed Rendering Systems. Neural Networks Based Mesh Generation Method in 2 D. Image Denoising Using Hidden Markov Models. Data and Knowledge Engineering II. Anaphoric Definitions in Description Logic. Storage and Querying of High Dimensional Sparsely Populated Data in Compressed Representation. The GlobData Fault Tolerant Replicated Distributed Object Database. XML II. A Levelized Schema Extraction for XML Document Using User Defined Graphs. Extracting, Interconnecting, and Accessing Heterogeneous Data Sources: An XML Query Based Approach. Mobile Communication II. Call Admission Control in Cellular Mobile Networks: A Learning Automata Approach. An Adaptive Flow Control Scheme for Improving TCP Performance in Wireless Internet. An Adaptive TCP Protocol for Lossy Mobile Environment. Design and Implementation of Application Level Multicasting Services over ATM Networks. 
Digital Libraries and Natural Language Issues. Bon: The Persian Stemmer. Current and Future Features of Digital Journals. Solving Language Problems in a Multilingual Digital Library Federation. Internet and Quality of Service. Performing IP Lookup on Very High Line Speed. A Study of Marking Aggregated TCP and UDP Flows Using Generalized Marking Scheme. Information Society. Towards the Global Information Society: The Enactment of a Regulatory Framework as a Factor of Transparency and Social Cohesion. E learning. On the Application of the Semantic Web Concepts to Adaptive E learning. An Integrated Programming Environment for Teaching the Object Oriented Programming Paradigm. The Current Legislation Covering E learning Provisions for the Visually Impaired in the EU. Mobile Communication III. Monte Carlo Soft Handoff Modeling. A QoS Provision Architecture for Mobile IPv6 over MPLS Using HMAT. A New Propagation Model for Cellular Mobile Radio Communications in Urban Environments Including Tree Effects. Mobile Web Information Systems. A Secure Mobile Agent System Applying Identity Based Digital Signature Scheme. Transmission Time Analysis of WAP over CDMA System Using Turbo Code Scheme. On the Use of New Technologies in Health Care. Wireless Communication Technology I. Hybrid Queuing Strategy to Reduce Call Blocking in Multimedia Wireless Networks. A Dynamic Backoff Scheme to Guarantee QoS over IEEE 802.11 Wireless Local Area Networks. Performance Evaluation of Serial/Parallel Block Coded CDMA System with Complex Spreading in Near/Far Multiple Access Interference and Multi path Nakagami Fading Channel. A Learning Automata Based Dynamic Guard Channel Scheme. Web Based Application. Dynamic System Simulation on the Web. Using Proximity Information for Load Balancing in Geographically Distributed Web Server Systems. Strategic Tool for Assessment of the Supply and Demand Relationship between ASPs and SMEs for Competitive Advantage. Intelligent Agents I. Trust and Commitment in Dynamic Logic. Modelling Heterogeneity in Multi Agent Systems. Pricing Agents for a Group Buying System. Evolution of Cooperation in Multiagent Systems. Real Time Systems. A Dynamic Window Based Approximate Shortest Path Re computation Method for Digital Road Map Databases in Mobile Environments. Web Based Process Control Systems: Architectural Patterns, Data Models, and Services. A Comparison of Techniques to Estimate Response Time for Data Placement. Using a Real Time Web Based Pattern Recognition System to Search for Component Patterns Database. Wireless Communication Technology II. An Adaptive Call Admission Control to Support Flow Handover in Wireless Ad Hoc Networks. Design of Optimal LA in Personal Communication Services Network Using Simulated Annealing Technique. Secure Bluetooth Piconet Using Non anonymous Group Key. Differentiated Bandwidth Allocation and Power Saving for Wireless Personal Area Networks. Software Engineering I. Combining Extreme Programming with ISO 9000. The Class Cohesion Using the Reference Graph G1 and G2. Process Oriented Interactive Simulation of Software Acquisition Projects. Automatic Design Patterns Identification of C+ Programs. Intelligent Agents II. Specifying the Merging of Desires into Goals in the Context of Beliefs. The Illegal Copy Protection Using Hidden Agent. Mobile Agent Based Misuse Intrusion Detection Rule Propagation Model for Distributed System. Algorithm and Computer Theory. H Colorings of Large Degree Graphs. 
Hyper Star Graph: A New Interconnection Network Improving the Network Cost of the Hypercube. Sequential Consistency as Lazy Linearizability. Embedding Full Ternary Trees into Recursive Circulants. Wireless Communication Technology III. A Handoff Priority Scheme for TDMA/FDMA Based Cellular Networks. On Delay Times in a Bluetooth Piconet: The Impact of Different Scheduling Policies. Intelligent Paging Strategy in 3G Personal Communication Systems. Experience from Mobile Application Service Framework in WIP. An Efficient Approach to Improve TCP Performance over Wireless Networks. Extended Hexagonal Constellations as a Means of Multicarrier PAPR Reduction. Software Engineering II. Adaptive Application Centric Management in Meta computing Environments. The Weakest Failure Detector for Solving Election Problems in Asynchronous Distributed Systems. From Lens to Flow Structure. ADML: A Language for Automatic Generation of Migration Plans. Considerations for Using Domain Specific Modeling in the Analysis Phase of Software Development Process. Intelligent Agents II. Organizations and Normative Agents. A Framework for Agent Based Software Development. Application of Agent Technologies in Extended Enterprise Production Planning. Zamin: An Artificial Ecosystem.", "venue": "", "year": 2002.0, "author_names": ["", "Hassan Shafazand", "A Min Tjoa"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "Detecting Photoshopped Faces by .", "session_id": 7499943209077001, "user_id": 3023696593417531, "candidates": [{"corpus_id": 189762170, "title": "Detecting Photoshopped Faces by Scripting Photoshop", "abstract": "Most malicious photo manipulations are created using standard image editing tools, such as Adobe Photoshop. We present a method for detecting one very popular Photoshop manipulation image warping applied to human faces using a model trained entirely using fake images that were automatically generated by scripting Photoshop itself. We show that our model outperforms humans at the task of recognizing manipulated images, can predict the specific location of edits, and in some cases can be used to \"undo\" a manipulation to reconstruct the original, unedited image. We demonstrate that the system can be successfully applied to artist created image manipulations.", "venue": "2019 IEEE/CVF International Conference on Computer Vision (ICCV)", "year": 2019.0, "author_names": ["Sheng-Yu Wang", "Oliver Wang", "Andrew Owens", "Richard Zhang", "Alexei A Efros"], "n_citations": 52, "n_key_citations": 3, "score": 1}, {"corpus_id": 204952582, "title": "Detecting Photoshopped Faces by Scripting Photoshop Supplemental Material", "abstract": "Local predictions Figure 5 shows a random selection of results from our validation dataset of automaticallygenerated manipulations. We conducted an experiment where the PSNR change with respect to scaled versions of the predicted flow field are shown over the validation set (Figure 1) We can see that the highest PSNR gain is where the scale factor is 1.0, which implies that our predicted flow fields do not contain a multiplicative bias, that might result from the regression loss.", "venue": "", "year": 2019.0, "author_names": ["S Wang", "Oliver Wang", "Andrew Owens", "Richard Zhang", "Alexei A Efros"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 44530543, "title": "New Masters of Photoshop", "abstract": "creation During abstract creation, you develop design that is totally your own to bring the theme to life. 
This can mean manipulating the raw material by sinking it into the design so that it becomes fully a part of i t It can also involve putting in your own design touches, with further line work and new drawings. From the raw material to the final design, an unbroken message should appear. Your design shouldn't be such that you've taken the original needs of the client and. through your creative process, totally obscured the message. Professional web design shouldn't be ego driven the design is not about you. it's about the message you're trying to convey for your client. They will (usually) know their message better than you do. and the reality may be that you've forgotten that message in your design! On the Web, people are bombarded with much that makes no sense at all. and the way they rapidly consume the medium means that there is confusion and smudging of style in their eyes. It's not enough just to create a piece of work that's individual in design; it must also be singular in meaning. This is where real strength in design lies. To be able to take your creative impetus and drive it into focused coherence is a true survival skill for the Web. T T H E C E N T R E O F T H E N A T I O N S C R E A T I V E L I F", "venue": "Apress", "year": 2001.0, "author_names": ["Tim Bird", "Michael G Cina", "Gavin Cromhout", "Josh Fallon", "Jens Magnus Karlsson", "Derek Lea", "Adrian Luna", "Catherine McIntyre", "Wojtek Madej", "Jason Mohr", "Eun-Ha Paek", "Andrew Park", "Paul Sinclair", "Colin Smith", "Yoshi Sodeoka", "Peter Stanick", "Johann Terrettaz", "Norma V Toraya", "Michael Young"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 191988083, "title": "Deepfakes, Artificial Intelligence, and Some Kind of Dystopia: The New Faces of Online Post Fact Performance", "abstract": "Abstract:This provocation summarizes recent advances in machine learning and artificial intelligence (AI) as they relate to the emergence of \"deepfakes\" manufactured videos of people saying and doing things they never did. Although audio/visual fakes online are nothing new, recent technological and software advances have enabled cheap, fast generation of practically undetectable video fakery by consumer level users. The provocation traces the appearance and evolution of deepfakes over the winter of 2017 18 from their beginnings as a stunt on amateur porn sharing sites to their spread to other digital media exchange venues. In concert with a range of tech scholars and critics, it lays out some of the more troubling paradigm shifts that deepfakes represent in terms of both AI development and post digital media circulation. It calls for performance scholars generally (and not merely those who focus on digital/online media) to attend to how deepfakes and the ever advancing technology underlying them are transforming assumptions of seeing, representing, verifying, and performing online and beyond.", "venue": "", "year": 2018.0, "author_names": ["John Gould Fletcher"], "n_citations": 27, "n_key_citations": 2, "score": 0}, {"corpus_id": 204939260, "title": "Impact and Detection of Facial Beautification in Face Recognition: An Overview", "abstract": "Facial beautification induced by plastic surgery, cosmetics or retouching has the ability to substantially alter the appearance of face images. Such types of beautification can negatively affect the accuracy of face recognition systems. 
In this work, a conceptual categorisation of beautification is presented, relevant scenarios with respect to face recognition are discussed, and related publications are revisited. Additionally, technical considerations and trade offs of the surveyed methods are summarized along with open issues and challenges in the field. This survey is targeted to provide a comprehensive point of reference for biometric researchers and practitioners working in the field of face recognition, who aim at tackling challenges caused by facial beautification.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Christian Rathgeb", "Antitza Dantcheva", "Christoph Busch"], "n_citations": 20, "n_key_citations": 1, "score": 0}, {"corpus_id": 143062537, "title": "Unbearable Weight: Feminism, Western Culture, and the Body", "abstract": "In this provocative book, Susan Bordo untangles the myths, ideologies, and pathologies of the modern female body. Bordo explores our tortured fascination with food, hunger, desire, and control, and its effects on women's lives.", "venue": "", "year": 1993.0, "author_names": ["Susan Bordo"], "n_citations": 4074, "n_key_citations": 427, "score": 0}, {"corpus_id": 203708212, "title": "Geometry Aware GAN for Face Attribute Transfer", "abstract": "In this paper, the geometry aware GAN, referred to as GAGAN, is proposed to address the issue of face attribute transfer with unpaired data. The source and target images are not aligned and come from different individuals. The key idea is to deform the source image according to the geometry features and generate a high resolution image with the desired attribute. To address the problem of the unpaired training samples, the CycleGAN architecture is applied to form an attribute adding and removing cycle, where the bilateral mappings between the source and target domains are learned. The geometry flow and occlusion mask are learned by the warping sub network to capture the geometric variation between the two domains. In the attribute adding process, the spatial transformer network (STN) warps the source face into the desired pose and shape according to the flow, and the transfer sub network hallucinates new components on the warped image. In the attribute removing process, the recover sub network and the STN reverts the sample back to the source domain. Experiments on the benchmarks CELEBA and CELEBA HQ datasets demonstrate the advantages of our method compared to the baselines, in terms of both quantitative and qualitative evaluation.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Danlan Huang", "Xiaoming Tao", "Jianhua Lu", "Minh N Do"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 202699691, "title": "No One Can Escape: A General Approach to Detect Tampered and Generated Image", "abstract": "Fake or tampered images pose a real problem in today's life. It is easy to unknowingly be drawn to an interesting image that is false. Recently, with the emergence of generative adversarial networks (GANs) it becomes much more easy to generate high quality fake images in a very realistic way. However, the current digital image forensics algorithms mainly focus on the detection of traditional tampered images or need prior knowledge of the network structure of GANs. Hence, verifying the authenticity of an image is very challenging. In this paper, we propose a general method for simultaneously detecting tampered images, and GANs generated images. 
First, we use the Scharr operator to extract the edge information of the image. Then, we converted the edge image information matrix into the gray level co occurrence matrix (GLCM) to scale the image without loss of image information. Finally, GLCM was fed into the deep neural network designed based on depthwise separable convolution for training. Compared with other methods, our model achieves a higher macro average of F1 score of 0.9865. Meanwhile, our method has better performance in detecting tampered images and has strong generalization ability for many GANs models.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Kejun Zhang", "Y Liang", "Jianyi Zhang", "Zhiqiang Wang", "Xinxin Li"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 144013651, "title": "Obama as Anti American: Visual Folklore in Right Wing Forwarded E mails and Construction of Conservative Social Identity", "abstract": "This paper investigates the group building potential of forwarded e mails through a visual analysis of negative images about President Barack Obama. We argue that these e mails are a form of political digital folklore that may contribute to constructing participants' individual and group identities. Images amplify the impact and believability of the messages, especially when linked to familiar cultural references and experiences and may lead to increased political polarization and hostility.", "venue": "", "year": 2012.0, "author_names": ["Margaret Duffy", "Maria Page", "Margaret Young"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 143045391, "title": "The Social Media Reader", "abstract": "With the rise of web 2.0 and social media platforms taking over vast tracts of territory on the internet, the media landscape has shifted drastically in the past 20 years, transforming previously stable relationships between media creators and consumers. The Social Media Reader is the first collection to address the collective transformation with pieces on social media, peer production, copyright politics, and other aspects of contemporary internet culture from all the major thinkers in the field. Culling a broad range and incorporating different styles of scholarship from foundational pieces and published articles to unpublished pieces, journalistic accounts, personal narratives from blogs, and whitepapers, The Social Media Reader promises to be an essential text, with contributions from Lawrence Lessig, Henry Jenkins, Clay Shirky, Tim O'Reilly, Chris Anderson, Yochai Benkler, danah boyd, and Fred von Loehmann, to name a few. It covers a wide ranging topical terrain, much like the internet itself, with particular emphasis on collaboration and sharing, the politics of social media and social networking, Free Culture and copyright politics, and labour and ownership. 
Theorizing new models of collaboration, identity, commerce, copyright, ownership, and labour, these essays outline possibilities for cultural democracy that arise when the formerly passive audience becomes active cultural creators, while warning of the dystopian potential of new forms of surveillance and control.", "venue": "", "year": 2012.0, "author_names": ["Michael Mandiberg"], "n_citations": 86, "n_key_citations": 5, "score": 0}]} -{"query": "Consensus Clustering with Unsupervised Representation Learning", "session_id": 6844975959531170, "user_id": 3829057874398838, "candidates": [{"corpus_id": 222133200, "title": "Consensus Clustering With Unsupervised Representation Learning", "abstract": "Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must either be closer in the representation space, or have a similar cluster assignment. Bootstrap Your Own Latent (BYOL) is one such representation learning algorithm that has achieved state of the art results in self supervised image classification on ImageNet under the linear evaluation protocol. However, the utility of the learnt features of BYOL to perform clustering is not explored. In this work, we study the clustering ability of BYOL and observe that features learnt using BYOL may not be optimal for clustering. We propose a novel consensus clustering based loss function, and train BYOL with the proposed loss in an end to end way that improves the clustering ability and outperforms similar clustering based methods on some popular computer vision datasets.", "venue": "2021 International Joint Conference on Neural Networks (IJCNN)", "year": 2021.0, "author_names": ["Jayanth Reddy Regatti", "Aniket Anand Deshmukh", "Eren Manavoglu", "Urun Dogan"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 233714969, "title": "Representation Learning for Clustering via Building Consensus", "abstract": "In this paper, we focus on deep clustering and unsupervised representation learning for images. Recent advances in deep clustering and unsupervised representation learning are based on the idea that different views of an input image (generated through data augmentation techniques) must be closer in the representation space (exemplar consistency) and/or similar images have a similar cluster assignment (population consistency) We define an additional notion of consistency, consensus consistency, which ensures that representations are learnt to induce similar partitions for variations in the representation space, different clustering algorithms or different initializations of a clustering algorithm. We define a clustering loss by performing variations in the representation space and seamlessly integrate all three consistencies (consensus, exemplar and population) into an end to end learning framework. The proposed algorithm, Consensus Clustering using Unsupervised Representation Learning (ConCURL) improves the clustering performance over state of the art methods on four out of five image datasets. Further, we extend the evaluation procedure for clustering to reflect the challenges in real world clustering tasks, such as clustering performance in the case of distribution shift. 
We also perform a detailed ablation study for a deeper understanding of the algorithm.", "venue": "ArXiv", "year": 2021.0, "author_names": ["Aniket Anand Deshmukh", "Jayanth Reddy Regatti", "Eren Manavoglu", "Urun Dogan"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 3412895, "title": "Unsupervised Multiview Nonnegative Correlated Feature Learning for Data Clustering", "abstract": "Multiview data, which provide complementary information for consensus grouping, are very common in real world applications. However, synthesizing multiple heterogeneous features to learn a comprehensive description of the data samples is challenging. To tackle this problem, many methods explore the correlations among various features across different views by the assumption that all views share the common semantic information. Following this line, in this letter, we propose a new unsupervised multiview nonnegative correlated feature learning (UMCFL) method for data clustering. Different from the existing methods that only focus on projecting features from different views to a shared semantic subspace, our method learns view specific features and captures inter view feature correlations in the latent common subspace simultaneously. By separating the view specific features from the shared feature representation, the effect of the individual information of each view can be removed. Thus, UMCFL can capture flexible feature correlations hidden in multiview data. A new objective function is designed and efficient optimization processes are derived to solve the proposed UMCFL. Extensive experiments on real world multiview datasets demonstrate that the proposed UMCFL method is superior to the state of the art multiview clustering methods.", "venue": "IEEE Signal Processing Letters", "year": 2018.0, "author_names": ["Liang Zhao", "Zhikui Chen", "Zhen Jane Wang"], "n_citations": 16, "n_key_citations": 2, "score": 0}, {"corpus_id": 235335102, "title": "Deep Spectral Representation Learning From Multi View Data", "abstract": "Multi view representation learning (MvRL) aims to learn a consensus representation from diverse sources or domains to facilitate downstream tasks such as clustering, retrieval, and classification. Due to the limited representative capacity of the adopted shallow models, most existing MvRL methods may yield unsatisfactory results, especially when the labels of data are unavailable. To enjoy the representative capacity of deep learning, this paper proposes a novel multi view unsupervised representation learning method, termed as Multi view Laplacian Network (MvLNet) which could be the first deep version of the multi view spectral representation learning method. Note that, such an attempt is nontrivial because simply combining Laplacian embedding (i.e. spectral representation) with neural networks will lead to trivial solutions. To solve this problem, MvLNet enforces an orthogonal constraint and reformulates it as a layer with the help of Cholesky decomposition. The orthogonal layer is stacked on the embedding network so that a common space could be learned for consensus representation. Compared with numerous recent proposed approaches, extensive experiments on seven challenging datasets demonstrate the effectiveness of our method in three multi view tasks including clustering, recognition, and retrieval. 
The source code could be found at www.pengxi.me.", "venue": "IEEE Transactions on Image Processing", "year": 2021.0, "author_names": ["Joey Tianyi Zhou", "Changqing Zhang", "Jiancheng Lv", "Xi Peng"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 55547460, "title": "Variability analysis of the hierarchical clustering algoritms and its implication on consensus clustering", "abstract": "Clustering is one of the most important unsupervised learning tools when no prior knowledge about the data set is available. Clustering algorithms aim to find underlying structure of the data sets taking into account clustering criteria, properties in the data and specific way of data comparison. In the literature many clustering algorithms have been proposed having a common goal which is, given a set of objects, grouping similar objects in the same cluster and dissimilar objects in different clusters. Hierarchical clustering algorithms are of great importance in data analysis providing knowledge about the data structure. Due to the graphical representation of the resultant partitions, through a dendrogram, may give more information than the clustering obtained by non hierarchical clustering algorithms. The use of different clustering methods for the same data set, or the use of the same clustering method but with different initializations (different parameters) can produce different clustering. So several studies have been concerned with validate the resulting clustering analyzing them in terms of stability variability, and also, there has been an increasing interest on the problem of determining a consensus clustering. This work empirically analyzes the clustering variability delivered by hierarchical algorithms, and some consensus clustering techniques are also investigated. By the variability of hierarchical clustering, we select the most suitable consensus clustering technique existing in literature. Results on a range of synthetic and real data sets reveal significant differences of the variability of hierarchical clustering as well as different performances of the consensus clustering techniques.", "venue": "", "year": 2017.0, "author_names": ["Lucia Sousa"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 4770426, "title": "Multi view Clustering with Graph Embedding for Connectome Analysis", "abstract": "Multi view clustering has become a widely studied problem in the area of unsupervised learning. It aims to integrate multiple views by taking advantages of the consensus and complimentary information from multiple views. Most of the existing works in multi view clustering utilize the vector based representation for features in each view. However, in many real world applications, instances are represented by graphs, where those vector based models cannot fully capture the structure of the graphs from each view. To solve this problem, in this paper we propose a Multi view Clustering framework on graph instances with Graph Embedding (MCGE) Specifically, we model the multi view graph data as tensors and apply tensor factorization to learn the multi view graph embeddings, thereby capturing the local structure of graphs. We build an iterative framework by incorporating multi view graph embedding into the multi view clustering task on graph instances, jointly performing multi view clustering and multi view graph embedding simultaneously. 
The multi view clustering results are used for refining the multi view graph embedding, and the updated multi view graph embedding results further improve the multi view clustering. Extensive experiments on two real brain network datasets (i.e. HIV and Bipolar) demonstrate the superior performance of the proposed MCGE approach in multi view connectome analysis for clinical investigation and application.", "venue": "CIKM", "year": 2017.0, "author_names": ["Guixiang Ma", "Lifang He", "Chun-Ta Lu", "Weixiang Shao", "Philip S Yu", "Alex D Leow", "Ann B Ragin"], "n_citations": 34, "n_key_citations": 1, "score": 0}, {"corpus_id": 7763171, "title": "A new consensus function based on dual similarity measurements for clustering ensemble", "abstract": "Clustering ensemble is an unsupervised learning method, which combines a number of partitions in order to produce a better clustering result. In this paper, we have proposed a clustering ensemble algorithm named Dual Similarity Clustering Ensemble (DSCE) The core of our ensemble is a consensus function, consists of three stages. The first stage is to transform the initial clusters into a binary representation, and the second is to measure the similarity between initial clusters and merge the most similar ones. The third is to identify candidate clusters, which contain only certain objects, and calculate their quality. The final clustering result is produced by an iterative process assigning the uncertain objects to a cluster that has a minimum effect on its quality. The number of clusters in the final clustering result converges to a stable value from the generated member, in contrast to most existing methods that require the user to provide the number of clusters in advance. The Experimental results on real datasets indicate that our method is statistically significant better than other state of the art clustering ensemble methods including CO and DICLENS algorithms.", "venue": "2015 IEEE International Conference on Data Science and Advanced Analytics (DSAA)", "year": 2015.0, "author_names": ["Tahani Alqurashi", "Wenjia Wang"], "n_citations": 5, "n_key_citations": 2, "score": 0}, {"corpus_id": 235221161, "title": "Contrastive self supervised clustering of scRNA seq data", "abstract": "Background Single cell RNA sequencing (scRNA seq) has emerged has a main strategy to study transcriptional activity at the cellular level. Clustering analysis is routinely performed on scRNA seq data to explore, recognize or discover underlying cell identities. The high dimensionality of scRNA seq data and its significant sparsity accentuated by frequent dropout events, introducing false zero count observations, make the clustering analysis computationally challenging. Even though multiple scRNA seq clustering techniques have been proposed, there is no consensus on the best performing approach. On a parallel research track, self supervised contrastive learning recently achieved state of the art results on images clustering and, subsequently, image classification. Results We propose contrastive sc a new unsupervised learning method for scRNA seq data that perform cell clustering. The method consists of two consecutive phases: first, an artificial neural network learns an embedding for each cell through a representation training phase. The embedding is then clustered in the second phase with a general clustering algorithm (i.e. 
KMeans or Leiden community detection) The proposed representation training phase is a new adaptation of the self supervised contrastive learning framework, initially proposed for image processing, to scRNA seq data. contrastive sc has been compared with ten state of the art techniques. A broad experimental study has been conducted on both simulated and real world datasets, assessing multiple external and internal clustering performance metrics (i.e. ARI, NMI, Silhouette, Calinski scores) Our experimental analysis shows that constastive sc compares favorably with state of the art methods on both simulated and real world datasets. Conclusion On average, our method identifies well defined clusters in close agreement with ground truth annotations. Our method is computationally efficient, being fast to train and having a limited memory footprint. contrastive sc maintains good performance when only a fraction of input cells is provided and is robust to changes in hyperparameters or network architecture. The decoupling between the creation of the embedding and the clustering phase allows the flexibility to choose a suitable clustering algorithm (i.e. KMeans when the number of expected clusters is known, Leiden otherwise) or to integrate the embedding with other existing techniques.", "venue": "BMC Bioinform.", "year": 2021.0, "author_names": ["Madalina Ciortan", "Matthieu Defrance"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 125893308, "title": "Multi view predictive latent space learning", "abstract": "Abstract In unsupervised circumstances, multi view learning seeks a shared latent representation by taking the consensus and complementary principles into account. However, most existing multi view unsupervised learning approaches do not explicitly lay stress on the predictability of the latent space. In this paper, we propose a novel multi view predictive latent space learning (MVP) model and apply it to multi view clustering and unsupervised dimension reduction. The latent space is forced to be predictive by maximizing the correlation between the latent space and feature space of each view. By learning a multi view graph with adaptive view weight learning, MVP effectively combines the complementary information from multi view data. Experimental results on benchmark datasets show that MVP outperforms the state of the art multi view clustering and unsupervised dimension reduction algorithms.", "venue": "Pattern Recognit. Lett.", "year": 2020.0, "author_names": ["Jirui Yuan", "Ke Gao", "Peng Fei Zhu", "Karen O Egiazarian"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 3524326, "title": "Infinite ensemble clustering", "abstract": "Ensemble clustering aims to fuse several diverse basic partitions into a consensus one, which has been widely recognized as a promising tool to discover novel clusters and deliver robust partitions, while representation learning with deep structure shows appealing performance in unsupervised feature pre treatment. In the literature, it has been empirically found that with the increasing number of basic partitions, ensemble clustering gets better performance and lower variances, yet the best number of basic partitions for a given data set is a pending problem. In light of this, we propose the Infinite Ensemble Clustering (IEC) which incorporates marginalized denoising auto encoder with dropout noises to generate the expectation representation for infinite basic partitions. 
Generally speaking, a set of basic partitions is firstly generated from the data. Then by converting the basic partitions to the 1 of K codings, we link the marginalized denoising auto encoder to the infinite basic partition representation. Finally, we follow the layer wise training procedure and feed the concatenated deep features to K means for final clustering. According to different types of marginalized auto encoders, the linear and non linear versions of IEC are proposed. Extensive experiments on diverse vision data sets with different levels of visual descriptors demonstrate the superior performance of IEC compared to the state of the art ensemble clustering and deep clustering methods. Moreover, we evaluate the performance of IEC in the application of pan omics gene expression analysis application via survival analysis.", "venue": "Data Mining and Knowledge Discovery", "year": 2017.0, "author_names": ["Hongfu Liu", "Ming Shao", "Sheng Li", "Yun Raymond Fu"], "n_citations": 22, "n_key_citations": 0, "score": 0}]} -{"query": "IDENTIFICATION AND BACTERIA AND DIABETIC AND FOOT AND ULCERS AND INFECTION AND NURSING CARE", "session_id": 913171199623698, "user_id": 3886735945840838, "candidates": [{"corpus_id": 226311171, "title": "Point of care testing for bacterial infection in diabetic foot ulcers: a prospective cohort study.", "abstract": "OBJECTIVE To appraise the performance of a new point of care wound infection detection kit in diabetic foot ulcers (DFUs) using clinician opinion as the primary comparator. The proprietary swab based chromatic Glycologic (Glycologic Ltd. UK) detection kit used in this study is designed to detect host response to pathogenic levels of bacteria in wounds. METHOD In high risk podiatry clinics, patients with DFUs were recruited and infection detection kit test results compared with initial clinician opinion. Chi squared tests, principal component analysis (PCA) and multiple regression analysis were performed to determine which variables were possibly associated with infection. The variables considered were patients' wound parameters, wider vascular comorbidity and demographics. RESULTS A total of 136 patients, providing 383 wound swabs, were included in the study. Total agreement in terms of DFU wound assessment for infection between podiatrists' clinical opinion and Glycologic kit test result was observed in 79% of cases (301/383) For 56 of the 349 negative infection detection kit test results (16% podiatrists identified a 'possible' or 'definite' infection. Conversely, in 14 of the 307 cases (4.6% where podiatrists deemed the wound 'not infected' the infection detection kit test showed a colour change. Regression analysis and PCA showed that clinical signs of wound infection, namely erythema, purulence and odour, were all significantly associated with both a positive clinical opinion and infection detection kit test result. However, in the case of the infection detection kit, a patient's number of lesions and vascular comorbidities were also significantly correlated with a positive test result. CONCLUSION A host response to critical pathological levels of bioburden in a wound as detected with the infection detection kit may partly be determined by an individual patient's (vascular) health and therefore be person specific. 
Further research is indicated to determine the relationship between an infection detection kit test result and the microbiological status of the wound.", "venue": "Journal of wound care", "year": 2020.0, "author_names": ["Leon Jonker", "Danielle Smith", "E Van Mark", "Jose Schutter", "Sarah Thornthwaite", "Shona Johnston"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 214603589, "title": "Effect of Nursing Intervention Based on Self Efficacy Theory on Promotion of Foot Self Care and Its Acceptability among Diabetic Elderly people", "abstract": "Background: Diabetes has a huge economic and social impact on the individuals, families and health system as a whole. Diabetic foot is one of the most common complications among diabetic patients. Improper foot care can lead to many complications such as infection, ulcers, gangrene and amputation. Aim of the study was to evaluate the effect of nursing intervention based on self efficacy theory on promotion of foot self care and its acceptability among diabetic elderly people. Subjects and method: Study design: A quasi experimental research design was used. Setting: This study was carried out at 9 geriatric homes on 160 elderly selected by convenience sample. Those elderly divided equally to study and control group. Tools: 1) Structured Interview Schedule, 2) Generalized Self Efficacy Scale (GSE) 3) Knowledge of foot care (KOFC) 4) Diabetic foot self care behavior scale (DFSBS) 5) Foot care outcome expectation (FCOE) and 6) The acceptability profile. Results: The total knowledge, foot care outcome expectation, foot care self efficacy and foot self care behavior score significantly improved immediately and three months post program than the pre program for the study group. Also, there was a significant positive correlation between the total knowledge, expectation, care selfefficacy and behavior scores pre and three months post nursing intervention for both groups. There was a good acceptance of the program by the elderly people. Conclusion: The nursing intervention based on self efficacy theory was effective to promote foot self care among diabetic elderly persons at geriatric homes. Recommendations: Community, geriatric, and medical surgical nurses need to design preventive health programs based on self efficacy for the elderly to reinforce and motivate beliefs about ability to self care for diabetic clients. It is necessary to measure the participant's acceptance of the program to identify and remove obstacles.", "venue": "", "year": 2019.0, "author_names": ["Zainab Gazar Alkotb Alagamy", "Tawheda Mohamed Khalefa El-saidy", "Mervat Amin Sayed", "Asmaa Abouda Abdelhamed Soultan"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 219691093, "title": "The microbiome of diabetic foot ulcers: a comparison of swab and tissue biopsy wound sampling techniques using 16S rRNA gene sequencing", "abstract": "Background Health care professionals need to collect wound samples to identify potential pathogens that contribute to wound infection. Obtaining appropriate samples from diabetic foot ulcers (DFUs) where there is a suspicion of infection is of high importance. Paired swabs and tissue biopsies were collected from DFUs and both sampling techniques were compared using 16S rRNA gene sequencing. 
Results Mean bacterial abundance determined using quantitative polymerase chain reaction (qPCR) was significantly lower in tissue biopsies (p = 0.03). The mean number of reads across all samples was significantly higher in wound swabs (x̄ = 32,014) compared to tissue (x̄ = 15,256, p < 0.001). Tissue biopsies exhibited greater overall diversity of bacteria relative to swabs (Shannon's H diversity, p = 0.009). However, based on a presence/absence analysis of all paired samples, the frequency of occurrence of bacteria from genera of known and potential pathogens was generally higher in wound swabs than tissue biopsies. Multivariate analysis identified significantly different bacterial communities in swabs compared to tissue (p < 0.001). There was minimal correlation between paired wound swabs and tissue biopsies in the number and types of microorganisms. RELATE analysis revealed low concordance between paired DFU swab and tissue biopsy samples (Rho = 0.043, p = 0.34). Conclusions Using 16S rRNA gene sequencing this study identifies the potential for using less invasive swabs to recover high relative abundances of known and potential pathogen genera from DFUs when compared to the gold standard collection method of tissue biopsy.", "venue": "BMC Microbiology", "year": 2020.0, "author_names": ["Judith Travis", "Matthew Malone", "H Hu", "Abdul Baten", "Khalid Johani", "Flavia Huygens", "Karen Vickery", "K Benkendorff"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 86433734, "title": "Aerobic bacteria associated with diabetic foot ulcers and their susceptibility pattern", "abstract": "Background Foot ulcers in diabetes mellitus subjects cause morbidity and mortality and lead to non traumatic amputations worldwide. Knowledge of the microbial burden in the ulcers may improve patients' care and management. Objectives This prospective study was designed to isolate, identify and carry out antibiotic susceptibility testing on bacterial isolates associated with diabetic foot ulcers among subjects in University of Calabar Teaching Hospital. Methods Subjects with diabetic foot ulcer were recruited after obtaining ethical clearance from the Research Committee and informed consent from the subjects. Samples were obtained from subjects using sterile swabs and subjected to microscopy and culture. Isolates were identified using standard bacteriological techniques. Kirby Bauer method was used for susceptibility testing. Results Out of the 50 subjects recruited, 19 (38.1%) were males and 31 (62.0%) were females with mean age of 55.4 ± 10.1 and a minimum age of 40.0 years. All the subjects had grade 4 wounds. The study recorded 100% infection rates among subjects with 70.0% polymicrobial infections. A total of 97 isolates were obtained from the 50 subjects accounting for the average of 1.94 isolates per subject. The most prevalent isolate was Staphylococcus aureus, 32 (32.9%), while the least isolated pathogen was Klebsiella pneumonia, 10 (20.4%). Females harboured more isolates, 61 (62.9%), than males, 36 (37.1%), but infection rates were not significantly associated with gender (χ2 = 15.0, p > 0.05). Erythromycin was the most effective antibiotic agent (65.6%) against S. aureus while gram negative bacteria were more susceptible to augmentin (87.5%) and ciprofloxacin (75.0%). Conclusion The multiple antibiotic resistance of the bacterial isolates calls for the need to monitor resistance. The best practice is to perform antibiotic susceptibility testing before treatment.
Wounds should be evaluated for bacterial agents before treatment is instituted. Information on the mi.uction of morbidity and amputation rates on the patients.", "venue": "Biomedical Dermatology", "year": 2019.0, "author_names": ["Ofonime M Ogba", "Emmanuel Nsan", "Eyam Sunday Eyam"], "n_citations": 12, "n_key_citations": 1, "score": 1}, {"corpus_id": 227336222, "title": "Antimicrobial Susceptibility Testing and Phenotypic Detection of MRSA Isolated from Diabetic Foot Infection", "abstract": "Background Diabetic foot infection (DFI) is a common and costly complication of diabetes that may be caused by various bacteria with multi resistant genes. The aim of this study is to evaluate the efficacy of phenotypic methods for identification of methicillin resistant Staphylococcus aureus (MRSA) with genotypic detection of MRSA related genes. Methods In this cross sectional study, swab samples were collected from patients with DFI from hospitals in Sulaimani/Iraq in April July 2019. All the samples were processed for microbiological assessment and further MRSA phenotypic and genotypic testing. Results A total of 46 swab samples were collected from diabetic foot ulcers of 29 males and 17 females. Most samples (93.5% showed positive growth, with higher proportions of monomicrobial (23; 53.5% than mixed bacterial infections (20; 46.5% and S. aureus as the predominant pathogen. Conventional methods of MRSA detection, such as cefoxitin disc diffusion, can predict methicillin resistance in 45.8% of the cases. Real time/conventional PCR showed that 41.6% of Staphylococcus aureus were positive for the mecA gene, while none of the isolates was positive for PVL. Conclusion Staphylococcus aureus was the predominant pathogen in DFI. Although cefoxitin and oxacillin disc diffusion methods can help in the prediction of MRSA, real time PCR is a reliable method for MRSA detection and confirmation.", "venue": "International journal of general medicine", "year": 2020.0, "author_names": ["Khanda Abdulateef Anwar", "Dlsoz Hussein", "Jamal M Salih"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 222159246, "title": "Recurrence rates suggest delayed identification of plantar ulceration for patients in diabetic foot remission", "abstract": "Introduction Foot ulcers are a common and costly complication of diabetes, and delays in treatment can result in impaired healing, infection, hospitalization, and lower extremity amputation. Research design and methods We aimed to determine whether patterns in plantar diabetic foot ulcer (DFU) recurrence coincided with typical intervals between routine preventive care appointments, which would suggest that delays exist between ulcer development and identification. We completed an analysis of existing data from two multicenter studies in 300 total participants. We analyzed unadjusted counts of DFU binned in weekly intervals and defined 'exam periods' as intervals from 2 to 4 weeks, from 6 to 8 weeks, within 1 week of 3 months and within 1 week of 6 months. We tested whether recurrence rates during exam periods were equivalent to rates outside exam periods. We estimated the delay between DFU development and DFU identification such that the rate of development would have been constant. Results During exam periods, a total of 43 DFUs were identified (43/86=50% despite the fact that these periods represent only 23.5% of follow up in aggregate. 
Accounting for censoring, the annualized incidence during exam periods was 0.68 DFU/year (CI 0.48 to 0.89) in contrast to 0.25 DFU/year (CI 0.18 to 0.32) outside exam periods (incidence ratio=2.8, CI 1.8 to 4.3) We estimated delays between DFU occurrence and identification to average 15.3 days (IQR 7.4 23.7 days) Conclusions These findings have potential implications for practice, particularly related to the value of telehealth and in home monitoring of patients in diabetic foot remission. Additionally, there are implications for study design, which should consider the impact of interval censoring and attempt to control for confounders related to frequency and timing of exams.", "venue": "BMJ open diabetes research care", "year": 2020.0, "author_names": ["Brian J Petersen", "Sicco A Bus", "Gary M Rothenberg", "David R Linders", "Lawrence A Lavery", "David G Armstrong"], "n_citations": 1, "n_key_citations": 0, "score": 1}, {"corpus_id": 184486669, "title": "Profile and Antibiotic Susceptibility of Bacterial Pathogens Associated With Diabetic Foot Ulcers From a Rural Area.", "abstract": "OBJECTIVE This cross sectional study assesses the profile and antibiotic susceptibility of aerobic bacterial pathogens associated with diabetic foot ulcers (DFUs) MATERIALS AND METHODS Two swab samples from 140 DFUs with various Wagner grades were processed for identification using routine culture methods and antimicrobial susceptibility by Kirby Bauer disc diffusion method. RESULTS A total of 125 (89.29% samples were found to be positive for bacteria on culture. A higher incidence of positive culture (94.32% was found in individuals with a blood sugar level 200 mg/dL. The highest number of culture positive cases was observed in Wagner grade 2 DFUs (45% Overall infection was monomicrobial in 83.20% (104) and polymicrobial in 16.80% (21) of samples. Staphylococcus aureus (21.09% and Pseudomonas aeruginosa (19.05% were the most common isolates. Linezolid (100% and imipenem (75.70% were the most effective antimicrobial agents against gram positive and gram negative isolates, respectively. CONCLUSIONS The results show an overall increase in bacterial resistance to antimicrobial agents and emphasize the importance of an antimicrobial susceptibility pattern in the selection of appropriate antibiotic(s) to institute the rational antibiotic therapy.", "venue": "Wounds a compendium of clinical research and practice", "year": 2019.0, "author_names": ["Kalpana Jaju", "Asha Pichare", "Milind Davane", "Basavraj Nagoba"], "n_citations": 6, "n_key_citations": 0, "score": 1}, {"corpus_id": 218609528, "title": "MON 626 Frequency and Associated Factors with Multidrug Resistant Organism Infection in Diabetic Foot Ulcers in a Peruvian Public Hospital", "abstract": "Abstract Objective: To determine the frequency and associated factors with multidrug resistant organism (MDRO) infection among patients with diabetic foot ulcers in a Peruvian Public Hospital. Materials and methods. Cross sectional survey was conducted from January 2017 December 2018 at National Hospital in Lima Peru. Ulcers with clinical signs of infection (erythema, edema, pain, purulent exudate) according Infectious Diseases Society of America clinical practice guideline were included1. Wounds with only skin involvement were excluded. On admission, specimens for culture were obtained after cleansing and debriding of the wound. Samples were promptly sent to the microbiology laboratory for culture using appropriate transport media. 
Bacterial identification and antibiotic susceptibility testing were performed using the VITEK(r) 2 automated system (BioMerieux Laboratory, Argentina) Multidrug resistant organisms were identified according to the recommendations of International Expert Proposal2. Prevalence ratios derived from bivariate analysis are given with their 95% CI, which was performed to study factors associated with the presence of multidrug resistant bacteria; and a multivariate analysis with a lineal model to associated variables found in the bivariate analysis. This study has the approval of the Research Ethics Committee of the Maria Auxiliadora Hospital. Results Among 153 selected subjects, 75% were male, with an average age of 59 yo, 70% had =10 years of diabetes duration and only 16% had HbA1C <7% A frequency of 85% of patients with MDRO infection was found and was associated with minor amputation RP 1.18 (95% CI 1.01 1.44) and with hospitalization time of 28 days RP 1.21 (95% CI 1.03 1.30) Conclusion. 6 of 7 patients have MDRO infection among patients with diabetic foot ulcers and are associated with the occurrence of minor amputation and hospitalization time 28 days. References 1. Lipsky BA, et al. 2012 Infectious Diseases Society of America clinical practice guideline for the diagnosis and treatment of diabetic foot infections. Clin Infect Dis. 2012;54(12):e132 73. 2. Magiorakos AP, et al. Multidrug resistant, extensively drug resistant and pandrugresistant bacteria: an international expert proposal for interim standard definitions for acquired resistance. Clin Microbiol Infect. 2012;18(3):268 81.", "venue": "Journal of the Endocrine Society", "year": 2020.0, "author_names": ["Marlon Yovera-Aldana", "Liset Paola Sifuentes", "Delia Cruz-Estacio", "Diana Consuelo Flores", "Lucy Nelly Damas-Casani"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 199026229, "title": "Clinical Profile and Outcome of Diabetic Foot in a Tertiary Care Centre", "abstract": "Background This study attempted to determine the disease burden in terms of clinical profile and outcome of diabetic foot admissions at a tertiary care hospital in a developing country. Method This study was done in Department of Surgery at Shri Guru Ram Rai Institute of Medical and Health Sciences and Shri Mahant Indiresh Hospital, Dehradun. Duration of the study was 1 year. The demographic characteristic, type of foot lesion, etiology, isolated micro organism, treatment, and outcome were reviewed. Results A total of 49 patient were diagnosed with Diabetic Foot. All patients had type 2 diabetes with no gender predominance. Majority of the patient were above age of 40 years and diabetes control was very poor. Before admission, the ulcers had already developed for 4.7 2.9 weeks; however, the majority of patients were unaware of the preceding causes. More than 70% of ulcers were in Wagner gradeg3 with infection event in nearly all patients. The most common isolates from culture were Gram negative bacteria. A total of 8 patient required lower extremity amputations (LEAs) at various level of the foot were carried out, including major LEA. 
Conclusions Diabetic foot problems constitute a source of morbidity, a reason for LEA surgery as well as being a cause of death among patients with diabetes mellitus", "venue": "", "year": 2019.0, "author_names": ["Abhishek Gupta", "Subash Chandra Sharma", "Janmejai Prasad Sharma"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 219798324, "title": "Nursing care of patients with early diabetic foot ulcer treated with mupirocin ointment combined with ultra laser", "abstract": "Objective To study and analyze the effect of nursing care for patients early diabetic foot ulcer treated with mupirocin ointment and ultra laser. Methods 30 patients with early diabetic foot ulcer admitted to our hospital from June, 2017 to June, 2018 were included as research objects, and were divided into an intervention group and a control group by random lottery, 15 cases for each group. The control group was given iodine volts dressing change nursing, and intervention group was given mopiroxine ointment and ultra laser treatment and nursing care. The wound healing effect, frequency of dressing change, healing time, and pain degree before and after intervention were compared between the two groups. In addition, logistic method was used to analyze the related influencing factors of the patients' prognosis and rehabilitation. Results The total effective rate of wound healing was higher in the intervention group than that in the control group (93.33% vs. 73.00% but the difference was not statistically significant (P>0.05) The frequency of dressing change and healing time were lower in the intervention group than in the control group (45.12+ 2.49) times vs. (52.04+ 2.77) times and (39.31+ 2.20) d vs. (56.23+ 2.84) d, both P<0.05] The VAS scores of the intervention group and control group were lower after than before the intervention (both P<0.05) and the VAS score was lower in the intervention group than in the control group after the intervention (2.07+ 0.34) vs. (3.41+ 0.51) P<0.05] Logistic regression analysis showed that disease course, ulcer area, foot infection, non use of mopirocin ointment, and no ultra laser intervention were all independent risk factors for poor prognosis in the patients (all P<0.05) Conclusion Mupirocin ointment combined with ultra laser can significantly improve the therapeutic effect of patients with early diabetic foot ulcer, promote their early recovery, and reduce their pain. In addition, disease course, ulcer area, foot infection, non use of mopirocin ointment, and no ultra laser intervention can all adversely affect the patients' prognosis to some extent. Key words: Early diabetic foot ulcer; Mupirocin ointment; Ultra laser; Nursing measures; Wound healing", "venue": "", "year": 2020.0, "author_names": ["Suzhen Mai", "Hui-Fang Yang"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "\"relationship between locus of control and academic achievement of students pdf\"", "session_id": 6909738458255563, "user_id": 281906809656343, "candidates": [{"corpus_id": 212671782, "title": "The Relationship between Locus of Control and Student's University Academic Achievement in Case of Wolaita Sodo University", "abstract": "This study was designed to investigate the relationship between Locus of Control (both internal and external) and academic achievement. 
Emphasis was put on trying to establish the relationship between internal locus of control, external locus of control and academic achievement of graduating class university students at Wolaita Sodo University. The study employed the use of correlation design to establish the nature of the relationships. The validity and reliability of research instruments was established and data was collected from 313 respondents selected from three colleges and two schools in the university by using the simple random sampling method. To analyze the data, the analysis of variance (ANOVA) T Test, and Pearson product moment correlation statistical tools were used with the aim of establishing the difference and relationship between students' locus of control and their academic achievement of university graduating class students. Findings revealed the existence of a significant difference in academic performance in students of different age, significant difference in academic achievement of students from different gender groups. The findings also revealed that there was a significant negative relationship between students' external locus of control and academic achievement. There was significant positive relationship between students' internal locus of control and their academic achievement. On the basis of the findings, the researcher made the following conclusions; Locus of control (internal external) is the most important issue that positively and negatively affects students' academic success and need special attention from university stakeholders. Counseling and psychosocial support, advice and overall support in confidence building skill and life skill training do count on motivating students to manage, resist negative self evaluation. The researcher also confirmed the ecological and social learning theoretical model. On the basis of the conclusions made, the researcher recommended that; Wolaita Sodo University maintains its instruction by considering the influence of Locus of control (internal external) on academic achievement of students.", "venue": "", "year": 2019.0, "author_names": ["Bereket Merkine", "Zebdewos Zekarias"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 209483293, "title": "Relation Between Locus of Control and Academic Achievement of Nursing Students at Damanhour University", "abstract": "Background: Academic achievement is considered the alarm button that pressed todays in each academic institution. A competent nursing workforce is important for an effective healthcare which intensified by the development of student's internal locus of control (LOC Aims: this study aimed to 1 assess the locus of control levels among the nursing students at Damanhour University 2assess the relation between locus of control and academic achievement among the nursing students, 3determine the effects of Internal LOC Development Program on the student's academic achievement level. Design: A quasi experimental study design was adopted to carry out this study. Setting: the study was conducted at Faculty of Nursing Damanhour University. Subjects: The 4 th year students were selected.
The total number of the students was 250 divided into two groups (control group and experimental group) Tools: data was collected using two tools; Tool (I) socio demographic characteristics and health status sheet for the students, tool (II) entitled trice academic LOC scale. Results: The findings of the present study revealed that 75.2% of the experimental group has an internal locus of control pre the training program application, while it increased to 79.2% post the program. There is a significant relation between LOC and the academic achievement among the experimental group. Recommendations: Academic administrators should pay attention to help students to understand how their perceptions about self may affect their academic achievement. Develop policies regarding coaching, mentoring and counseling undergraduates. Develop mind coaching program to increase the internal LOC. Further researches would be extremely valuable.", "venue": "", "year": 2018.0, "author_names": ["A A Mohamed", "Ahlam Mohammed", "- HendAbo", "El-Olemy Ahmed"], "n_citations": 3, "n_key_citations": 1, "score": 1}, {"corpus_id": 55715547, "title": "A Study to Investigate the Relationship between Locus of Control and Academic Achievement of Students.", "abstract": "Motivation is regarded as the alpha and omega of learning .It is the heart of teaching learning process. Motivation is defined as an internal state that arouses, directs, and maintains the behavior over time. Thus motivation is the pivotal component of learning and locus of control which is one of the important factors it stems from. Locus of control is a belief about the primary source of a person's behavioreither internal (within a person) or external (with in a person's physical and social environment) The main aim of this research was to measure the locus of control of students in order to determine the degree of their externality or internality of locus of control. And to find out the gender difference in locus of control orientation at College and University levels to relate the locus of control with academic achievement. Sample of study consisted of 466 students, out of which 205 were boys and 261 were girls. This sample was chosen from two female college and one male college located in Rawalpindi city and one Co education University Institute located in Islamabad city. The college students were mostly of 16 and 17 years age group, where as University students were in the 20 and 21 year age group. For the purpose of measuring locus of control questionnaire was used with a few modifications. Academic achievement was measured by the marks obtained by the sample in their recently held examination at their institutions. The obtained data were analyzed and interpreted using statistical tools such as: Mean Standard Deviation, t test and correlation coefficient. The results show that the majority of students were found to be more internal than external in their locus of control. This result is enlightened with others studies that, locus of control and academic achievement were related positively to each other. 
Boys were found to be more internal than girls at college level however, no gender differences in locus of control were found at the University level.", "venue": "", "year": 2014.0, "author_names": ["Aijaz Ahmed Gujjar", "Rukhma Aijaz"], "n_citations": 4, "n_key_citations": 1, "score": 0}, {"corpus_id": 20666998, "title": "The relationship between locus of control and academic achievement and gender in a selected higher education institution in Jordan", "abstract": "This paper examined the relationship between Locus of Control and academic achievement, and discussed the possibility of gender differences. Past research indicated a positive correlation relationship between internal scores and high academic achievement. Overall, the research regarding gender found males to be more internal and external than females. The sample of this study included 204 first year Yarmouk University students, from four different departments (English. Accounting, Chemistry and Engineering) The multidimensionalmulti attributional causality scales (MMCS) was administered to the respondents of the study. The MMCS were then correlated with academic achievement and gender. The statistical analysis evidenced a correlation between Locus of Control and academic achievement, The internal locus of control were high and positively correlated with academic achievement among the male students (r=.362, p=.000) and positively correlated with external locus of control (r= .208, p=.035) However only the internal locus of control was positively correlated with academic achievement among female students (r=.274, p=.006) and negatively correlated with external locus of control (r=.002, p=.982) The findings showed that males were more internal and external then females. Overall, this study supported the findings of past research supporting a positive relationship between Locus of Control and academic achievement. Key Words: Locus of Control, Academic Achievement, Gender", "venue": "", "year": 2009.0, "author_names": ["Rohaty Mohd Majzub", "Marwan Zaid Bataineh", "Noriah Mohd Ishak", "Saemah Rahman"], "n_citations": 18, "n_key_citations": 1, "score": 0}, {"corpus_id": 55656520, "title": "Examining the Relationship between Self Efficacy, Locus of Control and Academic Achievement of Students Girls and Boys in Secondary School of Rustam City", "abstract": "The purpose of this study is to determine the relationship between roles of self efficacy and locus of control in academic achievement. The sample consisted of 305 students girls and boys from 3 rd grade of secondary school in Rustam city who were selected by random sampling method step by step. Participants completed the self efficacy and locus of control questionnaire. To assess the variables under study in this research, the self efficacy questionnaire of Pintrich and de Groot was used in order to measure the control locus in control locus questionnaire of Strickland and Nowiki. Also the third year GPA was used as academic achievement. For statistical analysis of data, Pearson correlation and multiple variable regression methods were used. The results of research showed that there is a positive and significant relationship between self efficacy variables and academic achievement, while there is a negative and significant relationship between locus of control and academic achievement. Also the results of regression analysis showed that among the predictive variables, self efficacy has a major role in explaining the educational attainment. 
Zahra Razmi Far 1", "venue": "", "year": 2014.0, "author_names": ["Zahra Razmefar"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 144804491, "title": "A causal analysis of the relationship between locus of control and academic achievement in first grade", "abstract": "Abstract Eighty nine middle and lower socioeconomic status (SES) first graders from 15 classrooms were seen individually the first week of school and again 7 months later to obtain scores on an achievement test and a measure of locus of control. Results revealed that middle SES children began first grade with a higher average score on the achievement test and a more internal locus of control than lower SES children. While middle SES children gained more in achievement during first grade, the change toward a more internal locus of control was more pronounced for lower SES children. Results of both cross lagged panel correlational and path analyses suggested that an internal locus of control contributed to academic achievement.", "venue": "", "year": 1980.0, "author_names": ["Deborah J Stipek"], "n_citations": 41, "n_key_citations": 3, "score": 0}, {"corpus_id": 146722754, "title": "Importance of Relationship Between Locus of Control and Academic Achievement of Senior Secondary Schools", "abstract": "The present paper explain that importance ofrelationship between Locus ofControl's areas and academic achievement because the world is becoming more and more competitive. So, quality of performance has become the key factor for personal progress. The desire for high level of achievement puts a lot of pressure on students, teachers, school, in general and the educational system itself. In fact, it appears as the whole system of education revolves round the academic achievement of students. The importance of academic achievement has raised several questions for us. Like, what factors promote achievement in student? How far do the different factors contribute towards academic achievement? So, the result of this paper revealed that three areas ofLocusofControl havegreat contribution to academic achievement of Sr. 
Secondary School students.", "venue": "", "year": 2009.0, "author_names": ["Meena Mehta"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 146332035, "title": "THE RELATIONSHIP BETWEEN LOCUS OF CONTROL AND ACADEMIC ACHIEVEMENT AMONG AT RISK STUDENTS", "abstract": "", "venue": "", "year": 2003.0, "author_names": ["Marthina Jacoba Kirchner"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 55166275, "title": "The relationship between locus of control and academic achievement and the role of gender", "abstract": "Smriti Goyal The Relationship Between Locus of Control and Academic Achievement and the Role of Gender 2000", "venue": "", "year": 2000.0, "author_names": ["Smriti Goyal"], "n_citations": 3, "n_key_citations": 1, "score": 0}, {"corpus_id": 142900262, "title": "The Relationship between a High School Principal's Locus of Control and the Academic Achievement of Students", "abstract": "", "venue": "", "year": 1992.0, "author_names": ["J Ciccone"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "stakeholders analysis chart in automotive industry", "session_id": 6676527607651324, "user_id": 5176882966041830, "candidates": [{"corpus_id": 209475216, "title": "FEDERAL UNIVERSITY OF SANTA CATARINA TECHNOLOGICAL CENTER OF JOINVILLE AUTOMOTIVE ENGINEERING JESSICA KARINE PROCHNOW Internal Stakeholders' Analysis of Industry 4.0 Working Network within a Multinational Automotive Supplier Joinville 2019 JESSICA KARINE PROCHNOW Internal Stakeholders' Analysis of", "abstract": "Due to technological advances in digitization and connected manufacturing processes, multinational companies have faced challenges imposed by Industry 4.0 in order to improve their productivity. The present work aims to approach a case study involving internal stakeholders analysis for an Industry 4.0 working network of a multinational automotive supplier. From data collection and surveys, it is intended to develop stakeholders' power and influence qualitative analysis, which are based on their main interests in the projects managed by the concerned department. In order to bring a better understanding of the identified stakeholders in regards to the development of the strategic projects, they are initially divided in two categories related to the organizational and project levels. Besides, based on the obtained internal stakeholders classification, it is proposed a communication concept in order to fulfill stakeholders expectations, once these represent, according to their level of power and influence, fundamental factors in a project development. Key Words: Stakeholders, Industry 4.0, multinational automotive supplier.", "venue": "", "year": 2019.0, "author_names": ["Jessica Karine Prochnow", "Modesto Hurtado Ferrer", "Janaina Renata Garcia", "Marcos A Rabelo", "Elisete Santos da Silva Zagheni"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213504172, "title": "Internal Stakeholders' Analysis of Industry 4.0 Working Network within a Multinational Automotive Supplier", "abstract": "", "venue": "", "year": 2019.0, "author_names": ["Jessica Karine Prochnow"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 208193788, "title": "THE ANALYSIS AND IMPROVEMENT OF PRODUCT QUALITY USING SELECTED METHODS AND TOOLS IN AUTOMOTIVE INDUSTRY ENTERPRISE", "abstract": "The work included a qualitative analysis of a product installed in passenger cars of a selected make and model. 
The product is an airbag module manufactured for one type of passenger car. This product is produced by an international production company with its plant in the northern part of the Silesian province. The initial analysis covered six calendar months, of which the month chosen for further analysis was the one in which the percentage of nonconforming products in the total production exceeded the assumed acceptable value. The analysis used four basic quality management instruments: the Pareto chart, the FMEA method, Ishikawa chart and the 5 Whys technique. After the analysis, improvement actions were also proposed.", "venue": "", "year": 2019.0, "author_names": ["Edyta Kardas", "Pavlina Pustejovska"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 212896858, "title": "Analysis and Improvement of an Assembly Line in the Automotive Industry", "abstract": "Abstract In a market as competitive as the automotive industry, it becomes increasingly important for the organizations to adopt a culture of continuous improvement, which should cross over all stakeholders in the organization. The continuous improvement of the processes, the increase in efficiency, and the elimination of waste, leads to a considerable increase in market competitiveness, not only economically, but also technologically. The focus of this work was the optimization of a production line, with the main goal being the increase of its productive capacity, so that it can comply with customer's requests. Thus, it was defined as a goal of this project: to increase the productive capacity to 1800 parts/day. The methodologies used were based on several continuous improvements and lean techniques, such as line balancing, standard work, visual management and 5S. The work developed allowed an increase of 37% of the production line capacity and an increase of 22% in the OEE of the production line.", "venue": "", "year": 2019.0, "author_names": ["Dias Paulo", "Francisco J G Silva", "R D S G Campilho", "Luis Pinto Ferreira", "T Santos"], "n_citations": 7, "n_key_citations": 0, "score": 0}, {"corpus_id": 158591790, "title": "Multi criteria decision analysis framework for sustainable manufacturing in automotive industry", "abstract": "Abstract Increase in societal demand for sustainability has resulted in attention to sustainable manufacturing. Although an attractive goal to most, executives face difficulties in implementing; sustainable manufacturing due to the necessity of balancing social, economic and environmental; outcomes associated with the implementation of different manufacturing alternatives and; processes. This is especially true in highly competitive consumer oriented industries, such as the automotive industry. The literature review presented herein indicated that most of the available sustainability frameworks are qualitative in nature and limited to discussion of sustainable materials and processes, while tradeoffs between the environmental, social and economic domains of sustainability are rarely examined. To overcome such shortcomings, we develop a quantitative framework for sustainable manufacturing and illustrate its application for the automotive industry. 
Multi criteria decision analysis (MCDA) is utilized to combine the values of industry executives and decision makers with performance criteria of different car manufacturing materials (ferrous metals, aluminum, plastics, organic composites, and synthetic composites) Our results show how material alternatives in manufacturing can be quantitatively selected based on sustainability objectives. Additionally, we illustrate how sensitivity analyses are used to assess the robustness of the resulting alternative selection. Although this framework may be useful for decision makers in its current form, future applications might improve the model by choosing different or more specific alternatives, using objective performance scores supported by industry research, or by investigating a more diverse set of weight distributions representing dissimilar stakeholder values.", "venue": "", "year": 2018.0, "author_names": ["Stella Stoycheva", "Dayton Marchese", "Cameron Paul", "Sara Padoan", "Abdul-Salam Juhmani", "Igor Linkov"], "n_citations": 63, "n_key_citations": 0, "score": 0}, {"corpus_id": 201234686, "title": "Green supplier selection using multi criterion decision making under fuzzy environment: A case study in automotive industry", "abstract": "Abstract In the past few decades, it has been widely observed that environmental awareness is continuously increasing among people, stakeholders, and governments. However, rigorous environmental rules and policies pushed organizations to accept affirmative changes like green supply chain management practices in their processes of the supply chain. Selection of green supplier is a tedious task and comprises a lot of challenges starting from evaluation to their final selection, which is experienced by supplier management professionals. The development and implementation of practical decision making tools that seek to address these challenges are rapidly evolving. In the present work, the evaluation of a set of suppliers is primarily based on both conventional and environmental criteria. This work proposes a multi criteria decision making (MCDM) based framework that is used to evaluate green supplier selection by using an integrated fuzzy Analytical Hierarchy Process (AHP) with the other three techniques namely MABAC \"Multi Attributive Border Approximation Area Comparison\" WASPAS \"Weighted Aggregated Sum Product Assessment\" and TOPSIS \"Technique for order preference by similarity to ideal Solution\" Initially, six green supplier selection environmental criteria (Environmental management system, green image, staff environment training, eco design, pollution control, and resource consumption) and three conventional criteria (price, quality and service level) have been identified through literature review and expert's opinions to employ MCDM approach. A real world case study of the automotive industry in India is deliberated to exhibit the proposed framework applicability. From AHP findings, 'Environment management system' 'Pollution control' 'Quality' and 'Green image' have been ranked as the topmost four green supplier selection criteria. Besides, the consistency test was performed to check the uniformity of the expert's input whereas the 'robustness' of the approach was tested by performing sensitivity analysis. The results illustrate that the applied fuzzy hybrid methods reach common green supplier rankings. Moreover, out of the four green supplier's alternatives, supplier number 'one' got the highest rank. 
This shows that the applied models are robust in nature. Further, this study relinquishes a single platform for the selection of green supplier under fuzzy environment. The applied methodology and its analysis will provide insight to decision makers of supplier selection. It may aid decision makers and the procurement department not only to differentiate the significant green supplier selection criteria but also to assess the most efficient green supplier in the supply chain in the global market.", "venue": "Comput. Ind. Eng.", "year": 2019.0, "author_names": ["Shubham Gupta", "Umang Soni", "Girish Kumar"], "n_citations": 52, "n_key_citations": 1, "score": 0}, {"corpus_id": 169540698, "title": "Gap analysis study on the compliance of automotive standard IATF 16949 based on internal quality audit score in automotive industry", "abstract": "In August 2016, IATF issued the new requirement of quality management system IATF 16949. With these new requirements, the automotive industry that is willing to migrate to the new version will face some challenges. The main challenge comes from the need to re map the business processes that are needed for the internal audit. The other challenges are the readiness of the quality of the internal auditors, measure gaps, and predict the success of the certification audits. This research is based on a case study at one of the automotive manufacturing company. A framework for measuring the gap analysis of the compliance based on the automotive standards requirements (IATF 16949: 2016) through an internal quality audit score has presented in this research. The analysis has done by using a turtle diagram for risk analysis and follows by a survey on an internal quality auditor's perception. Based on the analysis, it can be determined which processes need to be audited. The research has found that there are 32 processes in the company which is needed to be an audit. The survey has indicated that internal quality auditor is ready with the new requirement of the quality management system. The internal audit's result with a weighted score shows the level that can be achieved by the company to fulfill the standard IATF 16949:2016. The gap that has shown in the spider chart depicts that an automotive manufacturing company will be able to passe the certification audit.", "venue": "", "year": 2018.0, "author_names": ["Tulus Puji Ruswanto", "D S Saroso"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 116827776, "title": "An Analysis of the state of the Project Management Maturity in Automotive Industry", "abstract": "Extensive research about the utility of Project Management has already been accomplished in various sorts of industries such as construction, engineering, and information technology, and these larger industry sectors have been able to increase the value of organizational processes with the application of formalized project management methods. These organizations have also seen that improved project success can result in fewer business disruptions, allowing them to concentrate on their primary objectives. Not only are organizations benefiting from using project management for building products and delivering solutions for external clients, but internally the value of project management for the control of project delivery and execution has been improved. In Auto industry, though it started very late, but a lot of work has already been done in Japanese, North American and European car industries. 
Toyota, Chrysler and Renault have defined history in creating specialized PM knowledge. However not much work has been observed in OEMs that is constituted majorly by small and medium sized auto industry. Therefore, it is reasonably important to track the level of maturity and direction of Project Management implementation in the said sector. Survey research methodology was selected as the research methodology and a survey questionnaire was developed after consulting industry, university and a PM standardization body. Finalized Questionnaire was sent to the companies and responses were analyzed after statistical representation of data received. Data analysis revealed that implementation of Project Management practices in automotive sector is in the mid way and is tactical in nature. Car makers are happy by the results produced by PM techniques in implementation of various other management methodologies and are satisfied with the outcome whenever and in whatever capacity PM was used. Project management also helped the organizations to improve their repute in the market and in attaining competitive edge over competitors. Automotive companies have shown their resolve to further strengthen PM practices in future and convinced to share their experience and knowledge about PM implementation with rest of the industry stakeholders to help creation of new knowledge.", "venue": "", "year": 2018.0, "author_names": ["Muhammad Imran"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 199723382, "title": "How can the lack of impactful change in electrification of the German automotive industry be explained", "abstract": "The electrification process poses a challenge to the German automotive industry as well as the state. During this thesis, the process of electrification of traffic, as well as the challenges, which this transition brings for the involved stakeholders, are presented. By qualitatively analysing the behaviour of the involved parties, the different strategies pursued during this technological transition are shown. The strategy of the German automotive industry is elaborated by thoroughly analysing secondary data in forms of position papers of the automotive industry. In order to parse the strategy of the EU and German policymakers, primary data in forms of policy papers is analysed. Those analyses show the state as well as the automotive sector being in a complicated situation regarding the electrification of traffic. The analysis shows that European policymakers had little innovative policy input as well as a non cooperating automotive industry, concerning climate protection. Contrasted to Norway, little innovative policy approaches were made, which leads to the current low rates of electrification.", "venue": "", "year": 2019.0, "author_names": ["Junes Koohestanian"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 169588559, "title": "A new maturity model for project risk management in the automotive industry", "abstract": "Purpose: The purpose of this article is to present a new maturity model for the assessment and on going management of project risk management capability in the automotive industry. Design/methodology/approach: The research design is based on a multi project case study analysis in a major German automotive company. The approach is qualitative and inductive, using 12 in depth interviews with major stakeholders in the project management function in the company to provide data for the construction of the initial maturity model. 
This model is then verified and refined via an on line survey and three follow up interviews. Findings: The findings provide material for the construction of a new maturity model that can be used for the assessment of project risk management capability and as a tool for on going monitoring and improvement. The model is structured around four dimensions of risk management identification, assessment, allocation and appetite and has four maturity stages rudimentary, intermediate, standardised and corporate. Research limitations/implications: The model is based on a detailed analysis of in depth interview material in a specific industry sector. It can be used as a basis for similar research in other industries. Originality/value: The model adds to existing risk management maturity models and is unique in being specific to the automotive industry. It can be used by risk and project managers, and can also be adapted to other industry sectors. Keywords risk management; project risk management; centricity; risk identification and assessment; risk ownership and appetite; maturity model; centricity. Paper type: Research paper.", "venue": "", "year": 2018.0, "author_names": ["Jose Irizar", "Martin George Wynn"], "n_citations": 2, "n_key_citations": 0, "score": 0}]} -{"query": "Inferring lockstep behavior from connectivity pattern in large graphs", "session_id": 986900707413979, "user_id": 4753957018268050, "candidates": [{"corpus_id": 10069240, "title": "Inferring lockstep behavior from connectivity pattern in large graphs", "abstract": "Given multimillion node graphs such as \"who follows whom\" \"patent cites patent\" \"user likes page\" and \"actor/director makes movie\" networks, how can we find unexpected behaviors? When companies operate on the graphs with monetary incentives to sell Twitter \"Followers\" and Facebook page \"Likes\" the graphs show strange connectivity patterns. In this paper, we study a complete graph from a large Twitter style social network, spanning up to 3.33 billion edges. We report strange deviations from typical patterns like smooth degree distributions. We find that such deviations are often due to \"lockstep behavior\" that large groups of followers connect to the same groups of followees. Our first contribution is that we study strange patterns on the adjacency matrix and in the spectral subspaces with respect to several flavors of lockstep. We discover that (a) the lockstep behaviors on the graph shape dense \"block\" in its adjacency matrix and creates \"rays\" in spectral subspaces, and (b) partially overlapping of the behaviors shape \"staircase\" in its adjacency matrix and creates \"pearls\" in spectral subspaces. The second contribution is that we provide a fast algorithm, using the discovery as a guide for practitioners, to detect users who offer the lockstep behaviors in undirected/directed/bipartite graphs. We carry out extensive experiments on both synthetic and real datasets, as well as public datasets from IMDb and US Patent. 
The results demonstrate the scalability and effectiveness of our proposed algorithm.", "venue": "Knowledge and Information Systems", "year": 2015.0, "author_names": ["Meng Jiang", "Peng Cui", "Alex Beutel", "Christos Faloutsos", "Shiqiang Yang"], "n_citations": 39, "n_key_citations": 5, "score": 2}, {"corpus_id": 16690077, "title": "Inferring Strange Behavior from Connectivity Pattern in Social Networks", "abstract": "Given a multimillion node social network, how can we summarize connectivity pattern from the data, and how can we find unexpected user behavior? In this paper we study a complete graph from a large who follows whom network and spot lockstep behavior that large groups of followers connect to the same groups of followees. Our first contribution is that we study strange patterns on the adjacency matrix and in the spectral subspaces with respect to several flavors of lockstep. We discover that (a) the lockstep behavior on the graph shapes dense \"block\" in its adjacency matrix and creates \"ray\" in spectral subspaces, and (b) partially overlapping of the behavior shapes \"staircase\" in the matrix and creates \"pearl\" in the subspaces. The second contribution is that we provide a fast algorithm, using the discovery as a guide for practitioners, to detect users who offer the lockstep behavior. We demonstrate that our approach is effective on both synthetic and real data.", "venue": "PAKDD", "year": 2014.0, "author_names": ["Meng Jiang", "Peng Cui", "Alex Beutel", "Christos Faloutsos", "Shiqiang Yang"], "n_citations": 71, "n_key_citations": 4, "score": 0}, {"corpus_id": 1985429, "title": "CatchSync: catching synchronized behavior in large directed graphs", "abstract": "Given a directed graph of millions of nodes, how can we automatically spot anomalous, suspicious nodes, judging only from their connectivity patterns? Suspicious graph patterns show up in many applications, from Twitter users who buy fake followers, manipulating the social network, to botnet members performing distributed denial of service attacks, disturbing the network traffic graph. We propose a fast and effective method, CatchSync, which exploits two of the tell tale signs left in graphs by fraudsters: (a) synchronized behavior: suspicious nodes have extremely similar behavior pattern, because they are often required to perform some task together (such as follow the same user) and (b) rare behavior: their connectivity patterns are very different from the majority. We introduce novel measures to quantify both concepts \"synchronicity\" and \"normality\" and we propose a parameter free algorithm that works on the resulting synchronicity normality plots. Thanks to careful design, CatchSync has the following desirable properties: (a) it is scalable to large datasets, being linear on the graph size; (b) it is parameter free; and (c) it is side information oblivious: it can operate using only the topology, without needing labeled data, nor timing information, etc. while still capable of using side information, if available. 
We applied CatchSync on two large, real datasets 1 billion edge Twitter social graph and 3 billion edge Tencent Weibo social graph, and several synthetic ones; CatchSync consistently outperforms existing competitors, both in detection accuracy by 36% on Twitter and 20% on Tencent Weibo, as well as in speed.", "venue": "KDD", "year": 2014.0, "author_names": ["Meng Jiang", "Peng Cui", "Alex Beutel", "Christos Faloutsos", "Shiqiang Yang"], "n_citations": 151, "n_key_citations": 10, "score": 0}, {"corpus_id": 18801049, "title": "Catching Synchronized Behaviors in Large Networks", "abstract": "Given a directed graph of millions of nodes, how can we automatically spot anomalous, suspicious nodes judging only from their connectivity patterns? Suspicious graph patterns show up in many applications, from Twitter users who buy fake followers, manipulating the social network, to botnet members performing distributed denial of service attacks, disturbing the network traffic graph. We propose a fast and effective method, CatchSync, which exploits two of the tell tale signs left in graphs by fraudsters: (a) synchronized behavior: suspicious nodes have extremely similar behavior patterns because they are often required to perform some task together (such as follow the same user) and (b) rare behavior: their connectivity patterns are very different from the majority. We introduce novel measures to quantify both concepts \"synchronicity\" and \"normality\" and we propose a parameter free algorithm that works on the resulting synchronicity normality plots. Thanks to careful design, CatchSync has the following desirable properties: (a) it is scalable to large datasets, being linear in the graph size; (b) it is parameter free; and (c) it is side information oblivious: it can operate using only the topology, without needing labeled data, nor timing information, and the like. while still capable of using side information if available. We applied CatchSync on three large, real datasets, 1 billion edge Twitter social graph, 3 billion edge, and 12 billion edge Tencent Weibo social graphs, and several synthetic ones; CatchSync consistently outperforms existing competitors, both in detection accuracy by 36% on Twitter and 20% on Tencent Weibo, as well as in speed.", "venue": "ACM Trans. Knowl. Discov. Data", "year": 2016.0, "author_names": ["Meng Jiang", "Peng Cui", "Alex Beutel", "Christos Faloutsos", "Shiqiang Yang"], "n_citations": 56, "n_key_citations": 3, "score": 0}, {"corpus_id": 204960393, "title": "Mining Anomalies using Static and Dynamic Graphs", "abstract": "Data generated in a multitude of diverse contexts today have relational and temporal characteristics, with multiple entities interacting with each other and also evolving over time. Examples range from e commerce logs to online social networks to the internet of things. Our thesis addresses the problem of detecting and predicting anomalies deviations from usual patterns in such settings. Anomalies often encode suspicious, fraudulent or malicious behavior. They do not just influence users into making sub optimal decisions but also steadily erode their trust in businesses. As such, algorithms to detect ongoing anomalies and warn against upcoming anomalies have high impact for businesses and end users alike. In the first part of the thesis, we focus on the case where only static connectivity information is known, and the goal is to infer labels for vertices, e.g. whether a user account is honest or fraudulent, from limited labeled data. 
Our completed work broadens the scope of literature by handling heterogeneous graphs, and leveraging label uncertainty for more accurate vertex labeling. In the second part of the thesis, we mine anomalies from data where the connectivity evolves over time. Our primary focus here is on real time detection and early warning so as to enable timely corrective or preventive measures against anomalies. Our completed work can detect anomalous dense subgraphs and edges in near real time, by only storing a small synopsis of the graph seen so far and requiring no supervision. We also show how to early warn against user labeled anomalies in the presence of confounding interventions. As part of ongoing and future work, we will continue to push on both fronts by investigating the importance of higher order structures for vertex labeling and characterizing the anomalousness of any given graph substructure or motif.", "venue": "", "year": 2019.0, "author_names": ["Dhivya Eswaran"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 207207305, "title": "Effective connectivity inferred from fMRI transition dynamics during movie viewing points to a balanced reconfiguration of cortical interactions", "abstract": "ABSTRACT Our behavior entails a flexible and context sensitive interplay between brain areas to integrate information according to goal directed requirements. However, the neural mechanisms governing the entrainment of functionally specialized brain areas remain poorly understood. In particular, the question arises whether observed changes in the regional activity for different cognitive conditions are explained by modifications of the inputs to the brain or its connectivity? We observe that transitions of fMRI activity between areas convey information about the tasks performed by 19 subjects, watching a movie versus a black screen (rest) We use a model based framework that explains this spatiotemporal functional connectivity pattern by the local variability for 66 cortical regions and the network effective connectivity between them. We find that, among the estimated model parameters, movie viewing affects to a larger extent the local activity, which we interpret as extrinsic changes related to the increased stimulus load. However, detailed changes in the effective connectivity preserve a balance in the propagating activity and select specific pathways such that high level brain regions integrate visual and auditory information, in particular boosting the communication between the two brain hemispheres. These findings speak to a dynamic coordination underlying the functional integration in the brain.", "venue": "NeuroImage", "year": 2018.0, "author_names": ["Matthieu Gilson", "Gustavo Deco", "Karl J Friston", "Patric Hagmann", "Dante Mantini", "Viviana Betti", "Gian Luca Romani", "Maurizio Corbetta"], "n_citations": 33, "n_key_citations": 4, "score": 0}, {"corpus_id": 198262521, "title": "Unsupervised decoding of single trial EEG reveals unique states of functional brain connectivity that drive rapid speech categorization decisions", "abstract": "Categorical perception (CP) is an inherent property of speech perception. The response time (RT) of listeners' perceptual speech identification are highly sensitive to individual differences. 
While the neural correlates of CP have been well studied in terms of the regional contributions of the brain to behavior, functional connectivity patterns that signify individual differences in listeners' speed (RT) for speech categorization is less clear. To address these questions, we applied several computational approaches to the EEG including graph mining, machine learning (i.e. support vector machine) and stability selection to investigate the unique brain states (functional neural connectivity) that predict the speed of listeners' behavioral decisions. We infer that (i) the listeners' perceptual speed is directly related to dynamic variations in their brain connectomics, (ii) global network assortativity and efficiency distinguished fast, medium, and slow RT, (iii) the functional network underlying speeded decisions increases in negative assortativity (i.e. became disassortative) for slower RTs, (iv) slower categorical speech decisions cause excessive use of neural resources and more aberrant information flow within the CP circuitry, (v) slower perceivers tended to utilize functional brain networks excessively (or inappropriately) whereas fast perceivers (with lower global efficiency) utilized the same neural pathways but with more restricted organization. Our results showed that neural classifiers (SVM) coupled with stability selection correctly classify behavioral RTs from functional connectivity alone with over 90% accuracy (AUC=0.9) Our results corroborate previous studies by confirming the engagement of similar temporal (STG) parietal, motor, and prefrontal regions in CP using an entirely data driven approach.", "venue": "", "year": 2019.0, "author_names": ["Rakib Al-Fahad", "Mohammed Yeasin", "Gavin M Bidelman"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 23426935, "title": "Identifying Functional Connectivity in Large Scale Neural Ensemble Recordings: A Multiscale Data Mining Approach", "abstract": "Identifying functional connectivity between neuronal elements is an essential first step toward understanding how the brain orchestrates information processing at the single cell and population levels to carry out biological computations. This letter suggests a new approach to identify functional connectivity between neuronal elements from their simultaneously recorded spike trains. In particular, we identify clusters of neurons that exhibit functional interdependency over variable spatial and temporal patterns of interaction. We represent neurons as objects in a graph and connect them using arbitrarily defined similarity measures calculated across multiple timescales. We then use a probabilistic spectral clustering algorithm to cluster the neurons in the graph by solving a minimum graph cut optimization problem. Using point process theory to model population activity, we demonstrate the robustness of the approach in tracking a broad spectrum of neuronal interaction, from synchrony to rate co modulation, by systematically varying the length of the firing history interval and the strength of the connecting synapses that govern the discharge pattern of each neuron. We also demonstrate how activity dependent plasticity can be tracked and quantified in multiple network topologies built to mimic distinct behavioral contexts. 
We compare the performance to classical approaches to illustrate the substantial gain in performance.", "venue": "Neural Computation", "year": 2009.0, "author_names": ["Seif Eldawlatly", "Rong Jin", "Karim G Oweiss"], "n_citations": 79, "n_key_citations": 2, "score": 0}, {"corpus_id": 54057863, "title": "Improved Community Detection using Deep Embeddings from Multilayer Graphs", "abstract": "Community detection is a challenging, yet crucial, problem while mining large scale graph structured data. Most existing approaches solve this problem by mapping nodes into a vector space and performing unsupervised learning with the resulting embeddings. In cases where multiple types of connectivity patterns exist for the set of nodes, commonly modeled as multilayer graphs, new strategies are required to model the inter layer dependencies in order to perform effective inferencing. In this paper, we focus on learning embeddings for each node of a multilayer graph through neural modeling techniques, such that the complex dependencies can be concisely encoded into low dimensional representations. Referred to as multilayer graph embeddings, these representations can be utilized for discovering community structure in a scalable fashion, even with a large number of layers. Furthermore, in order to ensure that the semantics that persist over a longer range in the network are well modeled, we propose to refine the multilayer embeddings via a proxy clustering loss and a graph modularity measure. Using real world datasets, we demonstrate that this algorithm generates scalable and robust representations, and outperforms existing multilayer community detection approaches. Introduction Community Detection in Multilayer Graphs: Graphs are natural data structures to represent relational data, and hence modeling and inferencing with graph structured data have become central to a wide range of applications, such as social network analysis (Eagle and Pentland 2006) recommendation systems (Rao et al. 2015) neurological modeling (Fornito, Zalesky, and Breakspear 2013) etc. Though some of these applications require supervised or semi supervised learning formulations, mining large networks to identify cohesive clusters of densely connected nodes is a highly prevalent idea in the graph mining literature (Blondel et al. 2008; Kim and Lee 2015) Referred to as community detection, this unsupervised learning problem is most commonly addressed by mapping nodes into a vector space and performing clustering using the resulting embeddings (Dong et al. 2012; *This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE AC52 07NA27344. Copyright c (c) 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org) All rights reserved. Ding, Lin, and Ishwar 2016; Yang et al. 2016) These latent low dimensional embeddings can be inferred by optimizing with a variety of measures that describe the network structure examples include decomposition of the graph Laplacian matrix (Ng, Jordan, and Weiss 2002) stochastic factorization of the adjacency matrix (Ahmed et al. 2013; Tang et al. 2015) and decomposition of the modularity matrix (Newman 2006; Chen, Kuzmin, and Szymanski 2014; Yang et al. 2016) etc. Until recently, the majority of existing work has focused on discovering community structure from a single network. 
However, with the emergence of multiview network data in real world scenarios, commonly represented as multilayer graphs, community detection has become more challenging. In general, multilayer graphs provide complementary views of connectivity patterns for the same set of nodes, thus requiring the need to model complex dependency structure across the views. The heterogeneity in the relationships, while providing richer information, makes statistical inferencing challenging. Furthermore, the varying levels of sparsity in different layers and the inherent uncertainties in neighborhoods, e.g. noisy edges or outliers, add to the complexity of this problem. Existing work on community detection from multilayer graphs can be broadly categorized into (a) methods that obtain a consensus community structure by fusing information from different layers and producing a single community label for the set of corresponding nodes (Dong et al. 2012; Dong et al. 2014; Kim, Lee, and Lim 2017; Tagarelli, Amelio, and Gullo 2017) and (b) methods that infer a separate embedding for a node in every layer, while exploiting the inter layer dependencies, and produce multiple potential community associations for each node (Mucha et al. 2010; Bazzi et al. 2016) In this paper, we address the problem of building effective latent embeddings for nodes on every layer from multilayer graph data, and our approach falls in the latter category. Constructing Node Embeddings: At their core, node embedding approaches attempt to identify low rank representations that can best represent the network topology. Despite their broad applicability, several of these approaches produce linear embeddings for nodes, naturally motivating the use of deep neural networks to potentially produce more expressive, non linear embeddings. Consequently, stacked graph auto encoder style solutions have been proposed ar X iv :1 81 1. 12 15 6v 1 cs .S I] 2 0 Se p 20 18 (Yang et al. 2016) that directly transform the objective measure (e.g. modularity matrix) into an undercomplete representation through a reconstruction cost. In addition to producing non linear mappings, deep learning approaches enable the use of robust reconstruction losses in lieu of a simple `2 measure (Thiagarajan et al. 2016) and supports the inclusion of additional prior constraints on community structure (Yang et al. 2016) A known limitation of node embedding techniques has been their scalability (e.g. Eigen value decomposition) with large scale graphs, and this issue persists even with graph autoencoders. In order to combat this limitation, recent approaches, such as DeepWalk (Perozzi, Al Rfou, and Skiena 2014) and Node2Vec (Grover and Leskovec 2016) have resorted to a distributional hypothesis, popularly adopted in language modeling (Harris 1954) where co occurrence of two nodes in short random walks implies a strong notion of semantic similarity. As a result, by extending highly scalable neural embedding techniques such as Word2Vec (Mikolov et al. 2013) to the construction of node embeddings, one can obtain state of the art results in community detection with single layer graphs. Proposed Work: In this paper, we develop a novel scalable technique for obtaining deep node embeddings from multilayer graphs. We show that a naive extension of DeepWalk to the multilayer case, that performs independent random walks on each of the layers, can be worse than even simple baselines, thus emphasizing the need to explicitly model dependencies across the different layers. 
Consequently, we propose to parameterize virtual edges to allow information flow between layers. Furthermore, the premise of using short random walks to infer the underlying semantic structure relies on the assumption that the networks are highly sparse and the node co occurrences follow a power law. However, by allowing inter layer edges, that assumption can be violated in cases where the semantics can persist over even longer walks. We address this challenge by including a refinement stage, where the multilayer embeddings are finetuned to produce more cohesive communities. In particular, we use entropy based proxy clustering cost and modularity based refinement. We show that the proposed approach is highly effective for as many as 37 layers and it outperforms existing approaches for multilayer community detection. Mathematical Preliminaries Definitions: A single layer undirected, unweighted graph is represented by G (V, E) where V denotes the set of nodes with cardinality |V| N and E denotes the set of edges. The goal of embedding techniques is to generate latent representations, X RNxd, where d is the desired number of latent dimensions. A multilayer graph is represented using a set of L inter dependent graphs G (V, E) for l 1, L, where there exists a node mapping between every pair of layers to indicate which vertices in one graph correspond to vertices in the other. Deep Embeddings for Network Analysis: The scalability challenge of factorization techniques has motivated the use of deep learning methods to obtain node embeddings. The earliest work to report results on this direction was the DeepWalk algorithm by Perozzi et al. (Perozzi, Al Rfou, and Skiena 2014) Interestingly, it draws analogy between node sequences generated by short random walks on graphs and sentences in a document corpus. Given this formulation, the authors utilize popular language modeling tools to obtain latent representations for the nodes (Mikolov et al. 2013) Let us consider a simple metric walkWt in step t, which is rooted at the vertex vi. The transition probability between the nodes vi and vj can be expressed as P (Wt+1 vj |Wt vi) h(|xi xj||2/s) (1) where |xi xj||2 indicates the similarity metric between the two vertices in the latent space to be recovered and h is a linking function that connects the vertex similarity to the actual co occurrence probability. With appropriate choice of the walk length, the true metric can be recovered accurately from the co occurrence statistics constructed using random walks. Furthermore, the authors note that the frequency in which vertices appear in the short random walks follows a power law distribution, similar to words in natural language. Given a length S sequence of words, (w0, w1, wS 1) wherews denotes a word in the vocabulary, neural word embeddings attempt to obtain vector spaces that can recover the likelihood of observing a word given its context, i.e. P (ws|w0, w1, ws 1) over all sequences. Extending this idea to the case of graphs, a random walk on the nodes, starting from node vi, produces the sequence analogous to sentences in language data. 
Modularity based Community Detection: A popular measure used in community detection algorithms is the modularity function Q (Newman 2006) defined as the difference between the number of edges within cohesive communities and the expected number of", "venue": "ArXiv", "year": 2018.0, "author_names": ["Huan Song", "Jayaraman J Thiagarajan"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 209314283, "title": "Decoding of single trial EEG reveals unique states of functional brain connectivity that drive rapid speech categorization decisions.", "abstract": "Categorical perception (CP) is an inherent property of speech perception. The response time (RT) of listeners' perceptual speech identification is highly sensitive to individual differences. While the neural correlates of CP have been well studied in terms of the regional contributions of the brain to behavior, functional connectivity patterns that signify individual differences in listeners' speed (RT) for speech categorization is less clear. To address these questions, we applied several computational approaches to the EEG, including graph mining, machine learning (i.e. support vector machine) and stability selection to investigate the unique brain states (functional neural connectivity) that predict the speed of listeners' behavioral decisions. We infer that (i) the listeners' perceptual speed is directly related to dynamic variations in their brain connectomics, (ii) global network assortativity and efficiency distinguished fast, medium, and slow RT, (iii) the functional network underlying speeded decisions increases in negative assortativity (i.e. became disassortative) for slower RTs, (iv) slower categorical speech decisions cause excessive use of neural resources and more aberrant information flow within the CP circuitry, (v) slower responders tended to utilize functional brain networks excessively (or inappropriately) whereas fast responders (with lower global efficiency) utilized the same neural pathways but with more restricted organization. Our results showed that neural classifiers (SVM) coupled with stability selection correctly classify behavioral RTs from functional connectivity alone with over 92% accuracy (AUC=0.9) Our results corroborate previous studies by supporting the engagement of similar temporal (STG) parietal, motor, and prefrontal regions in CP using an entirely data driven approach.", "venue": "Journal of neural engineering", "year": 2019.0, "author_names": ["Rakib Al-Fahad", "Mohammed Yeasin", "Gavin M Bidelman"], "n_citations": 14, "n_key_citations": 1, "score": 0}]} -{"query": "Buddhism after mao", "session_id": 2255795081427968, "user_id": 5901516120075555, "candidates": [{"corpus_id": 225816141, "title": "Buddhism after Mao: Negotiations, Continuities and Reinventions Edited by Ji Zhe, Gareth Fisher and Andre Laliberte Honolulu: University of Hawai'i Press, 2019 viii 355 pp. 
$72.00 ISBN 978 0 8248 7734 7", "abstract": "", "venue": "The China Quarterly", "year": 2020.0, "author_names": ["Natasha Heller"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 229191187, "title": "Buddhism after Mao: Negotiations, Continuities, and Reinventions, edited by Ji Zhe, Gareth Fisher, and Andre Laliberte", "abstract": "", "venue": "", "year": 2020.0, "author_names": ["Tzu-Lung Chiu"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 146991445, "title": "Buddhism Under Mao", "abstract": "", "venue": "", "year": 1972.0, "author_names": ["Holmes H Welch"], "n_citations": 56, "n_key_citations": 1, "score": 0}, {"corpus_id": 171306684, "title": "Recovering Buddhism in Modern China", "abstract": "AcknowledgmentsIntroductionPart I: Republican Era Modernity1. Buddhist Activism, Urban Space, and Ambivalent Modernity in 1920s Shanghai, by J. Brooks Jessup2. Buddhism and the Modern Epistemic Space: Buddhist Intellectuals in the Science and Philosophy of Life Debates, by Erik J. Hammerstrom3. A Revolution of Ink: Chinese Buddhist Periodicals in the Early Republic, by Gregory Adam ScottPart II: Midcentury War and Revolution4. Resurrecting Xuanzang: The Modern Travels of a Medieval Monk, by Benjamin Brose5. Buddhist Efforts for the Reconciliation of Buddhism and Marxism in the Early Years of the People's Republic of China, by Xue Yu6. The Communist Dismantling of Temple and Monastic Buddhism in Suzhou, by Jan KielyPart III: Contemporary Social Practice7. Mapping Religious Difference: Lay Buddhist Textual Communities in the Post Mao Period, by Gareth Fisher8. \"Receiving Prayer Beads\" A Lay Buddhist Ritual Performed by Menopausal Women in Ninghua, Western Fujian, by Neky Tak ching CheungBibliographyList of ContributorsIndex", "venue": "", "year": 2016.0, "author_names": ["Jan Kiely", "J Brooks Jessup"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 148186786, "title": "Chinese Buddhism as a Social Force", "abstract": "This article examines the level of religious mobilization of Buddhism in post Mao China and explores the potential of Buddhism for reconfiguring the relationships between religion, state, and society. In the first part, with existing data, three aspects of the ongoing Buddhist revival are measured: the number of lay Buddhists, the size and composition of sangha (Buddhist clergy) and the number and geographical repartition of monasteries. In the second part, the author analyzes three possible roles that Buddhism may play in Chinese public life, which are, respectively, a spiritual reference for political protest, a source of civil religion, and an element of the state's soft power. The author argues that although Buddhism has become a basic system of symbolic reference for 10 to 20 percent of the Chinese adult population, it is politically conservative and contributes little to changing the existing social structure and power relations", "venue": "", "year": 2012.0, "author_names": ["Zhe Ji"], "n_citations": 17, "n_key_citations": 1, "score": 0}, {"corpus_id": 142281151, "title": "BUDDHISM: RETHINKING SEXUAL MISCONDUCT", "abstract": "Man has actively engaged in creating religions ever since the beginning of humankind. 
Religion, reversely, creates an illusory reality for man to live in, which sets its systematic moral sanction that can be rendered a double edge sword: one edge works as moral enhancement and the other what I call moral terrorism, derived from the dominant moral claim and the fear of inability or failure to fulfill. This article explores under the revival of Buddhism in post Mao China, how the dominant interpretation of sexual misconduct has, instead of functioning as initially intended, victimized women and queer bodies, pushing them to the forefront of moral criticism. Through textual analysis and sociological approach, the article attempts to give an up to date interpretation to sexual misconduct, largely not only helping man, oftentimes stuck in such a dilemma, abstain from growing materialism but liberate from fear created by man himself.", "venue": "", "year": 2012.0, "author_names": ["Huai Bao"], "n_citations": 6, "n_key_citations": 1, "score": 0}, {"corpus_id": 146958425, "title": "The Chinese Buddhist Ecology in Post Mao China: Contours, Types and Dynamics", "abstract": "The author delineates the configuration of the Chinese Buddhist ecology in post Mao China by focusing on three major types of religious actor found in the ecology. She spells out how the interactions between the internal characteristics of religious groups and external structural conditions have shaped the development patterns of groups in each type. First, the dominant Buddhist temples, which enjoy state recognition, have been beset by the hollowing out process. Second, the type of Buddhist groups with ambiguous legal status has been growing vigorously in the interstices of the current Chinese socio political structure but faces uncertainties. An array of actors and forms, including self appointed monks and the mixed form of Buddhism and popular religion, exist on the fringe of institutional Buddhism and constitute the third type. Within this type, the syncretic sects, receiving censure from both the state and the Buddhist establishment, are forced to operate underground.", "venue": "", "year": 2011.0, "author_names": ["Sun Yanfei"], "n_citations": 19, "n_key_citations": 0, "score": 0}, {"corpus_id": 144863880, "title": "The Central Position of the Shan/Tai Buddhism for the Socio Political Development of Wa and Kayah Peoples", "abstract": "This paper concerns work I have done on the China Burma border between 2001 and 2007, with background of work with Shan both in Burma and in North Western Thailand. It will be about the place of the Shan and their Buddhism in the network of ethnic and trade relations on this border. It will raise questions about Shan Monastic traditions. On the one hand I have worked on the nature of Wa (Pirok) Theravada Buddhism and the history of the Wa 'kingdom' of Ban Hong, and the Shan have played a central role as source of knowledge about Buddhism and of kingship, providing models of both for these Wa. A number of interesting questions arise about the Shan sources of models of Buddhist monastic organisation here; and it is quite clear that Wa 'kingship' was based upon the Shan notion of a Caofa or Cao Mang. The second focus (during most of 2003, mostly at Ruili/Meng Mao) has been the cross border, inter ethnic trade system chiefly in gemstones and jade. 
In this context the Shan have played a central role as what an.", "venue": "", "year": 2009.0, "author_names": ["Chit Su Hlaing"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 145177982, "title": "Buddhism and Christianity: Rivals and Allies", "abstract": "Diverse worldviews in today's world contrasts and comparisons between Buddhism and Christianity Buddhism in the context of Chinese religion and philosophy meditation in the two traditions Nirvana versus God Hua yen, Buddhism and modern Japanese thought new Christian interpretations science, liberalism and religion continuities and discontinuities between Mao Zedong thought and the traditional religions of China Buddhism, Christianity complementarity? Buddhism, Christianity and other religions towards a higher order agreement on worldviews the coming victory of pluralism. Appendix: the Western meaning of Eastern philosophies.", "venue": "", "year": 1993.0, "author_names": ["Ninian Smart"], "n_citations": 21, "n_key_citations": 0, "score": 0}, {"corpus_id": 59393123, "title": "The Protection and Development of Guizhou Ethnic Characteristic Culture from the Perspective of Ecological Civilization On the Experience and Enlightenment of Buddhist Cultural Localization", "abstract": "Since the late Han Dynasty was introduced into China, Buddhism has quickly harmonized with local Confucian and Taoist culture in the fields of religion, art and literature by adhering to great compassion for all the common people and equality of all living creatures, to the extraordinary ideological level of detachment from secular world and the pursuit of peace, happiness and wisdom. After two thousand years spread, inheritance, development and evolution, Buddhism gradually integrated into the local culture, converged and formed the trinity culture pattern of Confucianism, Buddhism and Taoism, which highlights the value concept of pluralism and the spirit of tolerance of coexistence of Chinese traditional culture. On the one hand, the active guide of Buddhism for people's yearning for the value pursuit of peace, happiness and wisdom, equality, and the reflections on life of equality of all living creatures and cause and effect transmigration are intrinsically in accordance with the harmonious development between people and nature, economic construction and environmental protection advocated by ecological civilization, which conforms to the value concept of sustainable economic and social development and exerts the important enlightenment significance to the protection of Guizhou ethnic characteristic culture under the background of urbanization. On the other hand, the experience and practice of localization of Buddhist culture exert the vital significance to the development of Guizhou ethnic characteristic culture under the background of urbanization.", "venue": "", "year": 2017.0, "author_names": ["Jin-Ru Mao", "Bowen Zhang"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "safety in neighborhoods", "session_id": 3323277646110868, "user_id": 399276823977584, "candidates": [{"corpus_id": 17972809, "title": "Neighborhoods and health", "abstract": "Features of neighborhoods or residential environments may affect health and contribute to social and race/ethnic inequalities in health. The study of neighborhood health effects has grown exponentially over the past 15 years. 
This chapter summarizes key work in this area with a particular focus on chronic disease outcomes (specifically obesity and related risk factors) and mental health (specifically depression and depressive symptoms) Empirical work is classified into two main eras: studies that use census proxies and studies that directly measure neighborhood attributes using a variety of approaches. Key conceptual and methodological challenges in studying neighborhood health effects are reviewed. Existing gaps in knowledge and promising new directions in the field are highlighted.", "venue": "Annals of the New York Academy of Sciences", "year": 2010.0, "author_names": ["Ana V Diez Roux", "Christina F Mair"], "n_citations": 1987, "n_key_citations": 131, "score": 1}, {"corpus_id": 9277333, "title": "The neighborhoods they live in: the effects of neighborhood residence on child and adolescent outcomes.", "abstract": "This article provides a comprehensive review of research on the effects of neighborhood residence on child and adolescent well being. The first section reviews key methodological issues. The following section considers links between neighborhood characteristics and child outcomes and suggests the importance of high socioeconomic status (SES) for achievement and low SES and residential instability for behavioral/emotional outcomes. The third section identifies 3 pathways (institutional resources, relationships, and norms/collective efficacy) through which neighborhoods might influence development, and which represent an extension of models identified by C. Jencks and S. Mayer (1990) and R. J. Sampson (1992) The models provide a theoretical base for studying neighborhood mechanisms and specify different levels (individual, family, school, peer, community) at which processes may operate. Implications for an emerging developmental framework for research on neighborhoods are discussed.", "venue": "Psychological bulletin", "year": 2000.0, "author_names": ["Tama Leventhal", "Jeanne Brooks-Gunn"], "n_citations": 3092, "n_key_citations": 205, "score": 0}, {"corpus_id": 17153821, "title": "Neighborhood based differences in physical activity: an environment scale evaluation.", "abstract": "OBJECTIVES This study evaluated a neighborhood environment survey and compared the physical activity and weight status of the residents in 2 neighborhoods. METHODS On 2 occasions, 107 adults from neighborhoods with differing \"walkability\" were selected to complete a survey on their neighborhood environment. Physical activity was assessed by self report and by accelerometer; height and weight were assessed by self report. RESULTS Neighborhood environment characteristics had moderate to high test retest reliabilities. Residents of high walkability neighborhoods reported higher residential density, land use mix, street connectivity, aesthetics, and safety. They had more than 70 more minutes of physical activity and had lower obesity prevalence (adjusted for individual demographics) than did residents of low walkability neighborhoods. CONCLUSIONS The reliability and validity of self reported neighborhood environment subscales were supported. Neighborhood environment was associated with physical activity and overweight prevalence.", "venue": "American journal of public health", "year": 2003.0, "author_names": ["Brian E Saelens", "James F Sallis", "Jennifer B Black", "Diana Chen"], "n_citations": 1675, "n_key_citations": 125, "score": 0}, {"corpus_id": 146392027, "title": "Neighborhoods and health", "abstract": "1. 
Introduction 2. Neighbourhoods and Health: An Overview PART I: METHODOLOGICAL AND CONCEPTUAL APPROACHES TO STUDYING NEIGHBOURHOOD EFFECTS ON HEALTH 3. The examination of neighbourhood effects on health: Conceptual and methodological issues related to the presence of multiple levels of organization 4. Multilevel methods for public health research 5. The quantitative assessment of neighbourhood social environments 6. Neighbourhood level context and health: lessons from sociology 7. Geocoding and measurement of neighbourhood socioeconomic position: A U.S. perspective 8. Area based deprivation measures: A UK perspective PART II: NEIGHBOURHOODS AND HEALTH OUTCOMES 9. Neighbourhoods and infectious diseases 10. Infant health: Race, risk, and residence 11. Putting asthma into context: community influences on risk, behaviour, and intervention PART III: THE CONTOURS OF NEIGHBOURHOOD EFFECTS ON HEALTH 12. Residential segregation and health 13. Neighbourhoods and networks: The construction of safe places and bridges 14. Neighbourhoods, aging and functional limitations 15. Neighbourhoods, health research, and its relevance to public policy", "venue": "", "year": 2003.0, "author_names": ["Ichiro Kawachi", "Lisa F Berkman"], "n_citations": 831, "n_key_citations": 28, "score": 0}, {"corpus_id": 144936250, "title": "Do Neighborhoods Influence Child and Adolescent Development?", "abstract": "The effects of neighborhood characteristics on the development of children and adolescents are estimated, using two data sets, each of which contains information gathered about individual children and the families and neighborhoods in which they reside. There are reasonalby powerful neighborhood effects particularly effects of the presence of affluent neighbors on Childhood IQ, teenage births, and school leaving, even after the differences in the socioeconomic characteristics of families are adjusted for. The study finds that white teenagers benefit more from the presence of affluent neighbors than do black teenagers.", "venue": "American Journal of Sociology", "year": 1993.0, "author_names": ["Jeanne Brooks-Gunn", "Greg J Duncan", "Pamela Kato Klebanov", "Naomi Sealand"], "n_citations": 1669, "n_key_citations": 80, "score": 0}, {"corpus_id": 3263276, "title": "Unsafe to Play? Neighborhood Disorder and Lack of Safety Predict Reduced Physical Activity among Urban Children and Adolescents", "abstract": "Purpose. Lack of physical activity is associated with increased risk of overweight and cardiovascular disease, conditions associated with lower socioeconomic status (SES) Associations between activity levels of urban youth and limited access to safe recreation areas in their neighborhoods of residence were investigated. Design. Analyses of data from the Project on Human Development in Chicago Neighborhoods, a multilevel longitudinal study of families and communities, are reported. Setting. Chicago, Illinois. Subjects. Individual level data were obtained from 1378 youth 11 to 16 years old and caregivers living in 80 neighborhood clusters. Neighborhood level data were collected from 8782 community residents and videotapes of 15,141 block faces. Measures. Parental estimates of hours youth spent in recreational programming were used to estimate physical activity. A scale of residents' assessment of neighborhood safety for children's play was created; disorder measures came from videotaped observations. Results. Physical activity averaged 2.7 hours/week (SD 5.0) varying significantly across neighborhoods. 
Using hierarchical linear regression, SES, age, and male gender, but not body mass index, were independently associated with physical activity. Lower neighborhood safety and social disorder were significantly associated with less activity, controlling for demographics. Conclusions. One mechanism for reduced physical activity among youth may be the influence of unsafe neighborhoods. Neighborhood interventions to increase safety and reduce disorder may be efficacious in increasing physical activity, thereby reducing risk of overweight and cardiovascular disease.", "venue": "American journal of health promotion AJHP", "year": 2004.0, "author_names": ["Beth E Molnar", "Steven L Gortmaker", "Fiona C Bull", "Stephen L Buka"], "n_citations": 506, "n_key_citations": 14, "score": 0}, {"corpus_id": 23854385, "title": "Perceived School and Neighborhood Safety, Neighborhood Violence and Academic Achievement in Urban School Children.", "abstract": "Community and school violence continue to be a major public health problem, especially among urban children and adolescents. Little research has focused on the effect of school safety and neighborhood violence on academic performance. This study examines the effect of the school and neighborhood climate on academic achievement among a population of 3(rd) 5(th) grade students in an urban public school system. Community and school safety were assessed using the School Climate Survey, an annual city wide assessment of student's perception of school and community safety. Community violence was measured using the Neighborhood Inventory for Environmental Typology, an objective observational assessment of neighborhood characteristics. Academic achievement was measured using the Maryland State Assessment (MSA) a standardized exam given to all Maryland 3(rd) 8(th) graders. School Climate Data and MSA data were aggregated by school and grade. Objective assessments of neighborhood environment and students' self reported school and neighborhood safety were both strongly associated with academic performance. Increasing neighborhood violence was associated with statistically significant decreases from 4.2% 8.7% in math and reading achievement; increasing perceived safety was associated with significant increases in achievement from 16% 22% These preliminary findings highlight the adverse impact of perceived safety and community violence exposure on primary school children's academic performance.", "venue": "The Urban review", "year": 2010.0, "author_names": ["Milam Aj", "Furr-Holden Cdm", "Leaf Pj"], "n_citations": 123, "n_key_citations": 10, "score": 0}, {"corpus_id": 1265950, "title": "Food store availability and neighborhood characteristics in the United States.", "abstract": "OBJECTIVE This study provides a multivariate analysis of the availability of food store outlets in the US and associations with neighborhood characteristics on race, ethnicity and socioeconomic status (SES) METHOD Commercial food store outlet data are linked across 28,050 zip codes to Census 2000 data. Multivariate regression analyses are used to examine associations between the availability of chain supermarkets, non chain supermarkets, grocery stores and convenience stores and neighborhood characteristics on race, ethnicity and SES including additional controls for population size, urbanization and region. RESULTS Low income neighborhoods have fewer chain supermarkets with only 75% (p<0.01) of that available in middle income neighborhoods. 
Even after controlling for income and other covariates, the availability of chain supermarkets in African American neighborhoods is only 52% (p<0.01) of that in White neighborhoods with even less relative availability in urban areas. Hispanic neighborhoods have only 32% (p<0.01) as many chain supermarkets compared to non Hispanic neighborhoods. Non chain supermarkets and grocery stores are more prevalent in low income and minority neighborhoods. CONCLUSION The study results highlight the importance of various potential public policy measures for improving access to supermarkets that may serve to reduce systematic local area barriers that are shown to exist by race, ethnicity and income.", "venue": "Preventive medicine", "year": 2007.0, "author_names": ["Lisa M Powell", "Sandy J Slater", "Donka M Mirtcheva", "Yanjun Bao", "Frank J Chaloupka"], "n_citations": 992, "n_key_citations": 58, "score": 0}, {"corpus_id": 40418902, "title": "ASSESSING \"NEIGHBORHOOD EFFECTS\" Social Processes and New Directions in Research", "abstract": "Abstract This paper assesses and synthesizes the cumulative results of a new \"neighborhood effects\" literature that examines social processes related to problem behaviors and health related outcomes. Our review identified over 40 relevant studies published in peer reviewed journals from the mid 1990s to 2001, the take off point for an increasing level of interest in neighborhood effects. Moving beyond traditional characteristics such as concentrated poverty, we evaluate the salience of social interactional and institutional mechanisms hypothesized to account for neighborhood level variations in a variety of phenomena (e.g. delinquency, violence, depression, high risk behavior) especially among adolescents. We highlight neighborhood ties, social control, mutual trust, institutional resources, disorder, and routine activity patterns. We also discuss a set of thorny methodological problems that plague the study of neighborhood effects, with special attention to selection bias. We conclude with promising", "venue": "", "year": 2002.0, "author_names": ["Robert J Sampson", "Jeffrey D Morenoff", "Thomas Gannon-Rowley"], "n_citations": 3514, "n_key_citations": 168, "score": 0}, {"corpus_id": 21296949, "title": "Parents' perceptions of neighborhood safety and children's physical activity.", "abstract": "OBJECTIVE The obesity epidemic disproportionately affects minority and poor children. Negative perceptions of neighborhood safety in poor communities may affect overweight by inhibiting children's physical activity. This study investigates the degree to which parents in a poor inner city vs. a middle class suburban community limit their children's outdoor activity because of neighborhood safety concerns. METHOD Parents of children aged 5 10 years from an inner city family practice in a poor community and from a suburban pediatric practice in a middle class community completed a 20 item questionnaire. Parents estimated the amount of their child's activity in various situations and indicated their level of anxiety concerning gangs, child aggression, crime, traffic, and personal safety in their neighborhood. 
RESULTS Inner city children (n 204) engaged in less physical activity than suburban children (N 103) (P 0.001) Inner city parents expressed much greater anxiety about neighborhood safety than suburban parents (P 0.0001) In the inner city population, children's physical activity levels were negatively correlated with parental anxiety about neighborhood safety (r 0.18, P 0.05) CONCLUSIONS Inner city parents have high levels of anxiety about neighborhood safety. While these concerns may not entirely explain the discrepancy in activity levels between inner city and suburban children, a safe environment is crucial to increasing opportunities for physical activity.", "venue": "Preventive medicine", "year": 2006.0, "author_names": ["Lori Weir", "Debra Etelson", "Donald A Brand"], "n_citations": 337, "n_key_citations": 10, "score": 0}]} -{"query": "Development of a Digital Content-Free Speech Analysis Tool for the Measurement of Mental Health and Follow-Up for Mental Disorders: Protocol for a Case-Control Study", "session_id": 5067075983716378, "user_id": 3217267898288310, "candidates": [{"corpus_id": 201967112, "title": "Development of a Digital Content Free Speech Analysis Tool for the Measurement of Mental Health and Follow Up for Mental Disorders: Protocol for a Case Control Study", "abstract": "Background The prevalence of mental disorders worldwide is very high. The guideline oriented care of patients depends on early diagnosis and regular and valid evaluation of their treatment to be able to quickly intervene should the patient's mental health deteriorate. To ensure effective treatment, the level of experience of the physician or therapist is of importance, both in the initial diagnosis and in the treatment of mental illnesses. Nevertheless, experienced physicians and psychotherapists are not available in enough numbers everywhere, especially in rural areas or in less developed countries. Human speech can reveal a speaker's mental state by altering its noncontent aspects (speech melody, intonations, speech rate, etc) This is noticeable in both the clinic and everyday life by having prior knowledge of the normal speech patterns of the affected person, and with enough time spent listening to the patient. However, this time and experience are often unavailable, leaving unused opportunities to capture linguistic, noncontent information. To improve the care of patients with mental disorders, we have developed a concept for assessing their most important mental parameters through a noncontent analysis of their active speech. Using speech analysis for the assessment and tracking of mental health patients opens up the possibility of remote, automatic, and ongoing evaluation when used with patients' smartphones, as part of the current trends toward the increasing use of digital and mobile health tools. Objective The primary objective of this study is to evaluate measurements of participants' mental state by comparing the analysis of noncontent speech parameters to the results of several psychological questionnaires (Symptom Checklist 90 [SCL 90] the Patient Health Questionnaire [PHQ] and the Big 5 Test) Methods In this paper, we described a case controlled study (with a case group and one control group) The participants will be recruited in an outpatient neuropsychiatric treatment center. Inclusion criteria are a neurological or psychiatric diagnosis made by a specialist, no terminal or life threatening illnesses, and fluent use of the German language. 
Exclusion criteria include psychosis, dementia, speech or language disorders in neurological diseases, addiction history, a suicide attempt recently or in the last 12 months, or insufficient language skills. The measuring instrument will be the VoiceSense digital voice analysis tool, which enables the analysis of 200 specific speech parameters, and the assessment of findings using psychometric instruments and questionnaires (SCL 90, PHQ, Big 5 Test) Results The study is ongoing as of September 2019, but we have enrolled 254 participants. There have been 161 measurements completed at timepoint 1, and a total of 62 participants have completed every psychological and speech analysis measurement. Conclusions It appears that the tone and modulation of speech are as important, if not more so, than the content, and should not be underestimated. This is particularly evident in the interpretation of the psychological findings thus far acquired. Therefore, the application of a software analysis tool could increase the accuracy of finding assessments and improve patient care. Trial Registration ClinicalTrials.gov NCT03700008; https:/clinicaltrials.gov/ct2/show/NCT03700008 International Registered Report Identifier (IRRID) PRR1 10.2196/13852", "venue": "JMIR research protocols", "year": 2020.0, "author_names": ["Peter Tonn", "Yoav Degani", "Shani Hershko", "Amit Klein", "Lea Seule", "Nina Schulze"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 209494486, "title": "Development of the eTAP T: A measure of mental health professionals' attitudes and process towards e interventions", "abstract": "Background The development of technological applications within psychotherapy has opened up new opportunities for mental health professionals (MHPs) to address client need. Despite the clinical efficacy and utility of evidence based electronic interventions, MHPs' engagement with these interventions remains poorly understood. Objective The aim of the current study was to develop and conduct a preliminary psychometric investigation of the measurement properties of the electronic therapy attitudes and process questionnaire therapist version (eTAP T) Based upon the theory of planned behaviour (TPB) the eTAP T measures factors related to MHPs' engagement with e interventions for clients' mental health concerns. Methods Participants were 222 practicing MHPs who reported being in direct contact with clients. Participants completed the eTAP T and related measures with a subsample of 40 participants completing a two week follow up questionnaire. Results Exploratory factor analysis with item reduction resulted in a 12 item eTAP T, with four factors accounting for 82% of variance. The four factors (subjective norms, perceived behavioural control, attitudes and intentions) were consistent with the four TPB domains. The eTAP T demonstrated satisfactory validity and reliability as per the consensus based standards for the selection of health measurement instruments. Conclusions The development and preliminary psychometric investigation supported the validity and reliability of the eTAP T. Further research is required for confirmatory analyses. The eTAP T may be useful in identifying the training needs of MHPs and evaluating training programs. Specific areas for intervention, such as attitudes or perceived credibility may be identified and targetted, with the measure then also used to evaluate change across these domains. 
It is anticipated that the eTAP T may useful tool in improving uptake of digital interventions by MHPs.", "venue": "Internet interventions", "year": 2019.0, "author_names": ["Bonnie A Clough", "Dale Rowland", "Leanne M Casey"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 3613158, "title": "Pre post, mixed methods feasibility study of the WorkingWell mobile support tool for individuals with serious mental illness in the USA: a pilot study protocol", "abstract": "Introduction Successful competitive employment has been found to be related to enhanced self esteem, higher quality of life and reduced mental health service use for individuals living with serious mental illnesses (SMIs) including schizophrenia, bipolar disorder and major depression. The effectiveness of the individual placement and support model has been demonstrated in multiple randomised controlled trials in many countries. The management of stress, depression and anxiety in the workplace may be effectively enhanced through digital mental health interventions. The WorkingWell mobile support tool 'app' is specifically designed to meet the need for illness management support for individuals with SMI in the workplace, as an adjunct to professional treatment. Methods and analysis The WorkingWell app, grounded in evidence based supported employment, is informed by user experience design. It will be tested in a pre post design, mixed methods pilot study to explore issues of feasibility, acceptability and usefulness, and to provide preliminary data on the impact of use. Putative mediators of improved job tenure and psychological well being, including postintervention changes in social support, self efficacy and work related motivation, will be investigated. Forty individuals at least 18 years of age, meeting the eligibility requirements for supported employment services (ie, diagnosed with a mental illness meeting the criteria for severity, duration and treatment) working a minimum of 10 hours per week at study enrolment, and speaking, reading and writing in English will be recruited for the pilot study. Research staff will recruit individuals at community based mental health agencies; provide orientation to the study, the study smartphones and the WorkingWell app; conduct research interviews including standardised measures as well as semistructured items; and provide technical assistance in telephone calls and inperson meetings. A sample of 10 agency staff will be recruited to obtain further information on the feasibility, acceptability and usefulness of WorkingWell. Ethics and dissemination The study design and procedures are approved by the Dartmouth Hitchcock Medical Center Committee for the Protection of Human Subjects, the Massachusetts Department of Mental Health Central Office Research Review Committee and the Vermont Agency of Human Services Institutional Review Board. Study findings will be disseminated to agency partners, state agencies and funders, and to the research and technology development communities. 
Findings from the study will inform the design, data collection procedures and protocol for future full scale randomised controlled trial testing of the effectiveness of the WorkingWell app, as well as investigations of work related variables as mediators of psychological well being and quality of life for individuals with SMI.", "venue": "BMJ Open", "year": 2018.0, "author_names": ["Joanne Nicholson", "Spenser M Wright", "Alyssa M Carlisle"], "n_citations": 11, "n_key_citations": 1, "score": 0}, {"corpus_id": 91002642, "title": "The Healthy Brain Network Biobank: An open resource for transdiagnostic research in pediatric mental health and learning disorders", "abstract": "Innovations in methods and technologies are equipping researchers with unprecedented capabilities for detecting and characterizing pathologic processes in the developing human brain. As a result, there is growing enthusiasm about the prospect of achieving clinically useful tools that can assist in the diagnosis and management of mental health and learning disorders. For these ambitions to be realized, it is critical to accrue large scale multimodal datasets that capture a broad range of commonly encountered clinical psychopathology. To this end, the Child Mind Institute has launched the Healthy Brain Network (HBN) an ongoing initiative focused on creating and sharing a biobank comprised of data from 10,000 New York City area children and adolescents (ages 5 21) The HBN has adopted a community referred recruitment model. Specifically, study advertisements seek the participation of families who have concerns about one or more psychiatric symptoms in their child. The HBN Biobank houses data about psychiatric, behavioral, cognitive, and lifestyle (e.g. fitness, diet) phenotypes, as well as multimodal brain imaging, electroencephalography, digital voice and video recordings, genetics, and actigraphy. In this paper, we present the motivation, rationale and design for the HBN along with the initial implementation and evolution of the HBN protocols. 
We describe the first major open data release (n 664) containing descriptive, electroencephalography, and multimodal brain imaging data (resting state and naturalistic viewing functional MRI, diffusion MRI and morphometric MRI) Beyond accelerating transdiagnostic research, we discuss the potential of the HBN Biobank to advance related areas, such as biophysical modeling, voice and speech analysis, natural viewing fMRI and EEG, and methods optimization.", "venue": "", "year": 2017.0, "author_names": ["Lindsay M Alexander", "Jasmine Escalera", "Lei Ai", "Charissa F Andreotti", "Karina Febre", "Alexander Mangone", "Natan Vega Potler", "Nicolas Langer", "Alexis Alexander", "Meagan Kovacs", "Shannon G Litke", "Bridget O'Hagan", "Batya Bronstein", "Anastasia Bui", "Marijayne T Bushey", "Victoria Castagna", "Nicolas Camacho", "Elisha Chan", "Danielle Citera", "Jon C Clucas", "Samantha Cohen", "Megan Eaves", "Brian Fradera", "Natalie Grant-Villegas", "Gabriella Green", "Camille Gregory", "Emily Hart", "Shana Harris", "Catherine Lord", "Danielle Kahn", "Katya Kabotyanski", "Kayla Kleinman", "Bonhwang Koo", "Eliza Kramer", "Amy E Margolis", "Kathleen R Merikangas", "Judith Milham", "Giuseppe Minniti", "Rebecca Neuhaus", "Alexandra Nussbaum", "Yael Osman", "Lucas C Parra", "Kenneth R Pugh", "Amy Racanello", "Anita Restrepo", "Tian Saltzman", "Batya Septimus", "Russell H Tobe", "Rachel Waltz", "Anna Williams", "Anna J Yeo", "Francisco Xavier Castellanos", "Arno Klein", "Tomas Paus", "Bennett L Leventhal", "Cameron Craddock", "Harold S Koplewicz", "Michael Peter Milham"], "n_citations": 25, "n_key_citations": 3, "score": 0}, {"corpus_id": 218560354, "title": "PSYCHIATRIC NURSING Review A technological tool for treating social anxiety: Virtual reality", "abstract": "What is known on this subject? Virtual reality is not used for mental health care in Turkey, but it has been used to treat many psychological problems in other countries for more than 20 years. What is the contribution of this paper? This study is a compilation of the studies that examine the use of virtual reality to treat social anxiety problems. It conveyed the characteristics of virtual reality interventions and presented a holistic view to the literature concerning it. What is its contribution to the practice? Experimental studies of using virtual reality practices can be planned, and research projects regarding different problem fields can be conducted. 297 Omer Ozer, Virtual reality in social anxiety dx.doi.org/10.14744/phd.2019.75010 environments. VR based treatment can be done in a variety of environments with the technological opportunities of researchers and their different exposure interventions. The most common environments are based on manipulating spectator reactions or performances before various spectator profiles. This study is a compilation of the studies that examine the use of VR to treat social anxiety. Its aim is to provide information about their results related to the studies in which VR interventions are used against the social anxiety problem in relation to the VR practices used in psychological assistance period, about the methodological characteristics of the studies, and about the current state in this regard in Turkey. The Use of Virtual Reality for Psychological Assistance VR has been used successfully to treat a variety of psychological problems. Compilation studies, experimental studies without control groups, randomized controlled studies and even meta analyses have been conducted in this field. 
VR is used to treat post traumatic stress disorder,[6 10] obsessivecompulsive disorder,[11 13] alcohol abuse,[14] eating disorders, problems related to body image[15,16] and schizophrenia.[17,18] VR has been used both as an intervention method and as a diagnostic tool. VR treatments are based on cognitive behavioral therapy, which was developed as a treatment for anxiety disorders and is related to exposure practices[19] They were first used for specific phobias such as arachnophobia,[20,21] acrophobia,[22,23] and flying phobia.[24 28] VR is also used to treat claustrophobia, common anxiety disorder and panic attacks. VR treatments for anxiety disorders are based on exposure/facing interventions. Exposure therapy involves confronting the stimulus that causes anxiety,[29] which may be objects such as snakes or spiders, environments such as busses, restaurants or meeting halls, or situations such as speaking before an audience or communicating with the opposite gender. The main point here is confronting the issues causing disorders. Exposure, which is based on a behavioral method, is often used with the interventions based on the cognitive behavioral approach. The literature includes several types of exposure therapy. One is in vivo exposure therapy. Individuals directly confront the cause of their anxiety in vivo exposure therapy. Putting a cynophobic person and a dog in the same environment is an example of in vivo exposure therapy. Another exposure therapy type is based on the objects or environment avoided by individuals. This type of exposure therapy is called imaginal exposure therapy.[29] Another treatment, interoceptive exposure therapy, involves replicating the physical sensations that occur during panic attacks.[30] The use of digital environments created with VR has become more common. Exposure therapy in digital environments is called in virtuo exposure therapy. Although VR can be used to treat almost all anxiety disorders, most of the studies of this topic concern social anxiety. The details of these studies are provided under the effect title. In addition to experimental research, there are meta analyses that evaluate the research on VR treatments. Opris et al.[31] Parsons and Rizzo,[32] and Powers and Emmelkamp,[33] who have studied VR interventions in anxiety disorders, performed similar studies. Opris et al.[31] evaluated studies of VR based exposure therapy and included 23 studies in their analysis. They found that VR based exposure therapy yields more positive results, and that it yields results similar to those of conventional cognitive behavioral approaches. Evidence also indicates that VR treatments are as effective as conventional approaches, that their effects are long lasting, and that they have a dosage response relationship. No significant differences were found between two methods in regard to in vivo exposure, VR and exposure interventions. Parsons and Rizzo's meta analysis[32] evaluated the effectiveness of VR based exposure therapy for anxiety problems and specific phobias and found that it was effective, although the number of analyses of moderator effects was limited due to inconsistent reporting in the literature. 
They included 21 studies in their meta analysis, and it was stated that VRbased exposure therapy was an effective clinical psychology treatment for all anxiety and phobia cases (social anxiety, arachnophobia, acrophobia, panic attacks with agoraphobia and flying phobia)[32] Powers and Emmelkamp's meta analysis[33] included 13 studies (n=397) They found that VR based exposure therapy has a significant effect, that findings are specific to the problem, and that it is effective for subjective stress levels, and cognitive and behavioral psychophysiology. They found that results are not related to sample sizes, and that there is an exposure response relationship in VRbased exposure therapy.[33] The Characteristics of Virtual Reality Treatments for Social Anxiety Disorder This section briefly describes the VR treatments for social anxiety, reviewing their common aspects, types of exposure therapy, the characteristics of study samples, and experimental patterns and scenarios. Studies of VR treatments for social anxiety have been conducted with control groups who were on the waiting list of certain studies,[3 5] who did exposure therapy using their imagination,[34,35] or in vivo exposure therapy,[36,37] making it possible to compare the effectiveness of the method with different intervention methods. The literature has study samples with different characteristics. They were generally conducted with clinical cases.[37,38] However, one study did not have a clinical sample,[39] and another did not clearly define its sample's characteristics.[40] The participants in most of the studies were diagnosed with social anxiety.[3,36,37,41] Bouchard et al.[36] used a program with scenarios such as giving a speech at a meeting, introducing oneself, talking to 298 Psikiyatri Hemsireligi Dergisi Journal of Psychiatric Nursing so called relatives in an apartment and communicating with an insistent salesperson. The same program was used by Klinger et al.[37] In the VR environment developed by Heuett and Heuett[35] for the purpose of reducing the fear of speaking before an audience, the subject stands on a stage in a conference room. Package programs can be purchased in accordance with study aims, and study specific VR environments can be developed.[35] Some VR environments are interactive, but some environments have been designed to be non interactive. The Effects of Virtual Reality Treatments for Social Anxiety Some studies have focused on whether VR environments actually cause anxiety before examining their effectiveness as a treatment for social anxiety. Pertaub, Slater and Barker[42] attempted to answer to this question and assessed the anxiety related reactions of individuals who made a presentation before virtual audiences of eight male avatars. The experiment used three different virtual audiences: emotionally neutral spectators who were static during the entire experiment, positive spectators who displayed sincere and appreciative behaviors toward the speakers, and negative spectators who were bored and displayed hostile expressions. The study found that the negative audience clearly caused anxiety, and the participants were anxious despite their awareness that the environment was virtual.[42] Owens and Beidel[43] assessed the effects of a VR environment on 21 participants with social anxiety disorder and 24 participants with no disorders. Their study was conducted to determine the effect of giving an in vivo speech before an audience in a VR environment on physiological and subjective stimulation. 
All the participants had more anxiety symptoms than they did in face to face interactions. The evidence indicated that the VR environment significantly increased pulse rates, electrothermal activity and sinus arrhythmia, and the subjects reported that they experienced anxiety. Another significant result is that there were no significant differences in their physiological measurements during the face to face speech and the speech in the VR environment. Another study did physiological measurements of 12 participants who were assigned tasks in both a VR environment and real life. There were significant increases in systolic and diastolic blood pressure, and pulse rate as a response to all stressful tasks. There was also a physiological reaction. The VR treatments in the literature were reviewed from a different point of view, and their usefulness as a diagnosis instrument was assessed.[44] Among 119 participants, physiological measurements of 19 individuals who had the highest social anxiety scores and 18 individuals who had the lowest score in this regard were assessed. This pilot study found that VR solutions can also be used for diagnosis. These studies show that VR environments can successfully be used to diagnose anxiety reactions, and practices based on diagnostic assessment can be formed in the future. Only VR is used in some of the studies that examine its effectiveness as a treatment for social anxiety. A randomized controlled study conducted by An", "venue": "", "year": 2019.0, "author_names": ["Omer Akil Ozer", "Mustafa Kemal Yontem"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 199597022, "title": "Development of a computer based algorithm for supporting community pharmacists in providing personalised lifestyle interventions for men with prostate cancer", "abstract": "Background: The number of people living with and beyond a cancer diagnosis has increased, however survivors may experience long term side effects from treatment that can impact on physical fitness and cardiovascular health. Lifestyle interventions enhance outcomes after cancer treatment but innovations and technology are needed to provide consistency and scalability. Interventions to support exercise and dietary modification in secondary care settings have been limited by the lack of personalisation, clinician time and resources. Community pharmacies are well positioned to provide lifestyle advice for people with cancer and long term conditions. This study is the first to develop a tailored lifestyle intervention using a computer algorithm to enable community pharmacists to provide personalised advice for cancer patients. Objective: To create a computer based algorithm to support community pharmacists to deliver a tailored lifestyle intervention for men during and after treatment for prostate cancer. Method: An observational study was conducted at two UK centres involving 83 men with prostate cancer who were 3 36 months' post diagnosis. Physical fitness, strength and cardiovascular health were assessed. Qualitative interviews were undertaken with 20 participants to understand their interpretation of the assessment and analysed using a framework analysis. These data were used to inform our computer based algorithm and lifestyle prescriptions. Results: Physical fitness varied across participants. 
Limb strength was categorised with upper body strength low for 40% of men compared to their age (40 out of 83) and lower limb strength (44 of 83) 53% of men were low in comparison to age normative values. The Siconolfi step test provided classification of cardiopulmonary fitness with 26.5% (22 of 83) men unable to complete level 1 with very low physical fitness and 41% (34 of 83) of men moderate completing stage 2 of the test. Cardiovascular risk was categorised as high >20% QRISK2) in 41% of men contributed to by the number of men who had a high hip to waist ratio 72 of 83 men (86.7% indicating abdominal fat. Three emergent themes from the qualitative analysis highlighted different perceptions of the physical assessment experience. The algorithm provided a clear pathway for decision making, that it was safe and effective to enable community pharmacists to prescribe tailored lifestyle advice for men with prostate cancer. Conclusion: We have developed a computer algorithm that uses simple, safe and validated assessments to provide tailored lifestyle advice which addresses specific areas of cardiovascular risk, strength and physical fitness in men with prostate cancer. It generates a real time lifestyle prescription at the point of care and has been integrated into the software platform used by pharmacies in the UK. The algorithm was integrated into the software platform used by pharmacies within the UK. INTRODUCTION Improving cancer survivors' lifestyle to reduce future health risks In 2012, there were more than 32 million people living beyond 5 years of a cancer diagnosis globally [1] and in the UK half of all people diagnosed will now survive 10 years or more [2] Cancer is increasingly being recognised as a chronic disease and survivors are at elevated risk of disease recurrence and cardio metabolic problems [3, 4] Comorbid conditions such as obesity, hypertension, chronic heart disease and mental health problems, which are commonly reported in cancer survivors [5 7] are linked with poorer survival, reduced quality of life and higher healthcare costs [8] Evidence for the positive impact of lifestyle interventions in relation to recovery from cancer and reducing comorbidity has been well documented in several systematic reviews and meta analyses [911] Real time assessments of health and function are needed to support effective lifestyle interventions in follow up care and this is increasingly important as survivorship services are transferred to primary and community care settings [12, 13] Technology supported physical activity and nutrition interventions are an opportunity to deliver health interventions to adults with cancer, however current programmes lack an evidence base for benefit [14] The number of men with prostate cancer living longer is increasing because of improvements in diagnosis and the introduction of better treatments [15] Prostate cancer is one of the most commonly diagnosed cancers in developed countries [1] and 1.1 million men are diagnosed per year globally. 
This accounts for 15% of all cancer diagnoses in men [16] In the United Kingdom (UK) over 47,000 men are diagnosed with prostate cancer each year and 84% are now surviving 10 years or more [17] Enhancing men's health by addressing factors such as obesity and physical inactivity are vital, as excess body weight and poor lifestyle behaviours are associated with increased risk of prostate cancer recurrence, aggressiveness and comorbidity [18 20] Furthermore, prostate cancer treatments such as androgen deprivation therapy (ADT) widely used as an adjunct for localised and high risk disease, have been associated with adverse cardiovascular effects [21 26] Specific lifestyle recommendations for men on ADT exist in the UK with the National Institute for Health and Care Excellence (NICE) advocating a 12 week exercise programme to reduce fatigue symptoms [27] Cardio oncology recommendations for people with cancer on hormone therapies advise pre assessment of existing cardiovascular disease and lifestyle intervention to reduce risk of adverse effects [28 30] There is a growing requirement to include lifestyle interventions as part of the cancer pathway, with recent reports highlighting the value of prehabilitation and rehabilitation for recovery [31 33] Community pharmacy have a role in supporting cancer survivors Improved survivorship has shifted cancer care from secondary to primary and community care, providing links to existing health and well being services. Lifestyle or physical activity on prescription schemes (PARs) for individuals with long term conditions have been implemented in primary care settings for promoting health in a range of conditions [34 36] However, people with cancer are included as eligible 'at risk' populations only in Australia and New Zealand [37] PARs aim to increase a patient's physical activity levels via general practitioner or nurse referral to a specialist sport and exercise practitioner led supervised exercise programme [38] Such interventions have been shown to increase self reported physical activity and improve health but have the recognised issues of poor adherence and high costs [39 41] Increasingly, pharmacy teams are being encouraged to provide brief interventions to promote obesity management and increase physical activity in people with long term conditions, as a supplement to such schemes [42] Community pharmacies are well positioned to provide general lifestyle advice for people with cancer due to their experience with health checks, smoking cessation and obesity management [43 45] High street pharmacists are accessible and understand behavioural change techniques [43] Pharmacists can enquire about physical activity at medicine reviews, refer their clients to local services and integrate brief advice about physical activity into routine consultations [46] However, for such advice to be most effective it needs to be responsive to the person's own lifestyle needs, co morbidities and existing level of physical fitness. 
Community pharmacists have limited training in physical activity assessment, knowledge of exercise advice or cancer treatment adverse effects but with more training and support tools, could effectively deliver lifestyle interventions for cancer survivors [47, 48] Pharmacists also need skills in motivating men to maintain and adhere to healthy lifestyle changes [49] The challenge is to ensure that lifestyle interventions are personalised, appealing and accessible [50] This requires innovations in assessment and technology tools that can be used within existing computer technology used by pharmacy teams. Developing algorithm led computer interventions for pharmacy Remote support for lifestyle interventions is a growing area of patient care, which is driven by the rising use of internet, smartphones and mobile technology [51, 52] Digital health behaviour change interventions (DCBIs) have mainly focused on direct communication with patients using text messages, email and web based applications [53] These have been shown to be successful in empowering people with long term conditions [54] and cancer survivors [55] and can be used to promote lifestyle change and increase physical activity [56] Digital technology holds promise to encourage behaviour change but needs methodologies and tailored approaches to promote user engagement [14, 53] Computer based tailoring is a method of assessing individuals and selecting communication content using data driven decision rules that produce feedback automatically from a database of content elements [57] The use of use of computer based tailoring for supporting lifestyle interventions in a community pharmacy setting is unique and is based on the recommended guidance for developing digital interventions to promote behaviour change in health and healthcare [53] In this study, we assessed: (i) objective measurements of cardio metabolic health, strength and physical fitness of men with prostate cancer that could be undertaken within a pharmacy, (ii) how to communicate risk and promote lifestyle behaviour change, and (iii) developed a computer algorithm to provide a safe and personalised lifestyle prescription at the point of care.", "venue": "", "year": 2019.0, "author_names": ["Jennifer Spink BMedSci"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 209501606, "title": "Study protocol: The Dutch 20|30 Postmeningitis study: a cross sectional follow up of two historical childhood bacterial meningitis cohorts on long term outcomes", "abstract": "BackgroundBacterial meningitis (BM) is a serious, life threatening infectious disease of the central nervous system that often occurs in young children. The most common severe to moderate sequelae following BM are sensorineural hearing loss, neuromotor disabilities and mental retardation, while subtle sequelae include academic and behavioral disabilities. It is largely unknown whether these more subtle sequelae persist into adolescence and adulthood. Therefore, this study will investigate the very long term effects of childhood BM in later life. Better understanding of long term effects and early identification of adverse outcomes after BM are essential for more timely interventions. Additionally, certain single nucleotide polymorphisms (SNPs) are associated with disease severity and might predict adverse sequelae. These include SNPs in genes encoding for pathogen recognition and immune response upon infection. 
Accordingly, a secondary objective of this study is to investigate the role of genetic variation in BM and use any insights to predict short and long term outcomes.MethodsIn the Dutch 20|30 Postmeningitis study, adolescents and young adults (n 947) from two historical cohorts with a prior episode of BM during childhood will be enrolled into a cross sectional follow up investigation using mainly questionnaires that examine executive and behavioral functioning, health related quality of life, subjective hearing, mood and sleeping disorders, academic performance, and economic self sufficiency. The results will be compared to normative data by one sample t tests. Multivariable regression analysis will be used to assess for any associations with causative pathogens and severity of BM. Participants that complete the questionnaires will be approached to provide a swab for buccal DNA and subsequent sequencing analyses. Logistic regression models will be used to predict sequelae.DiscussionThe unique follow up duration of this cohort will enable us to gain insights into the possible very long term adverse effects of childhood BM and how these might impact on quality of life. The investigation of host genetic factors will contribute to the development of prediction models which will serve as prognostic tools to identify children who are at high risk of adverse outcome after BM.Trial RegistrationDutch Trial Register NTR 6891. Retrospectively registered 28 December 2017.", "venue": "BMC Pediatrics", "year": 2019.0, "author_names": ["O El Tahir", "Rogier C J de Jonge", "Sander Ouburg", "Servaas Antonie Morre", "A Marceline van Furth"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 13913179, "title": "Effects of cognitive behavioural therapy for insomnia on the mental health of university students: study protocol for a randomized controlled trial", "abstract": "BackgroundInsomnia, defined as repeated difficulties getting or staying asleep, is common in the general population. Such sleep difficulties are a problem in their own right, but increasingly it is being recognised that they may also be a contributory factor in the development of a wide range of mental health problems. Our focus is upon the relationship between insomnia and psychotic experiences, such as paranoia and hallucinations. Psychotic experiences commonly occur in mild forms in the general population and have been linked to disrupted sleep. These psychotic like experiences raise the risk of development of a clinical disorder. Our aim is to reduce insomnia in a large general population group, and examine the effect on paranoia and hallucinations at the age when mental health problems typically emerge. The primary hypotheses are that cognitive behaviour therapy (CBT) for insomnia will reduce insomnia and also levels of paranoia and hallucinations. The theoretical links will be substantiated by a planned mediation analysis. Improvements in a number of other mental health outcomes are also predicted.Methods/DesignWe will carry out a parallel group, randomised controlled trial of 2,614 students with insomnia in universities across the UK. In the Oxford Access for Students Improving Sleep (OASIS) trial, participants will be randomised to digital CBT for insomnia (in addition to treatment as usual) or treatment as usual. Online assessments will take place at zero, three, 10 (post treatment) and 22 (follow up) weeks. 
Primary outcomes are insomnia and psychotic like experiences (paranoia or hallucinatory experiences) at 10 weeks. Secondary outcomes are levels of mania, depression, anxiety, nightmares, psychological wellbeing, and the development of mental health disorders. All main analyses will be carried out at the end of the last follow up assessment and will be based on the intention to treat principle. The trial is funded by the Wellcome Trust.DiscussionThis study will be the first large scale causal test of the relationship between sleep disturbance and psychotic experiences. It will provide evidence concerning the clinical effects of treating insomnia in young adults.Trial registrationThis trial was registered with Current Controlled Trials (identifier: ISRCTN61272251) on 29 January 2015.", "venue": "Trials", "year": 2015.0, "author_names": ["Daniel Freeman", "Bryony Sheaves", "Guy M Goodwin", "Ly-Mee Yu", "Paul J Harrison", "Richard Emsley", "Sophie Bostock", "Russell G Foster", "Vanashree Wadekar", "Christopher Hinds", "Colin A Espie"], "n_citations": 34, "n_key_citations": 0, "score": 0}, {"corpus_id": 31576960, "title": "Tech giants enter mental health", "abstract": "In September 2015, the Director of the National Institute of Mental Health (NIMH) T. Insel, announced his departure from the NIMH to lead Google's Life Sciences Mental Health Division. His decision attracted global attention. Interestingly for the field of mental health, Google intends to only back innovations expected to be ten times \"10x\" better than competitors. Indeed, mental health care and research are beset with myriad challenges that may be better tackled using the informatic capacity that tech giants can leverage. The field of mental health captures arguably the largest amount of data of any medical specialty, given that it encompasses behaviour, the brain and the mind. The physical neuroscience of psychiatry is augmented by high resolution neuroimaging of various modalities, as well as \"omic\" data including genomics, epigenomics, proteomics, microbiomics and metabolomics. The growth of such big data aggregation in psychiatry provides unprecedented opportunities for exploration, descriptive observation, hypothesis generation, and prediction for clinical, research and business/operational issues. The scale of data outputs, however, means that computer models are required to assist humans to find and comprehend meaning and delineate non obvious patterns converting data to information, knowledge and wisdom. Computerized analysis of complex human behaviours such as speech may present an opportunity to move psychiatry beyond reliance on self report and clinical observation toward more objective measures of health and illness in the individual patient. A recent pilot study used automated speech analyses to predict later psychosis onset in youths at clinical high risk for psychosis1. The analysis assessed for semantic coherence and two syntactic markers of speech complexity. These speech features predicted psychosis development with 100% accuracy and outperformed classification from structured clinical interviews. Electronic health records (EHRs) have changed the landscape of clinical data collecting and sharing, facilitating more efficient care delivery. 
They provide multiple types of data about individual patient encounters, as well as longitudinal data about a patient's medical history over an extended period of time (see Hayes et al2 in this issue of the journal) An example of the value of EHR data comes from a study which developed a statistical suicide risk stratification model3. The model resulted from examining suicide attempts and completed suicide in a large cohort of patients who underwent assessment in a regional health service. Researchers compared EHR based predictions of suicidal behaviour at 3 months with clinician predictions, which were based on a checklist. The model derived EHR was superior (area under the ROC curves, AUC=0.79 vs. 0.58 using the checklist) Big biomedical data are currently scattered across databases, and intentionally isolated to protect patient privacy. Linking big data will enable physicians and researchers to test new hypotheses and identify areas of possible intervention4. An example of the value of data linkage between genomics and EHRs comes from a large scale application of the phenome wide association study (PheWAS) paradigm5. The researchers scanned for associations between 2,476 single nucleotide polymorphisms (previously implicated by genome wide association studies as mediators of human traits) and 539 EHR derived phenotypes in 4,268 individuals of European ancestry. Several new PheWAS findings were identified, including a cluster of association near the NDFIP1 gene for mental retardation, and an association near PLCL1 gene for developmental delays and speech disorder. With the number of smart devices (i.e. smartphones and tablets) reaching into the billions worldwide, there are increasing opportunities to harness their power and multifunctionality for clinical use. There are now several examples of psychoeducation based products in use for depression, bipolar disorder, dementia and psychological distress. Smartphones also have capacity to offer telemental health functions. These functions are increasingly viewed as useful opportunities for more rapid patient clinician engagement and offering services to geographically isolated areas. They are reported to be as good as in person care for diagnosis and treatment in comparative and non inferiority studies. However, there are concerns about effects on the therapeutic alliance, and more research is required in specific populations (i.e. geriatric, child and minorities)6. With the huge number of \"apps\" available to patients and clinicians, it is important to use sensible approaches to analyzing clinical value. A Mobile App Rating Scale has been developed7, and there are websites available which appraise digital mental health programs. Recent years have seen the rise and miniaturization of many wearable sensors, for personal health care, fitness and activity awareness, as well as the wireless networking of these devices with EHRs and smartphones. These innovations also coincide with the popularity of patient owned health records, community based management of disease aiming to avoid hospitalization, and finally participatory health care, where patients are hypothetically empowered for health behaviour change through accessing their own health data. 
Smart and connected health care aims to accelerate the development and use of innovative approaches that would support the much needed transformation of health care from reactive and hospital centered to preventive, proactive, evidence based, person centered and focused on well being rather than disease. The opportunities afforded by tech giants moving into mental health, with their capital, digital and data analysis tools, and human resource talent pools, provide much hope for mental health sufferers around the world. While the encounter of electronic approaches with health is not without its risks, surrounding data privacy, use and storage, its potential is overt8. The engagement of tech giants also raises many questions for how we train our next generation of researchers and clinicians. Convergence science involves the transdisciplinary integration of fields including computer science, physics, engineering, medicine, chemistry, mathematics, the arts and biology; synergy between government, academia and industry is also critical. Convergence psychiatry involves embedding convergence science into the clinical mental health care setting by closer integration of scientists, clinicians and industry, as well as enhanced education of health professionals. This approach is critical, given modern psychiatric research problems are characterized by their complexity, multi systemic nature and broad societal impact, hence making them poorly suited to siloed approaches of thinking and innovation. Care must be taken to ensure researchers and clinicians are exposed to these frontier fields, and potential mechanisms include hackathons (intensive collaborations with coders, designers and managers on projects to meet a specific brief) multidisciplinary research groups, educational systems involving convergence science concepts, and industry academic collaborations. Harris A. Eyre1,2, Ajeet B. Singh2, Charles Reynolds III3 1Discipline of Psychiatry, University of Adelaide, Adelaide, South Australia, Australia; 2Deakin University, IMPACT SRC, School of Medicine, Geelong, Victoria, Australia; 3University of Pittsburgh Medical Centre, Pittsburgh, PA, USA", "venue": "World psychiatry official journal of the World Psychiatric Association", "year": 2016.0, "author_names": ["Harris A Eyre", "Ajeet B Singh", "Charles F Reynolds"], "n_citations": 14, "n_key_citations": 0, "score": 0}, {"corpus_id": 208164451, "title": "Treatments for Poststroke Motor Deficits and Mood Disorders: A Systematic Review for the 2019 U.S. Department of Veterans Affairs and U.S. Department of Defense Guidelines for Stroke Rehabilitation", "abstract": "Background Early rehabilitation after stroke is essential to help reduce disability. Purpose To summarize evidence on the benefits and harms of nonpharmacologic and pharmacologic treatments for motor deficits and mood disorders in adults who have had stroke. Data Sources English language searches of multiple electronic databases from April 2009 through July 2018; targeted searches to December 2018 for studies of selective serotonin reuptake inhibitors (SSRIs) or serotonin norepinephrine reuptake inhibitors. Study Selection 19 systematic reviews and 37 randomized controlled trials addressing therapies for motor deficits or mood disorders in adults with stroke. Data Extraction One investigator abstracted the data, and quality and GRADE assessment were checked by a second investigator. 
Data Synthesis Most interventions (for example, SSRIs, mental practice, mirror therapy) did not improve motor function. High quality evidence did not support use of fluoxetine to improve motor function. Moderate quality evidence supported use of cardiorespiratory training to improve maximum walking speed and repetitive task training or transcranial direct current stimulation to improve activities of daily living (ADLs) Low quality evidence supported use of robotic arm training to improve ADLs. Low quality evidence indicated that antidepressants may reduce depression, whereas the frequency and severity of antidepressant related adverse effects was unclear. Low quality evidence suggested that cognitive behavioral therapy and exercise, including mind body exercise, may reduce symptoms of depression and anxiety. Limitation Studies were of poor quality, interventions and comparators were heterogeneous, and evidence on harms was scarce. Conclusion Cardiorespiratory training, repetitive task training, and transcranial direct current stimulation may improve ADLs in adults with stroke. Cognitive behavioral therapy, exercise, and SSRIs may reduce symptoms of poststroke depression, but use of SSRIs to prevent depression or improve motor function was not supported. Primary Funding Source U.S. Department of Veterans Affairs, Veterans Health Administration.", "venue": "Annals of Internal Medicine", "year": 2019.0, "author_names": ["K E D'anci", "Stacey Uhl", "Jeff Oristaglio", "Nancy M Sullivan", "Amy Y Tsou"], "n_citations": 9, "n_key_citations": 1, "score": 0}]} -{"query": "Scholarly journal evaluation based on panel data analysis", "session_id": 2055665381813292, "user_id": 1030841885142456, "candidates": [{"corpus_id": 13568471, "title": "Scholarly journal evaluation based on panel data analysis", "abstract": "This paper proposes a new method for indicator selection in panel data analysis and tests the method with relevant data on agricultural journals provided by the Institute of Scientific Technical Information of China. An evaluation exercise by the TOPSIS method is conducted as a comparison. The result shows that panel data analysis is an effective method for indicator selection in scholarly journal evaluation; journals of different disciplines should not be evaluated with the same criteria; it is beneficial to publish all the evaluation indicators; unavailability of a few indicators has a limited influence on evaluation results; simplifying indicators can reduce costs and increase efficiency as well as accuracy of journal evaluation.", "venue": "J. Informetrics", "year": 2009.0, "author_names": ["Liping Yu", "Xiaoming Shen", "Yuntao Pan", "Yishan Wu"], "n_citations": 11, "n_key_citations": 0, "score": 1}, {"corpus_id": 214765151, "title": "Are subsidies for Polish enterprises effective: empirical results based on panel data", "abstract": "Abstract The objective of this article was to identify and evaluate the effectiveness of subsidies used by companies, as well as to develop an approach to assess the effectiveness of subsidies for the manufacturing sector of Polish economy. In order to organise the results obtained by researchers dealing with the efficacy of subsidies, a meta analysis, i.e. a quantitative assessment of empirical literature, was carried out. 
Based on the data from the financial statements of medium sized and large Polish companies, published in Monitor Polski B (a former Official Journal of the Republic of Poland) an evaluation study was conducted to verify the research hypotheses. Based on the obtained results, it was found that the aid in the form of subsidies did not have a significant impact on the productivity of the subsidised companies, growth rate of assets or profitability.", "venue": "", "year": 2019.0, "author_names": ["Beata Gosk", "Natalia Nehrebecka"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 232132940, "title": "The Evaluation of Problem Based Learning Model Towards High School Students' Critical Thinking Skills: A Meta Analysis Study in Indonesia", "abstract": "ABSTRACT This study aims to see the effectiveness of applying PBL (Problem Based Learning) models in Indonesian High School students' mathematical critical thinking skills that are based on the series of similar studies. This study uses a meta analysis method from a number of national scholarly journal articles indexed by Google Scholar, Semantic Scholar, Garuda Portal, DOAJ, ERIC and direct URL from the national journals published between 2012 2019 based on inclusion criteria. The statistical data processing used Comprehensive Meta Analysis (CMA) V3 software. The results of the study based on the random effect model shows that the combined effect size is 1.201 with large effect criteria. It means that the application of the PBL model significantly has a greater effect on students' mathematical critical thinking skills than the application of conventional models. In addition, there are four study characteristics that are used in the analysis such as; research class, type of research school, year of research and sample size. There were significant differences in the application of the PBL model to students' mathematical critical thinking skills in terms of years of research based on the study characteristics. Meanwhile, from the research class, type of research school and sample size implicate no difference in the effect of applying the PBL model between the study groups. The findings unveil the effectiveness of using the PBL model in mathematical learning that is based on the study characteristics.", "venue": "", "year": 2020.0, "author_names": ["Yohannes Yohannes", "Dadang Juandi", "Nanang Diana Diana"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213858507, "title": "Can we automate expert based journal rankings? Analysis of the Finnish publication indicator", "abstract": "Abstract The publication indicator of the Finnish research funding system is based on a manual ranking of scholarly publication channels. These ranks, which represent the evaluated quality of the channels, are continuously kept up to date and thoroughly reevaluated every four years by groups of nominated scholars belonging to different disciplinary panels. This expert based decision making process is informed by available citation based metrics and other relevant metadata characterizing the publication channels. The purpose of this paper is to introduce various approaches that can explain the basis and evolution of the quality of publication channels, i.e. ranks. This is important for the academic community, whose research work is being governed using the system. 
Data based models that, with sufficient accuracy, explain the level of or changes in ranks provide assistance to the panels in their multi objective decision making, thus suggesting and supporting the need to use more cost effective, automated ranking mechanisms. The analysis relies on novel advances in machine learning systems for classification and predictive analysis, with special emphasis on local and global feature importance techniques.", "venue": "J. Informetrics", "year": 2020.0, "author_names": ["Mirka Saarela", "Tommi J Karkkainen"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 49681425, "title": "Journal editorship index for assessing the scholarly impact of academic institutions: An empirical analysis in the field of economics", "abstract": "Abstract Assessing the scholarly impact of academic institutions has become increasingly important. The achievements of editorial board members can create benchmarks for research excellence and can be used to evaluate both individual and institutional performance. This paper proposes a new method based on journal editor data for assessing an institution's scholarly impact. In this paper, a journal editorship index (JEI) that simultaneously accounts for the journal rating (JR) editor title (ET) and board size (BS) is constructed. We assess the scholarly impact of economics institutions based on the editorial boards of 211 economics journals (which include 8640 editorial board members) in the ABS Academic Journal Guide. Three indices (JEI/ET, JEI/JR, and JEI/BS) are also used to rank the institutions. It was found that there was only a slight change in the relative institutional rankings using the JEI/ET and JEI/BS compared to the JEI. The BS and ET weight factors did not have a substantial influence on the ranking of institutions. It was also found that the journal rating weight factor had a large effect on the ranking of institutions. This paper presents an alternative approach to using editorial board memberships as the basis for assessing the scholarly impact of economics institutions.", "venue": "J. Informetrics", "year": 2018.0, "author_names": ["Dengsheng Wu", "Jing Li", "Xiaoli Lu", "Jianping Li"], "n_citations": 10, "n_key_citations": 0, "score": 0}, {"corpus_id": 69284125, "title": "Multilevel Graph Based Decision Making in Big Scholarly Data: An Approach to Identify Expert Reviewer, Finding Quality Impact Factor, Ranking Journals and Researchers", "abstract": "Digital libraries, such as conference papers, journal documents, books and thesis, research patents, and experiments generate a vast amount of data, named as, Scholarly Big Data. It covers scholarly related information for both researcher's perspective as well as publisher's perspective, such as academic activities, author's demography, academic social networks, etc. The relationships among Big Scholarly Data can be worthy of solving researcher as well as journal related concerns, if they are prudently treated to extract knowledge. The best approach to efficiently process these relationships is the graph. However, with the rapid growth in the number of digital articles by various libraries, the relationships raise exponentially, generating large graphs, which have become increasingly challenging to be handled in order to analyze scholarly information. On the other hand, many researchers and publishers/journals have severe concerns about the ranking control mechanisms and the consideration of quantity rather than quality. 
Therefore, in this paper, we proposed graph based mechanisms to perform four critical decisions that are the need of the today's scholarly community. To improve the quality of the article, we proposed a mechanism for selecting and recommending suitable reviewers for a submitted paper based on researchers' expertise and their popularity in that particular field while avoiding conflict of interest. Also, due to shortcomings in the existing journal ranking approaches, we also designed a journal ranking mechanism including its new impact factor and relative ranking by using a modified version of traditional page ranking algorithm and excluding self authors citations as well as self journal citations. Similarly, researchers ranking is also important for various motives that is calculated based on the expert's field, citation count, and a number of publications while avoiding any loophole to increase the ranking such as, self citations and wrong citations. Also, to efficiently process big graphs generated by a massive number of scholarly related relationships, we proposed an architecture that uses the parallel processing mechanism of the Hadoop ecosystem over the real time analysis approach of Apache Spark with GraphX. Finally, the efficiency of the proposed system is evaluated in terms of processing time and throughput while implementing the designed decision mechanisms.", "venue": "IEEE Transactions on Emerging Topics in Computing", "year": 2021.0, "author_names": ["Muhammad Mazhar Ullah Rathore", "Malik junaid jami Gul", "Anand Paul", "Ashraf Ali Khan", "Raja Wasim Ahmad", "Joel J P C Rodrigues", "Spiridon Bakiras"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 44148914, "title": "Evaluate academic influence of research teams in scholarly data", "abstract": "This paper builds a network of author cooperation and cooperation in the field of complex networks based on journal articles of Web of Science Core Database from 2013 to 2017 as sources of information. Considering the two aspects of literature measurement and social network analysis, the research comprehensively adopts a number of indicators, identifies the research team, and evaluates the academic influence of the research team. When assigning the weight to the index, this study adopts the method of entropy, which avoids the disadvantage of subjective determination of weight to some extent. Finally, this study tests consistency of the evaluation index and the comprehensive rankings by Spearman's rank correlation. The results show that Yu Wenwu, Cao Jinde, Kurths Juergen led the team's composite score is the highest and most influential. It is of great significance to mine the invisible research teams based on scholarly data and to make a reasonable evaluation of the research teams. It not only inspires research teams to innovate, but also guides and encourages scholars to constantly adjust and calibrate their research direction so as to engage in more valuable scientific research activities.", "venue": "2018 IEEE 3rd International Conference on Big Data Analysis (ICBDA)", "year": 2018.0, "author_names": ["Kaiyu Liu", "Hong Qiao", "Mingchun Zheng"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 218471108, "title": "Panel: Peer Based Faculty Evaluation v. 
Student Evaluation of Teaching", "abstract": "Bargaining regarding faculty evaluation is challenging in an environment in which administrators throughout higher education have successfully imposed corporate style forms of evaluation and supervision that many have come to accept as normal, despite their incompatibility with principles of academic freedom and peer review. Student surveys of teaching are increasingly central to this management strategy, despite the growing body of evidence indicating bias against historically marginalized groups in student survey results. In our presentation we will discuss our 2016 contract negotiations at Dutchess Community College (SUNY) in Poughkeepsie, New York as a case study. During these negotiations the college administration sought to expand the use of \"student evaluations\" of teaching (SET) despite significant evidence that student feedback provides limited meaningful evaluative content concerning teaching and is shaped by gender, racial, and ethnic bias, as well as bias against academic rigor. Our presentation will briefly describe our effort to maintain a peer based evaluation of student survey data, including the published research we used during negotiations, and we will analyze the strengths and weaknesses of our approach and results. These results include: a successful effort to maintain the practice of limiting review of qualitative student feedback to peer based review between faculty and department chairs within academic departments limited but significant expansion of administrative oversight of some quantitative student survey data contract language that limits the use of student survey results in faculty evaluation contract language that requires that all consideration of these data shall be undertaken with the understanding that student feedback is an important but limited vehicle for understanding the effectiveness of an individual's teaching contract language that established an all faculty committee of full time and part time faculty that is charged with evaluating the new survey form and process Background: During the last contract negotiations undertaken between Dutchess United Educators (DUE) the union representing faculty and most professional staff at Dutchess Community College, and the L. Akins and L. Murphy, NCSCBHEP 2019 1 1 Akins and Murphy: Panel: Peer Based Faculty Evaluation v. Student Evaluation of Tea Published by The Keep, 2019 college administration, DUE negotiators were confronted with a demand to change the long established process of faculty evaluation and use of student surveys of teaching that would affect both full time and part time faculty. The decades old full time faculty evaluation process involved two reports produced by the faculty member's department chair and submitted to the Dean of Academic Affairs: (1) a classroom observation report and (2) a professional development report (PDR) that covered teaching effectiveness, student advisement, professional activities, and contributions to the department and college. Additionally, student surveys of teaching were administered on paper in all course sections every spring with results going to the faculty member's department chair and eventually returned to the faculty member. These surveys included statements to be rated on a Likert scale as well as opportunities for students to respond to reflective questions. 
Survey data was not compiled or quantified but were used by department chairs to inform their commentary on teaching effectiveness in the PDR and/or to generate conversations about teaching. No results were submitted to Academic Affairs. The demand from administration during negotiations was that survey data needed to be submitted to Academic Affairs to assure that student voice was clearly a part of the process of evaluating faculty. Even though the negotiations went on for two years (2014 2016) with a one year contract agreed to for 2015 2016 before eventually agreeing to a four year contract for 2016 2020, pressure concerning student surveys of teaching was a recurring theme. To address this concern, union and college negotiators agreed to research the matter and make evidence based decisions to resolve the disagreement. At the time, Dr. Akins served on the negotiating team for the Full time Faculty and Staff 2016 2020 contract and Dr. Murphy served on the negotiating team for the Part time Faculty and Staff 2016 2020 contract. For the purpose of collecting research and formulating our arguments for negotiations, a joint subcommittee on Faculty Evaluations made up of members of both the full time and part time teams was formed. Both Akins and Murphy served on this sub committee and led the research effort. Research: Our study of the research on student surveys quickly led to an ever expanding body of scholarship indicating that student surveys do not reliably measure the quality of teaching. While student opinions about their educational experiences are important and valuable, numerous studies demonstrate that students are not qualified to judge teaching effectiveness. In addition, research indicates that survey results are influenced by the gender, race, ethnicity, and perceived attractiveness of the instructor. For example, Anne Boring, Kellie Ottoboni, and Philip B. Stark write that \"SET are biased against female instructors by an amount that is large and statistically significant,\" and \"gender biases can be large enough to cause more effective instructors to get lower SET than less effective instructors.\" Boring, Ottoboni, and Stark argue that \"it is not L. Akins and L. Murphy, NCSCBHEP 2019 2 2 Journal of Collective Bargaining in the Academy, Vol. 0, Iss. 14 [2019] Art. 57 https:/thekeep.eiu.edu/jcba/vol0/iss14/57 possible to adjust for the bias, because it depends on so many factors.\" There is also evidence to 1 suggest that the academic rigor of a course, as well as a student's desire to take a course and how much prior knowledge student has about a subject impact survey results. Any implementation of 2 student surveys with low response rates, such as commonly occurs when surveys are delivered online, is statistically problematic, further distorting the data. 3 Based on our review of the available scholarship we concluded that our then present system of faculty evaluation utilized student surveys in a way that was best suited to provide faculty with the opportunity to gain what is valuable about student opinions expressed in surveys while minimizing damage to academic freedom and academic integrity, and minimizing discrimination against faculty who are rated lower on surveys for reasons that have nothing to do with their teaching effectiveness, or are even rated lower because they are effective teachers. A list of articles consulted for contract negotiations and subsequent Evaluation Committee work is provided at the end of this paper. 
Negotiations: As described in the previous section, for the purpose of negotiations we researched important questions about evaluations of teachers and the use of Student Evaluation of Teaching (SET) or, as we prefer to call it Student Surveys of Teaching (SST) This preferred name is what the form has been called historically at our campus and we believe \"survey\" better reflects its output. The negotiations process was based on an interest based bargaining (IBB) model without formal training for either side to assure we understood how to apply IBB principles fairly and equitably, which was a challenge to effective negotiations. Another challenge to effective negotiations was a silence agreement that limited our ability to mobilize faculty. Within this framework for discussion, however, there was agreement around the table that student voice had a role in evaluating the effectiveness of a faculty member. Our conflict was in the appropriate extent of that role and the process to include that voice as well as other relevant voices in the evaluation process. That conflict was not only with management but also with other faculty negotiators, which was further complicated by the silence agreement because it limited these discussions to a very small group of faculty. 1 Anne Boring, Kellie Ottoboni, and Philip B. Stark, \"Student evaluations of teaching (mostly) do not measure teaching effectiveness,\" ScienceOpen Research (2016) https:/www.scienceopen.com/document?vid=818d8ec0 5908 47d8 86b4 5dc38f04b23e 2 Stephen L. Benton and Kenneth R. Ryalls, \"Challenging Misconceptions About Student Ratings of Instruction\" The IDEA Center* IDEA IDEA paper #58, (April 2016) https:/www.ideaedu.org/Portals/0/Uploads/Documents/Challenging_Misconceptions_About_Student_Ratings_of_Instruction.pd f 3 Philip B. Stark and Richard Freishtat, \"An evaluation of course evaluations.\" ScienceOpen Research, (2014) https:/www.stat.berkeley.edu/~stark/Preprints/evaluations14.pdf L. Akins and L. Murphy, NCSCBHEP 2019 3 3 Akins and Murphy: Panel: Peer Based Faculty Evaluation v. Student Evaluation of Tea Published by The Keep, 2019 Our limited understanding of the IBB process meant that we would agree on a shared interest, separately perform research on the topic including data collection and analysis, share with the other side our research and analysis, then discuss our differences to find what we all could agree to. While we did a significant review of literature on the topic, as described above, the administration shared a limited number of articles which mostly focussed on outdated data and analysis or were crafted by organizations that benefit financially from \"quantifying\" student feedback. When confronted with evidence, including from the very same documents supplied by the administration, that the decades long trend outside of our institution to collect, quantify, and elevate the numerical significance of student feedback is problematic, the administration was not swayed. 
In negotiations, we focused on two conclusions from the research: (1) students are not effective evaluator", "venue": "", "year": 2019.0, "author_names": ["Leah M Akins", "Laura Murphy"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 49593963, "title": "Author based analysis of conference versus journal publication in computer science", "abstract": "Conference publications in computer science (CS) have attracted scholarly attention due to their unique status as a main research outlet, unlike other science fields where journals are dominantly used for communicating research findings. One frequent research question has been how different conference and journal publications are, considering an article as a unit of analysis. This study takes an author based approach to analyze the publishing patterns of 517,763 scholars who have ever published both in CS conferences and journals for the last 57 years, as recorded in DBLP. The analysis shows that the majority of CS scholars tend to make their scholarly debut, publish more articles, and collaborate with more coauthors in conferences than in journals. Importantly, conference articles seem to serve as a distinct channel of scholarly communication, not a mere preceding step to journal publications: coauthors and title words of authors across conferences and journals tend not to overlap much. This study corroborates findings of previous studies on this topic from a distinctive perspective and suggests that conference authorship in CS calls for more special attention from scholars and administrators outside CS who have focused on journal publications to mine authorship data and evaluate scholarly performance.", "venue": "J. Assoc. Inf. Sci. Technol.", "year": 2019.0, "author_names": ["Jinseok Kim"], "n_citations": 20, "n_key_citations": 0, "score": 0}, {"corpus_id": 221102579, "title": "Evaluation of Cycle Performance of End Plate Bolt Connections Based on Connection parameters", "abstract": "The precise study of connections performance in a steel structure is important. End plate connections are commonly used in steel structures. In this research, the cyclic behavior of cantilever beam connection to the endplate column under cyclic loading has been evaluated. In the calculations, all of the parameters affecting the behavior of this connection have been investigated and solutions are proposed to reach the maximum connection capacity. Two general models of connecting beams to different columns have been considered for the research with beams of 30 and 40 centimeters depth. The end plate thickness, end plate shape, column thickness, strength or the material of the bolt, necessity of using the continuity plate, double and gusset plate, are the parameters that were considered in the analyzes. According to the results of this study, the use of a thin end plate causes localized buckling in the plate. The use of continuity plates is very important and prevents local buckling in the panel zone. It is also necessary to use a double plate if the column web is thin. Also, the use of gussets improves the flexural strength of the section. Keywordscyclic behavior, bolt connections, end plate, continuity plate, ductility. 
IINTRODUCTION Due to the increasing use of steel structures and the extended application of bolt connections, a careful examination of the connections performance in a steel structure is important, and inaccuracy in the design and implementation of connections not only causes failure in the joint, but also has devastating effects on other members of the structure. The beams and end plate set with embedded holes are connected to the column flange usig high strength bolts in the workshop. There are two types of beam column flange connections with named four bolts and eight bolts. The four bolt connection is used for lower moment values and the eight bolt connection for more bending moment values. In this research, the bending behavior of flange connection under increasing cyclic loading is evaluated by changing the effective parameters such as the end plate shape and the bolt material. For this purpose, several numerical models are prepared using finite element software and the extent of the effect of parameters on the flange connection behavior is determined. The main objective of this research is to investigate the bending behavior of end plate connection in steel structures under incremental cyclic loading by Impact Factor Value 5.856 e ISSN: 2456 3463 International Journal of Innovations in Engineering and Science, Vol 5, No.8, 2020 www.ijies.net 24 varying parameters affecting the connection behavior, such as bolt, the use of stiffener (gusset) continuity plate and double plate. To this end, stress distribution in the connection components is investigated by changing the parameters and changing the position of the plastic joint in the beam and the bending capacity of the connection and the connection behavior based on hysteresis and rotational capacity and energy depletion. The results of this research can help to better design the flange bending connection. This study evaluates the cyclic behavior of beam column flange connection in steel frames using finite element analysis. The ability to simulate the nonlinear behavior of the connection provides very high precision. In this connection, the steel beam is connected to the end plate using groove weld in flanges and groove or fillet weld in web with a suitable control. The set of beams and end plates in which holes are embedded, is attached to the column flange using high strength bolts. Types of rigid connections used in steel structures include: rigid welded flange plate connection, rigid connection with straight welding of beam to column, rigid flange plate connection with 4 and 8 bolts, rigid end plate flange connection, flush and extended end plate connections, with and without stiffener. In the flush end plate connection, the dimensions of the end plate do not exceed the beam flange, but the extended end plate connection, the dimensions of the end plate should exceed beam flange so that the bolt can be placed in the outer part of the plate. The extended end plate connections can be with or without hardening between the tensile flange of the beam and the end plate on the web plate, which is used rigid connection of beam to column. IIPREVIOUS RESEARCH Grundy et al. in 1980 [1] studied various parameters affecting the performance of flange connections. In their experiments, they measured three parameters, including the bolt diameter, the plate thickness and the column stiffener. Tsai and Popov in 1990 [2] tested three types of bending flange connection under a cyclic load using finite element analysis. 
The results show that the flange plate connections have excellent energy dissipation capacity. Hejazi and Mehdad in 2009 [3] investigated the effects of reinforcing plates on the flange connection of beam to the column using finite element analysis. In this research, 6 numerical models were modeled using Ansys software. Based on the results obtained, the use of stiffener, especially triangular plates, significantly increases the ductility, rigidity and load bearing capacity of flanged connections. Joshi and colleagues in 2014 [4] modeled rigid flange connections using Abaqus software. The results of this study showed that, with increasing bolt diameter, the bending bearing capacity of the connection increases, and also the failure mode is located on the column flange. Rajeshkumar et al. In 2013 [5] investigated the rigid flange connection under fire induced heat. The modeling was performed using Abaqus finite element software different end plate thicknesses were evaluated. According to the results of this study, the increase in plate thickness, although increasing the flexural strength of the joint, increases the force in bolts and increases the risk of brittle fracture in the bolts. Balc In 2012 [6] investigated the beam connection to welded and bolted columns using numerical models. In this study, two samples of welded and bolted flange plates were analyzed using the finite element method in Abaqus software and the results were evaluated using laboratory data. Baei et al. In 2012 [7] presented a finite element modeling approach for numerical verification of the seismic performance of a rigid connection with bolted end plate, taking into account axial force. The results of the research showed that the presence of axial force affects the connection performance, so that the presence of the tensile force reduces the final bending strength of the joint. Ismail et al. In 2016 [8] examined the final performance of the end plate steel connections. The results show that the non stiffening model with flush end plate has a much lower bending strength than other models, and with increasing stiffening angle, the bending strength of the connection also increases. Saberi et.al In 2014,2016 [9] have explored comparison of bolted end plate and T stub connection Sensivity to component thickness and bolt diameter on cyclic behavior. The result showed that The bolted T stub connections are more sensitive to component thickness and bolt diameter rather than end plate connections.Moreover, they proposed using of post tensioned tendons for rehabilitation of mentioned weak connections. Dessouki et al. In 2013 [10] investigated the bending behavior of Ibeam to end plate connection in 2013. In this research, Impact Factor Value 5.856 e ISSN: 2456 3463 International Journal of Innovations in Engineering and Science, Vol 5, No.8, 2020 www.ijies.net 25 Ansys software was used to analyze the parameters such as beam depth, plate thickness, bolt diameter, bolt thread, bolt strength and stiffness. Ghassemieh et al. In 2012 [11] used an artificial neural network supra innovative algorithm to optimize the stiffened bending flange connection. Chen and Shi in 2016 [12] performed a finite elemental study on end plate bending connection for very high bending strength. 
In order to study the seismic behavior of bending frames with flange connections, a precise and effective finite element model under cyclic loading was prepared by Wang and colleagues in 2013 [13] study the effect of retrofit parameters on cyclic behavior of bolted connections with weak end plate. The analytical study is about the cyclic behavior of bolted connections with weak end plate retrofitted by welded haunches, the accuracy of modeling is verified by comparing the finite element model result with the experimental test results two specimens EP R And EPWP H15 D30 that are tested by saberi et .al In 2017 [14] under SAC cyclic load Since end plate and beam are connected with complete joint penetration (CJP) groove and fillet welds, they are considered to be continuous in the FE model by Saberi et.al. In 2011 [15] Saedi Daryan et. al. in 2012 [16] to consider interaction in contact surfaces of the connection, tangential behavior is defined by friction coulomb's coefficient of 0.3 and hard contact normal behavior. A pretension stress of 550Mpa is applied to all bolts. In this research, both extended and flush flange connections were modeled and analyzed. IIIRESEARCH METHODOLOGY In this study, using nonlinear analyzes and finite element, several examples beam column bolt connections with endplate have been investigated. In this regard, static nonlinear analysis was performed using Abaqus software. With reliable modeling results from comparison of numerical and experimental results, effective parameters in the design of connections, such as the number of bolts and their placement, as well as the presence of stiffening agents, were investigated. For this purpose, two general models, including (A, a beam of 30 centimeters depth) and (B, a beam of 40 centimeters depth) are considered. Each of A and B models are modeled and analyzed in different modes and their performance have been evaluated under cyclic loads.", "venue": "", "year": 2020.0, "author_names": ["Farzan Ekhlasi", "Hamid Khabbaz Saberi", "Vahid Saberi"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "transfer learning in computer vision", "session_id": 2060296050204267, "user_id": 2055640442831930, "candidates": [{"corpus_id": 209522368, "title": "Transfer learning in computer vision tasks: Remember where you come from", "abstract": "Abstract Fine tuning pre trained deep networks is a practical way of benefiting from the representation learned on a large database while having relatively few examples to train a model. This adjustment is nowadays routinely performed so as to benefit of the latest improvements of convolutional neural networks trained on large databases. Fine tuning requires some form of regularization, which is typically implemented by weight decay that drives the network parameters towards zero. This choice conflicts with the motivation for fine tuning, as starting from a pre trained solution aims at taking advantage of the previously acquired knowledge. Hence, regularizers promoting an explicit inductive bias towards the pre trained model have been recently proposed. This paper demonstrates the versatility of this type of regularizer across transfer learning scenarios. We replicated experiments on three state of the art approaches in image classification, image segmentation, and video analysis to compare the relative merits of regularizers. These tests show systematic improvements compared to weight decay. 
Our experimental protocol put forward the versatility of a regularizer that is easy to implement and to operate that we eventually recommend as the new baseline for future approaches to transfer learning relying on fine tuning.", "venue": "Image Vis. Comput.", "year": 2020.0, "author_names": ["Xuhong Li", "Yves Grandvalet", "Franck Davoine", "Jingchun Cheng", "Yin Cui", "Han Zhang", "Serge J Belongie", "Yi-Hsuan Tsai", "Ming-Hsuan Yang"], "n_citations": 14, "n_key_citations": 0, "score": 1}, {"corpus_id": 67855411, "title": "FixyNN: Efficient Hardware for Mobile Computer Vision via Transfer Learning", "abstract": "The computational demands of computer vision tasks based on state of the art Convolutional Neural Network (CNN) image classification far exceed the energy budgets of mobile devices. This paper proposes FixyNN, which consists of a fixed weight feature extractor that generates ubiquitous CNN features, and a conventional programmable CNN accelerator which processes a dataset specific CNN. Image classification models for FixyNN are trained end to end via transfer learning, with the common feature extractor representing the transfered part, and the programmable part being learnt on the target dataset. Experimental results demonstrate FixyNN hardware can achieve very high energy efficiencies up to 26.6 TOPS/W $4.81 \\times$ better than iso area programmable accelerator) Over a suite of six datasets we trained models via transfer learning with an accuracy loss of <1\\ resulting in up to 11.2 TOPS/W nearly $2 \\times$ more efficient than a conventional programmable CNN accelerator of the same area.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Paul N Whatmough", "Chuteng Zhou", "Patrick Hansen", "Shreyas Kolala Venkataramanaiah", "Jae-sun Seo", "Matthew Mattina"], "n_citations": 38, "n_key_citations": 2, "score": 0}, {"corpus_id": 219850485, "title": "FixyNN: Energy Efficient Real Time Mobile Computer Vision Hardware Acceleration via Transfer Learning", "abstract": "", "venue": "MLSys", "year": 2019.0, "author_names": ["Paul N Whatmough", "Chuteng Zhou", "Patrick Hansen", "Shreyas Kolala Venkataramanaiah", "Jae-sun Seo", "Matthew Mattina"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 116738023, "title": "Deep Convolutional Neural Networks with transfer learning for computer vision based data driven pavement distress detection", "abstract": "Abstract Automated pavement distress detection and classification has remained one of the high priority research areas for transportation agencies. In this paper, we employed a Deep Convolutional Neural Network (DCNN) trained on the 'big data' ImageNet database, which contains millions of images, and transfer that learning to automatically detect cracks in Hot Mix Asphalt (HMA) and Portland Cement Concrete (PCC) surfaced pavement images that also include a variety of non crack anomalies and defects. Apart from the common sources of false positives encountered in vision based automated pavement crack detection, a significantly higher order of complexity was introduced in this study by trying to train a classifier on combined HMA surfaced and PCC surfaced images that have different surface characteristics. 
A single layer neural network classifier (with 'adam' optimizer) trained on ImageNet pre trained VGG 16 DCNN features yielded the best performance.", "venue": "", "year": 2017.0, "author_names": ["Kasthurirangan Gopalakrishnan", "Siddhartha Kumar Khaitan", "Alok N Choudhary", "Ankit Agrawal"], "n_citations": 335, "n_key_citations": 5, "score": 0}, {"corpus_id": 216552893, "title": "Transfer learning for leveraging computer vision in infrastructure maintenance", "abstract": "Monitoring the technical condition of infrastructure is a crucial element to its maintenance. Currently, the applied methods are outdated, labour intensive and highly inaccurate. At the same time, the latest methods using Artificial Intelligence techniques, despite achieving satisfactory results in the detection of infrastructure damage, are severely limited in their application due to two main factors labour intensive gathering of new datasets and high demand for computing power. In the presented work, we propose to utilize Transfer Learning techniques and computer vision to overcome these limiting factor and fully harness the advantages of Artificial Intelligence methods. We describe a framework which enables hassle free development of unique infrastructure defects detectors on digital images, achieving the accuracy of above 90% The framework supports semi automatic creation of new datasets and has modest computing power requirements. It is implemented in the form of a ready to use software package distributed under an open software licence and available for the public. Thus, it can be used to immediately implement the methods proposed in this paper in the process of infrastructure management by government units, regardless of their financial capabilities. With the help of introduced framework it is possible to improve the efficiency of infrastructure management and the quality of its life cycle documentation globally, leading to a more accurate mapping of the processes taking place in the infrastructure's life cycle for better infrastructure planning in the future.", "venue": "ArXiv", "year": 2020.0, "author_names": ["Mateusz Zarski", "Bartosz W'ojcik", "Jaroslaw Adam Miszczak"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 221593825, "title": "An Innovative Approach of Textile Fabrics Identification from Mobile Images using Computer Vision based on Deep Transfer Learning", "abstract": "The identification of different textile fabrics is a task commonly learned in practice and, therefore, is considered a very strenuous and costly form of learning, causing annoyance to the individual who performs it. Based on this context, this paper proposes a new method for classifying textile fabrics, based on the development of a computer vision system using Convolutional Neural Network (CNN) CNN works as a feature extractor by incorporating the concept of Transfer Learning. Using Transfer Learning allows a pre trained CNN model to be reused for a new problem. In order to highlight the high performance of CNN, an analysis is performed with feature extractors established in the literature. Parameters such as Accuracy, F1 Score, and processing time are considered to evaluate the efficiency of the proposed approach. 
For the classification were used Bayesian Classifier, Multi layer Perceptron (MLP) k Nearest Neighbor (kNN) Random Forest (RF) and Support Vector Machine (SVM) The results show that the best combination is the CNN architecture DenseNet201 with SVM (RBF) obtaining an accuracy of 94% and F1 Score of 94.2%", "venue": "2020 International Joint Conference on Neural Networks (IJCNN)", "year": 2020.0, "author_names": ["Antonio Carlos da Silva Barros", "Elene Firmeza Ohata", "Suane Pires Pinheiro da Silva", "Jefferson Silva Almeida", "Pedro Pedrosa Reboucas"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 222143274, "title": "An Innovative Approach of Textile Fabrics Identification from Mobile Images using Computer Vision based on Deep Transfer Learning", "abstract": "The identification of different textile fabrics is a task commonly learned in practice and, therefore, is considered a very strenuous and costly form of learning, causing annoyance to the individual who performs it. Based on this context, this paper proposes a new method for classifying textile fabrics, based on the development of a computer vision system using Convolutional Neural Network (CNN) CNN works as a feature extractor by incorporating the concept of Transfer Learning. Using Transfer Learning allows a pre trained CNN model to be reused for a new problem. In order to highlight the high performance of CNN, an analysis is performed with feature extractors established in the literature. Parameters such as Accuracy, F1 Score, and processing time are considered to evaluate the efficiency of the proposed approach. For the classification were used Bayesian Classifier, Multi layer Perceptron (MLP) k Nearest Neighbor (kNN) Random Forest (RF) and Support Vector Machine (SVM) The results show that the best combination is the CNN architecture DenseNet201 with SVM (RBF) obtaining an accuracy of 94% and F1 Score of 94.2%", "venue": "IJCNN", "year": 2020.0, "author_names": ["Antonio Carlos da Silva Barros", "Elene Firmeza Ohata", "Suane Pires Pinheiro da Silva", "Jefferson Silva Almeida", "Pedro Pedrosa Reboucas Filho"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 216648453, "title": "Computer Vision and Deep Learning Techniques for the Analysis of Drone Acquired Forest Images, a Transfer Learning Study", "abstract": "Unmanned Aerial Vehicles (UAV) are becoming an essential tool for evaluating the status and the changes in forest ecosystems. This is especially important in Japan due to the sheer magnitude and complexity of the forest area, made up mostly of natural mixed broadleaf deciduous forests. Additionally, Deep Learning (DL) is becoming more popular for forestry applications because it allows for the inclusion of expert human knowledge into the automatic image processing pipeline. In this paper we study and quantify issues related to the use of DL with our own UAV acquired images in forestry applications such as: the effect of Transfer Learning (TL) and the Deep Learning architecture chosen or whether a simple patch based framework may produce results in different practical problems. 
We use two different Deep Learning architectures (ResNet50 and UNet) two in house datasets (winter and coastal forest) and focus on two separate problem formalizations (Multi Label Patch or MLP classification and semantic segmentation) Our results show that Transfer Learning is necessary to obtain satisfactory outcome in the problem of MLP classification of deciduous vs evergreen trees in the winter orthomosaic dataset (with a 9.78% improvement from no transfer learning to transfer learning from a a general purpose dataset) We also observe a further 2.7% improvement when Transfer Learning is performed from a dataset that is closer to our type of images. Finally, we demonstrate the applicability of the patch based framework with the ResNet50 architecture in a different and complex example: Detection of the invasive broadleaf deciduous black locust (Robinia pseudoacacia) in an evergreen coniferous black pine (Pinus thunbergii) coastal forest typical of Japan. In this case we detect images containing the invasive species with a 75% of True Positives (TP) and 9% False Positives (FP) while the detection of native trees was 95% TP and 10% FP.", "venue": "Remote. Sens.", "year": 2020.0, "author_names": ["Sarah Kentsch", "Maximo Larry Lopez Caceres", "Daniel Serrano", "Ferran Roure", "Yago Diez"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 221476012, "title": "Classification of Electroencephalogram Signals for Detecting Predisposition to Alcoholism using Computer Vision and Transfer Learning", "abstract": "Recent statistics have shown that the main difficulty in detecting alcoholism is the unreliability of the information presented by patients with addiction; this hampers early diagnosis and reduces the effectiveness of treatment. However, electroencephalogram (EEG) exams can contribute with more reliable data for this analysis. This paper proposes a new approach for the automatic diagnosis of patients with alcoholism. It offers a method for examining the EEG signals from a two dimensional perspective according to changes in the neural activity, highlighting the influence of high and low frequency signals. This approach combines Transfer Learning and Con volutional Neural Networks (CNN) to EEG signals analysis. 
The methodology to evaluate our proposal used 21 combinations of the classification traditional methods and 35 combinations of recent CNN architectures used as feature extractors combined with the following classical classifiers: Gaussian Naive Bayes, K Nearest Neighbor (k NN) Multilayer Perceptron (MLP) Random Forest (RF) and Support Vector Machine (SVM) CNN MobileNet combined with SVM achieved the best results in Accuracy (95.33% Precision (95.68% F1 Score (95.24% and Recall (95.00% This combination outperformed traditional methods by up to 8% Thus, this approach is applicable as a classification stage for computer aided diagnoses, useful for the triage of patients, and clinical support for the early diagnosis of this disease.", "venue": "2020 IEEE 33rd International Symposium on Computer Based Medical Systems (CBMS)", "year": 2020.0, "author_names": ["Francisco H S Silva", "Aldisio Goncalves Medeiros", "Elene Firmeza Ohata", "Pedro Pedrosa Reboucas Filho"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 221339124, "title": "British Sign Language Recognition via Late Fusion of Computer Vision and Leap Motion with Transfer Learning to American Sign Language", "abstract": "In this work, we show that a late fusion approach to multimodality in sign language recognition improves the overall ability of the model in comparison to the singular approaches of image classification (88.14% and Leap Motion data classification (72.73% With a large synchronous dataset of 18 BSL gestures collected from multiple subjects, two deep neural networks are benchmarked and compared to derive a best topology for each. The Vision model is implemented by a Convolutional Neural Network and optimised Artificial Neural Network, and the Leap Motion model is implemented by an evolutionary search of Artificial Neural Network topology. Next, the two best networks are fused for synchronised processing, which results in a better overall result (94.44% as complementary features are learnt in addition to the original task. The hypothesis is further supported by application of the three models to a set of completely unseen data where a multimodality approach achieves the best results relative to the single sensor method. When transfer learning with the weights trained via British Sign Language, all three models outperform standard random weight distribution when classifying American Sign Language (ASL) and the best model overall for ASL classification was the transfer learning multimodality approach, which scored 82.55% accuracy.", "venue": "Sensors", "year": 2020.0, "author_names": ["Jordan J Bird", "Aniko Ekart", "Diego Resende Faria"], "n_citations": 4, "n_key_citations": 0, "score": 0}]} -{"query": "Towards More Practical Adversarial Attacks on Graph Neural Networks", "session_id": 5350093546078525, "user_id": 3541393512395607, "candidates": [{"corpus_id": 225086593, "title": "Towards More Practical Adversarial Attacks on Graph Neural Networks", "abstract": "We study the black box attacks on graph neural networks (GNNs) under a novel and realistic constraint: attackers have access to only a subset of nodes in the network, and they can only attack a small number of them. A node selection step is essential under this setup. We demonstrate that the structural inductive biases of GNN models can be an effective source for this type of attacks. 
Specifically, by exploiting the connection between the backward propagation of GNNs and random walks, we show that the common gradient based white box attacks can be generalized to the black box setting via the connection between the gradient and an importance score similar to PageRank. In practice, we find attacks based on this importance score indeed increase the classification loss by a large margin, but they fail to significantly increase the mis classification rate. Our theoretical and empirical analyses suggest that there is a discrepancy between the loss and mis classification rate, as the latter presents a diminishing return pattern when the number of attacked nodes increases. Therefore, we propose a greedy procedure to correct the importance score that takes into account of the diminishing return pattern. Experimental results show that the proposed procedure can significantly increase the mis classification rate of common GNNs on real world data without access to model parameters nor predictions.", "venue": "NeurIPS", "year": 2020.0, "author_names": ["Jiaqi Ma", "Shuangrui Ding", "Qiaozhu Mei"], "n_citations": 15, "n_key_citations": 1, "score": 1}, {"corpus_id": 218517553, "title": "Adversarial Attacks on Graph Neural Networks", "abstract": "Deep learning models for graphs have achieved strong performance for the task of node classification. Despite their proliferation, little is known about their robustness to adversarial attacks. Yet, in domains where they are likely to be used, e.g. the web, adversaries are common. Can deep learning models for graphs be easily fooled? In this work, we present a study of adversarial attacks on attributed graphs, specifically focusing on models exploiting ideas of graph convolutions. In addition to attacks at test time, we tackle the more challenging class of poisoning/causative attacks, which focus on the training phase of a machine learning model. We generate adversarial perturbations targeting the node's features and the graph structure, thus, taking the dependencies between instances in account. Moreover, we ensure that the perturbations remain unnoticeable by preserving important data characteristics. To cope with the underlying discrete domain, we propose an efficient algorithm Nettack exploiting incremental computations. Our experimental study shows that accuracy of node classification significantly drops even when performing only few perturbations. Even more, our attacks are transferable: the learned attacks generalize to other state of the art node classification models and unsupervised approaches, and likewise are successful even when only limited knowledge about the graph is given. For the first time, we successfully identify important patterns of adversarial attacks on graph neural networks (GNNs) a first step towards being able to detect adversarial attacks on GNNs.", "venue": "ACM Trans. Knowl. Discov. Data", "year": 2020.0, "author_names": ["Daniel Zugner", "Oliver Borchert", "Amir Akbarnejad", "Stephan Gunnemann"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 211171800, "title": "Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks", "abstract": "Graph convolutional neural networks, which learn aggregations over neighbor nodes, have achieved great performance in node classification tasks. However, recent studies reported that such graph convolutional node classifier can be deceived by adversarial perturbations on graphs. 
Abusing graph convolutions, a node's classification result can be influenced by poisoning its neighbors. Given an attributed graph and a node classifier, how can we evaluate robustness against such indirect adversarial attacks? Can we generate strong adversarial perturbations which are effective on not only one hop neighbors, but more far from the target? In this paper, we demonstrate that the node classifier can be deceived with high confidence by poisoning just a single node even two hops or more far from the target. Towards achieving the attack, we propose a new approach which searches smaller perturbations on just a single node far from the target. In our experiments, our proposed method shows 99% attack success rate within two hops from the target in two datasets. We also demonstrate that $m $layer graph convolutional neural networks have chance to be deceived by our indirect attack within m hop neighbors. The proposed attack can be used as a benchmark in future defense attempts to develop graph convolutional neural networks with having adversary robustness.", "venue": "2019 IEEE International Conference on Big Data (Big Data)", "year": 2019.0, "author_names": ["Tsubasa Takahashi"], "n_citations": 9, "n_key_citations": 1, "score": 0}, {"corpus_id": 199442418, "title": "The General Black box Attack Method for Graph Neural Networks", "abstract": "With the great success of Graph Neural Networks (GNNs) towards representation learning on graph structure data, the robustness of GNNs against adversarial attack inevitably becomes a central problem in graph learning domain. Regardless of the fruitful progress, current works suffer from two main limitations: First, the attack method required to be developed case by case; Second, most of them are restricted to the white box attack. This paper promotes current frameworks in a more general and flexible sense we demand only one single method to attack various kinds of GNNs and this attacker is black box driven. To this end, we begin by investigating the theoretical connections between different kinds of GNNs in a principled way and integrate different GNN models into a unified framework, dubbed as General Spectral Graph Convolution. As such, a generalized adversarial attacker is proposed towards two families of GNNs: Convolution based model and sampling based model. More interestingly, our attacker does not require any knowledge of the target classifiers used in GNNs. Extensive experimental results validate the effectiveness of our method on several benchmark datasets. Particularly by using our attack, even small graph perturbations like one edge flip is able to consistently make a strong attack in performance to different GNN models.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Heng Chang", "Yu Rong", "Tingyang Xu", "Wenbing Huang", "Honglei Zhang", "Peng Cui", "Wenwu Zhu", "Junzhou Huang"], "n_citations": 6, "n_key_citations": 1, "score": 0}, {"corpus_id": 52182712, "title": "Towards Query Efficient Black box Attacks: An Input free Perspective", "abstract": "Recent studies have highlighted that deep neural networks (DNNs) are vulnerable to adversarial attacks, even in a black box scenario. However, most of the existing black box attack algorithms need to make a huge amount of queries to perform attacks, which is not practical in the real world. 
We note one of the main reasons for the massive queries is that the adversarial example is required to be visually similar to the original image, but in many cases, how adversarial examples look like does not matter much. It inspires us to introduce a new attack called input free attack, under which an adversary can choose an arbitrary image to start with and is allowed to add perceptible perturbations on it. Following this approach, we propose two techniques to significantly reduce the query complexity. First, we initialize an adversarial example with a gray color image on which every pixel has roughly the same importance for the target model. Then we shrink the dimension of the attack space by perturbing a small region and tiling it to cover the input image. To make our algorithm more effective, we stabilize a projected gradient ascent algorithm with momentum, and also propose a heuristic approach for region size selection. Through extensive experiments, we show that with only 1,701 queries on average, we can perturb a gray image to any target class of ImageNet with a 100% success rate on InceptionV3. Besides, our algorithm has successfully defeated two real world systems, the Clarifai food detection API and the Baidu Animal Identification API.", "venue": "AISec@CCS", "year": 2018.0, "author_names": ["Yali Du", "Meng Fang", "Jinfeng Yi", "Jun Cheng", "Dacheng Tao"], "n_citations": 12, "n_key_citations": 0, "score": 0}, {"corpus_id": 214807853, "title": "RoLMA: A Practical Adversarial Attack Against Deep Learning Based LPR Systems", "abstract": "With the advances of deep learning, license plate recognition (LPR) based on deep learning has been widely used in public transport such as electronic toll collection, car parking management and law enforcement. Deep neural networks are proverbially vulnerable to crafted adversarial examples, which has been proved in many applications like object recognition, malware detection, etc. However, it is more challenging to launch a practical adversarial attack against LPR systems as any covering or scrawling to license plate is prohibited by law. On the other hand, the created perturbations are susceptible to the surrounding environment including illumination conditions, shooting distances and angles of LPR systems. 
To this end, we propose the first practical adversarial attack, named as RoLMA, against deep learning based LPR systems. We adopt illumination technologies to create a number of light spots as noises on the license plate, and design targeted and non targeted strategies to find out the optimal adversarial example against HyperLPR, a state of the art LPR system. We physicalize these perturbations on a real license plate by virtue of generated adversarial examples. Extensive experiments demonstrate that RoLMA can effectively deceive HyperLPR with an 89.15% success rate in targeted attacks and 97.3% in non targeted attacks. Moreover, our experiments also prove its high practicality with a 91.43% success rate towards physical license plates, and imperceptibility with around 93.56% of investigated participants being able to correctly recognize license plates.", "venue": "Inscrypt", "year": 2019.0, "author_names": ["Mingming Zha", "Guozhu Meng", "Chaoyang Lin", "Zhe Zhou", "Kai Chen"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 209315004, "title": "SEMANTICADV: GENERATING ADVERSARIAL EXAM", "abstract": "Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples which are manipulated instances targeting to mislead DNNs to make incorrect predictions. Currently, most such adversarial examples try to guarantee \"subtle perturbation\" by limiting the Lp norm of the perturbation. In this paper, we aim to explore the impact of semantic manipulation on DNNs predictions by manipulating the semantic attributes of images and generate \"unrestricted adversarial examples\" Such semantic based perturbation is more practical compared with the Lp bounded perturbation. In particular, we propose an algorithm SemanticAdv which leverages disentangled semantic factors to generate adversarial perturbation by altering controlled semantic attributes to fool the learner towards various \"adversarial\" targets. We conduct extensive experiments to show that the semantic based adversarial examples can not only fool different learning tasks such as face verification and landmark detection, but also achieve high targeted attack success rate against real world black box services such as Azure face verification service based on transferability. To further demonstrate the applicability of SemanticAdv beyond face recognition domain, we also generate semantic perturbations on street view images. Such adversarial examples with controlled semantic manipulation can shed light on further understanding about vulnerabilities of DNNs as well as potential defensive approaches.", "venue": "", "year": 2019.0, "author_names": ["Ples Via"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 4731502, "title": "Adaptive Spatial Steganography Based on Probability Controlled Adversarial Examples", "abstract": "Deep learning model is vulnerable to adversarial attack, which generates special input sample to the deep learning model that can make the model misclassify the sample. Besides deep learning model, adversarial attack is also effective to feature based machine learning model. In this paper, we discuss the application of adversarial attack for improving the capability to resist steganalysis of steganographic schemes. We apply the steganalytic neural network as the adversarial generator. 
Our goal is to improve the performance of typical spatial adaptive steganography towards the steganalysis of rich model feature and neural network. The adversarial method can be combined with adaptive steganography by controlling the flipping directions of pixels following the gradient map. The steganographic adversarial example can make itself \"seems like\" innocent cover towards steganalyzer. However, the generated steganographic adversarial examples are only effective to deceive the steganalyzer trained with non adversarial examples. When facing the steganalyzer trained with adversarial examples, they can be easily detected with a low detecting error rate. The adversarial method makes the generated stego image more distinguishable from the cover image. To improve this situation, we adjust the method to calculate the gradient map by modifying the probability vector of softmax layer to a particular vector rather than modifying the category vector. Therefore, the generated adversarial example is controlled by probability output of softmax. With the adjustment, the adversarial scheme performs better than the typical adaptive steganography. We develop an practical adversarial steganographic method with double layered STC. The experiment proves its effectiveness on rich model and neural network.", "venue": "ArXiv", "year": 2018.0, "author_names": ["Sai Ma", "Qingxiao Guan", "Xianfeng Zhao", "Yaqi Liu"], "n_citations": 6, "n_key_citations": 1, "score": 0}, {"corpus_id": 213175598, "title": "MINT: Deep Network Compression via Mutual Information based Neuron Trimming", "abstract": "Most approaches to deep neural network compression via pruning either directly evaluate a filter's importance using its weights or optimize an alternative objective function with sparsity constraints. While these methods offer a useful way to approximate contributions from similar filters, they often either ignore the dependency between layers or solve a more difficult optimization objective than standard cross entropy. Our method, Mutual Information based Neuron Trimming (MINT) approaches deep compression via pruning by enforcing sparsity based on the strength of the dependency between filters of adjacent layers, across every pair of layers in the network. The dependency is calculated using conditional geometric mutual information which evaluates the amount of similar information exchanged between filters using a graph based criterion. When pruning a network, we ensure that retained filters contribute the majority of the information towards succeeding layers which ensures high performance. Our novel approach is highly competitive with existing state of the art compression via pruning methods on standard benchmarks for this task: MNIST, CIFAR 10, and ILSVRC2012, across a variety of network architectures despite using only a single retraining pass. Also, we discuss our observations of a common denominator between our pruning methodology's response to adversarial attacks and calibration statistics when compared to the original network.", "venue": "2020 25th International Conference on Pattern Recognition (ICPR)", "year": 2021.0, "author_names": ["Madan Ravi Ganesh", "Jason J Corso", "Salimeh Yasaei Sekeh"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 151361297, "title": "Combating Attacks and Abuse in Large Online Communities", "abstract": "Author(s) Wang, Gang Advisor(s) Zhao, Ben Y; Zheng, Haitao Abstract: Internet users today are connected more widely and ubiquitously than ever before. 
As a result, various online communities are formed, ranging from online social networks (Facebook, Twitter) to mobile communities (Foursquare, Waze) to content/interests based networks (Wikipedia, Yelp, Quora) While users are benefiting from the ease of access to information and social interactions, there is a growing concern for users' security and privacy against various attacks such as spam, phishing, malware infection and identity theft. Combating attacks and abuse in online communities is challenging. First, today's online communities are increasingly dependent on users and user generated content. Securing online systems demands a deep understanding of the complex and often unpredictable human behaviors. Second, online communities can easily have millions or even billions of users, which requires the corresponding security mechanisms to be highly scalable. Finally, cybercriminals are constantly evolving to launch new types of attacks. This further demands high robustness of security defenses. In this thesis, we take concrete steps towards measuring, understanding, and defending against attacks and abuse in online communities. We begin with a series of empirical measurements to understand user behaviors in different online services and the uniquesecurity and privacy challenges that users are facing with. This effort covers a broad set of popular online services including social networks for question and answering (Quora) anonymous social networks (Whisper) and crowdsourced mobile communities (Waze) Despite the differences of specific online communities, our study provides a first look at their user activity patterns based on empirical data, and reveals the need for reliable mechanisms to curate user content, protect privacy, and defend against emerging attacks. Next, we turn our attention to attacks targeting online communities, with focus on spam campaigns. While traditional spam is mostly generated by automated software, attackers today start to introduce \"human intelligence\" to implement attacks. This is maliciouscrowdsourcing (or crowdturfing) where a large group of real users are organized to carry out malicious campaigns, such as writing fake reviews or spreading rumors on social media. Using collective human efforts, attackers can easily bypass many existing defenses (e.g.,CAPTCHA) To understand the ecosystem of crowdturfing, we first use measurements to examine their detailed campaign organization, workers and revenue. Based on insights from empirical data, we develop effective machine learning classifiers to detect crowdturfingactivities. In the meantime, considering the adversarial nature of crowdturfing, we also build practical adversarial models to simulate how attackers can evade or disrupt machine learning based defenses. To aid in this effort, we next explore using user behavior models to detect a wider range of attacks. Instead of making assumptions about attacker behavior, our idea is to model normal user behaviors and capture (malicious) behaviors that are deviated from norm. In this way, we can detect previously unknown attacks. Our behavior model is based on detailed clickstream data, which are sequences of click events generated by users when using the service. We build a similarity graph where each user is a node and the edges are weightedby clickstream similarity. By partitioning this graph, we obtain \"clusters\" of users with similar behaviors. We then use a small set of known good users to \"color\" these clusters to differentiate the malicious ones. 
This technique has been adopted by real world social networks (Renren and LinkedIn) and already detected unexpected attacks. Finally, we extend clickstream model to understanding more grained behaviors of attackers (and real users) and tracking how user behavior changes over time. In summary, this thesis illustrates a data driven approach to understanding and defending against attacks and abuse in online communities. Our measurements have revealed new insights about how attackers are evolving to bypass existing security defenses today. Inaddition, our data driven systems provide new solutions for online services to gain a deep understanding of their users, and defend them from emerging attacks and abuse.", "venue": "", "year": 2016.0, "author_names": ["Gang Wang"], "n_citations": 2, "n_key_citations": 0, "score": 0}]} -{"query": "direct policy optimization", "session_id": 5023878665266933, "user_id": 620941871913435, "candidates": [{"corpus_id": 223953572, "title": "Direct Policy Optimization Using Deterministic Sampling and Collocation", "abstract": "We present an approach for approximately solving discrete time stochastic optimal control problems by combining direct trajectory optimization, deterministic sampling, and policy optimization. Our feedback motion planning algorithm uses a quasi Newton method to simultaneously optimize a reference trajectory, a set of deterministically chosen sample trajectories, and a parameterized policy. We demonstrate that this approach exactly recovers LQR policies in the case of linear dynamics, quadratic objective, and Gaussian disturbances. We also demonstrate the algorithm on several nonlinear, underactuated robotic systems to highlight its performance and ability to handle control limits, safely avoid obstacles, and generate robust plans in the presence of unmodeled dynamics.", "venue": "IEEE Robotics and Automation Letters", "year": 2021.0, "author_names": ["Taylor A Howell", "Chunjiang Fu", "Zachary Manchester"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 189897944, "title": "Direct Policy Gradients: Direct Optimization of Policies in Discrete Action Spaces", "abstract": "Direct optimization is an appealing framework that replaces integration with optimization of a random objective for approximating gradients in models with discrete random variables. A$\\star$ sampling is a framework for optimizing such random objectives over large spaces. We show how to combine these techniques to yield a reinforcement learning algorithm that approximates a policy gradient by finding trajectories that optimize a random objective. We call the resulting algorithms \"direct policy gradient\" (DirPG) algorithms. A main benefit of DirPG algorithms is that they allow the insertion of domain knowledge in the form of upper bounds on return to go at training time, like is used in heuristic search, while still directly computing a policy gradient. We further analyze their properties, showing there are cases where DirPG has an exponentially larger probability of sampling informative gradients compared to REINFORCE. We also show that there is a built in variance reduction technique and that a parameter that was previously viewed as a numerical approximation can be interpreted as controlling risk sensitivity. 
Empirically, we evaluate the effect of key degrees of freedom and show that the algorithm performs well in illustrative domains compared to baselines.", "venue": "NeurIPS", "year": 2020.0, "author_names": ["Guy Lorberbom", "Chris J Maddison", "Nicolas Manfred Otto Heess", "Tamir Hazan", "Daniel Tarlow"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 211075805, "title": "Convergence Guarantees of Policy Optimization Methods for Markovian Jump Linear Systems", "abstract": "Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. In this paper, we investigate the convergence of policy optimization for quadratic control of Markovian jump linear systems (MJLS) First, we study the optimization landscape of direct policy optimization for MJLS, and, in particular, show that despite the non convexity of the resultant problem the unique stationary point is the global optimal solution. Next, we prove that the Gauss Newton method and the natural policy gradient method converge to the optimal state feedback controller for MJLS at a linear rate if initialized at a controller which stabilizes the closed loop dynamics in the mean square sense. We propose a novel Lyapunov argument to fix a key stability issue in the convergence proof. Finally, we present a numerical example to support our theory. Our work brings new insights for understanding the performance of policy learning methods on controlling unknown MJLS.", "venue": "2020 American Control Conference (ACC)", "year": 2020.0, "author_names": ["Joao P Jansch-Porto", "Bin Hu", "Geir E Dullerud"], "n_citations": 20, "n_key_citations": 1, "score": 0}, {"corpus_id": 3841474, "title": "Bayesian Optimization with Automatic Prior Selection for Data Efficient Direct Policy Search", "abstract": "One of the most interesting features of Bayesian optimization for direct policy search is that it can leverage priors (e.g. from simulation or from previous tasks) to accelerate learning on a robot. In this paper, we are interested in situations for which several priors exist but we do not know in advance which one fits best the current situation. We tackle this problem by introducing a novel acquisition function, called Most Likely Expected Improvement (MLEI) that combines the likelihood of the priors and the expected improvement. We evaluate this new acquisition function on a transfer learning task for a 5 DOF planar arm and on a possibly damaged, 6 legged robot that has to learn to walk on flat ground and on stairs, with priors corresponding to different stairs and different kinds of damages. Our results show that MLEI effectively identifies and exploits the priors, even when there is no obvious match between the current situations and the priors.", "venue": "2018 IEEE International Conference on Robotics and Automation (ICRA)", "year": 2018.0, "author_names": ["Remi Pautrat", "Konstantinos Chatzilygeroudis", "Jean-Baptiste Mouret"], "n_citations": 29, "n_key_citations": 2, "score": 0}, {"corpus_id": 227151643, "title": "Policy Optimization for Markovian Jump Linear Quadratic Control: Gradient Based Methods and Global Convergence", "abstract": "Recently, policy optimization for control purposes has received renewed attention due to the increasing interest in reinforcement learning. 
In this paper, we investigate the global convergence of gradient based policy optimization methods for quadratic optimal control of discrete time Markovian jump linear systems (MJLS) First, we study the optimization landscape of direct policy optimization for MJLS, with static state feedback controllers and quadratic performance costs. Despite the non convexity of the resultant problem, we are still able to identify several useful properties such as coercivity, gradient dominance, and almost smoothness. Based on these properties, we show global convergence of three types of policy optimization methods: the gradient descent method; the Gauss Newton method; and the natural policy gradient method. We prove that all three methods converge to the optimal state feedback controller for MJLS at a linear rate if initialized at a controller which is mean square stabilizing. Some numerical examples are presented to support the theory. This work brings new insights for understanding the performance of policy gradient methods on the Markovian jump linear quadratic control problem.", "venue": "ArXiv", "year": 2020.0, "author_names": ["Joao P Jansch-Porto", "Bin Hu", "Geir E Dullerud"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 224819491, "title": "Iterative Amortized Policy Optimization", "abstract": "Policy networks are a central feature of deep reinforcement learning (RL) algorithms for continuous control, enabling the estimation and sampling of high value actions. From the variational inference perspective on RL, policy networks, when employed with entropy or KL regularization, are a form of amortized optimization, optimizing network parameters rather than the policy distributions directly. However, this direct amortized mapping can empirically yield suboptimal policy estimates. Given this perspective, we consider the more flexible class of iterative amortized optimizers. We demonstrate that the resulting technique, iterative amortized policy optimization, yields performance improvements over conventional direct amortization methods on benchmark continuous control tasks.", "venue": "ArXiv", "year": 2020.0, "author_names": ["Joseph Marino", "Alexandre Piche", "Alessandro Davide Ialongo", "Yisong Yue"], "n_citations": 5, "n_key_citations": 1, "score": 0}, {"corpus_id": 5564303, "title": "Sequential Classification Based Optimization for Direct Policy Search", "abstract": "Direct policy search often results in high quality policies in complex reinforcement learning problems, which employs some optimization algorithms to search the parameters of the policy for maximizing the its total reward. Classificationbased optimization is a recently developed framework for derivative free optimization, which has shown to be effective and efficient for non convex optimization problems with many local optima, and may provide a power optimization tool for direct policy search. However, this framework requires to sample a batch of solutions for every update of the search model, while in reinforcement learning, the environment often offers only sequential policy evaluation. Thus the classification based optimization may not efficient for direct policy search, where solutions have to be sampled sequentially. In this paper, we adapt the classification based optimization for sequential sampled solutions by forming the sample batch via reusing historical solutions. 
Experiments on a helicopter hovering task and controlling tasks in OpenAI Gym show that the new algorithm significantly improve the performance from several state of the art derivative free optimization approaches.", "venue": "AAAI", "year": 2017.0, "author_names": ["Yi-Qi Hu", "Hong Qian", "Yang Yu"], "n_citations": 24, "n_key_citations": 8, "score": 0}, {"corpus_id": 32749750, "title": "Parallel Nonstationary Direct Policy Search for Risk Averse Stochastic Optimization", "abstract": "This paper presents an algorithmic strategy to nonstationary policy search for finite horizon, discrete time Markovian decision problems with large state spaces, constrained action sets, and a risk sensitive optimality criterion. The methodology relies on modeling time variant policy parameters by a nonparametric response surface model for an indirect parametrized policy motivated by Bellman's equation. The policy structure is heuristic when the optimization of the risk sensitive criterion does not admit a dynamic programming reformulation. Through the interpolating approximation, the level of nonstationarity of the policy, and consequently, the size of the resulting search problem can be adjusted. The computational tractability and the generality of the approach follow from a nested parallel implementation of derivative free optimization in conjunction with Monte Carlo simulation. We demonstrate the efficiency of the approach on an optimal energy storage charging problem, and illustrate the effect of the.", "venue": "INFORMS J. Comput.", "year": 2017.0, "author_names": ["Somayeh Moazeni", "Warrren B Powell", "Boris Defourny", "Belgacem Bouzaiene-Ayari"], "n_citations": 9, "n_key_citations": 5, "score": 0}, {"corpus_id": 116057571, "title": "Contextual Direct Policy Search", "abstract": "Stochastic search and optimization techniques are used in a vast number of areas, ranging from refining the design of vehicles, determining the effectiveness of new drugs, developing efficient strategies in games, or learning proper behaviors in robotics. However, they specialize for the specific problem they are solving, and if the problem's context slightly changes, they cannot adapt properly. In fact, they require complete re leaning in order to perform correctly in new unseen scenarios, regardless of how similar they are to previous learned environments. Contextual algorithms have recently emerged as solutions to this problem. They learn the policy for a task that depends on a given context, such that widely different contexts belonging to the same task are learned simultaneously. That being said, the state of the art proposals of this class of algorithms prematurely converge, and simply cannot compete with algorithms that learn a policy for a single context. We describe the Contextual Relative Entropy Policy Search (CREPS) algorithm, which belongs to the before mentioned class of contextual algorithms. We extend it with a technique that allows the algorithm to severely increase its performance, and we call it Contextual Relative Entropy Policy Search with Covariance Matrix Adaptation (CREPS CMA) We propose two variants, and demonstrate their behavior in a set of classic contextual optimization problems, and on complex simulator robot tasks.", "venue": "J. Intell. 
Robotic Syst.", "year": 2019.0, "author_names": ["Abbas Abdolmaleki", "David Apolinario Simoes", "Nuno Lau", "Luis Paulo Reis", "Gerhard Neumann"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 4902958, "title": "Using direct policy search to identify robust strategies in adapting to uncertain sea level rise and storm surge", "abstract": "Sea level rise poses considerable risks to coastal communities, ecosystems, and infrastructure. Decision makers are faced with uncertain sea level projections when designing a strategy for coastal adaptation. The traditional methods are often silent on tradeoffs as well as the effects of tail area events and of potential future learning. Here we reformulate a simple sea level rise adaptation model to address these concerns. We show that Direct Policy Search yields improved solution quality, with respect to Pareto dominance in the objectives, over the traditional approach under uncertain sea level rise projections and storm surge. Additionally, the new formulation produces high quality solutions with less computational demands than an intertemporal optimization approach. Our results illustrate the utility of multi objective adaptive formulations for the example of coastal adaptation and point to wider ranging application in climate change adaptation decision problems.", "venue": "Environ. Model. Softw.", "year": 2018.0, "author_names": ["Gregory Garner", "Klaus Keller"], "n_citations": 14, "n_key_citations": 0, "score": 0}]} -{"query": "How Software Platforms Drive Innovation and Transform Industries.", "session_id": 4169469681958080, "user_id": 6706012333891098, "candidates": [{"corpus_id": 109112539, "title": "Invisible Engines: How Software Platforms Drive Innovation and Transform Industries", "abstract": "Winner of the Business, Management Accounting category in the 2006 Professional/Scholarly Publishing Annual Awards Competition presented by the Association of American Publishers, Inc. Software platforms are the invisible engines that have created, touched, or transformed nearly every major industry for the past quarter century. They power everything from mobile phones and automobile navigation systems to search engines and web portals. They have been the source of enormous value to consumers and helped some entrepreneurs build great fortunes. And they are likely to drive change that will dwarf the business and technology revolution we have seen to this point. Invisible Engines examines the business dynamics and strategies used by firms that recognize the transformative power unleashed by this new revolutiona revolution that will change both new and old industries. The authors argue that in order to understand the successes of software platforms, we must first understand their role as a technological meeting ground where application developers and end users converge. Apple, Microsoft, and Google, for example, charge developers little or nothing for using their platforms and make most of their money from end users; Sony PlayStation and other game consoles, by contrast, subsidize users and make more money from developers, who pay royalties for access to the code they need to write games. More applications attract more users, and more users attract more applications. And more applications and more users lead to more profits. Invisible Engines explores this story through the lens of the companies that have mastered this platform balancing act. 
It offers detailed studies of the personal computer, video game console, personal digital assistant, smart mobile phone, and digital media software platform industries, focusing on the business decisions made by industry players to drive profits and stay a step ahead of the competition. Shorter discussions of Internet based software platforms provide an important glimpse into a future in which the way we buy, pay, watch, listen, learn, and communicate will change forever. An electronic version of this book is available under a Creative Commons license.", "venue": "", "year": 2006.0, "author_names": ["David S Evans", "Andrei Hagiu", "Richard Schmalensee"], "n_citations": 425, "n_key_citations": 30, "score": 1}, {"corpus_id": 109112945, "title": "Book Review: Evans, Hagiu Schmalensee, Invisible Engines: How Software Platforms Drive Innovation and Transform Industries", "abstract": "This is a review of the book Invisible Engines: How Software Platforms Drive Innovation and Transform Industries by Evans, Hagiu Schmalensee.What makes the PlayStation 3 tick? The Apple iPod? Your BlackBerry? Software, or, more precisely and much more interestingly a software platform makes the hardware sing and sits in the middle of a business ecosystem of users, hardware makers and software developers. An invisible engine.The book centers on software platforms, one example of a two sided market. These are old markets newspapers, for example but many new markets are organized around these software platforms. The core question in two sided markets is open or closed? That turns importantly (but not necessarily decisively) on the pricing approach of the platform. If the platform is sold at a substantial loss, money has to be made somewhere. It is hard to sell the platform at a loss and open it fully for third parties. The platform either needs to be bundled with something else cell phones sold below cost bundled with service plans or the platform needs to be locked and participants need to be charged for unlocking it.Multi sided markets are rich places, and we need to master new rules of the road to navigate there. Our simple one market understandings will not map easily to this new, richer space. Invisible Engines sets all of this is out in a comprehensive and interesting way. If you are ready to jump in to better understand these markets, Invisible Engines is a very good place to start.", "venue": "", "year": 2008.0, "author_names": ["Randal C Picker"], "n_citations": 15, "n_key_citations": 1, "score": 0}, {"corpus_id": 167973746, "title": "Catalyst Code: The Strategies behind the World's Most Dynamic Companies by David S. Evans and Richard Schmalensee and Invisible Engines: How Software Platforms Drive Innovation and Transform Industries by David S. Evans, Andrei Hagiu, and Richard Schmalensee", "abstract": "", "venue": "", "year": 2008.0, "author_names": ["George Augustus Castellion"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 106424678, "title": "Book Review: Invisible Engines: How Software Platforms Drive Innovation and Transform Industries by David S. Evans, Andrei Hagiu, and Richard Schmalensee. Cambridge, Massachusetts: MIT Press, 2006.", "abstract": "", "venue": "", "year": 2008.0, "author_names": ["Ray Bert"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 167529320, "title": "Lean Product and Process Development by Allen C. 
Ward", "abstract": "Books reviewed in this issue: #Lean Product and Process Development #Open Business Models: How to Thrive in the New Innovation Landscape #Putting Hope to Work: Five Principles to Activate Your Organization's Most Powerful Resource #Catalyst Code: The Strategies behind the World's Most Dynamic Companies #Invisible Engines: How Software Platforms Drive Innovation and Transform Industries", "venue": "", "year": 2008.0, "author_names": ["Donald G Reinertsen"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 201041803, "title": "Solving the Global Continual Improvement and Innovation Challenge: How an Effective Pharmaceutical Quality System Can Transform Post Approval Change Management", "abstract": "Post approval changes are inevitable and necessary throughout the life of a drug product to implement new knowledge, maintain a state of control, and drive continual improvement. Many of these post approval changes require regulatory agency approval by individual countries before implementation. Because of the global regulatory complexity, individual post approval changes usually take years for full worldwide approval even when they reduce patient risk, improve compliance, or enhance the manufacturing process or test methods. This global complexity slows down continual improvement and innovation and can cause drug shortages and current good manufacturing practices compliance issues. Manufacturers that market products globally experience the greatest challenge and risks in their daily operations because of this post approval change complexity. A global problem needs a global solution. Quality leaders speaking globally with \"One Voice of Quality\" are essential for solving this difficult problem. This concept paper has been developed under the sponsorship of a group of Chief Quality Officers (Heads of Quality) from >25 global pharmaceutical companies and has been endorsed by the Parenteral Drug Association. The intent of this concept paper is to develop and implement aligned, standard solutions within the industry, leveraging the core foundation of the pharmaceutical quality system, such that a transformational shift can be achieved with faster implementation of new knowledge, continual improvement, and innovation through post approval changes. LAY ABSTRACT: Pharmaceutical manufacturers must make changes to their products and manufacturing processes over time as they incorporate new technology and new information. Because drug products are highly regulated in every country, the manufacturer must often contact national drug regulators before making these changes, even if the manufacturer is confident, based on testing and process controls, that the change will not have a negative impact on product quality or patient safety. For a product that is globally marketed, the manufacturer may have to contact dozens of regulators before making a change. 
This paper suggests that industry work together to identify ways to demonstrate to regulators that product and process knowledge as well as pharmaceutical quality systems are strong enough that the manufacturers should be allowed to manage some post approval changes themselves.", "venue": "PDA Journal of Pharmaceutical Science and Technology", "year": 2019.0, "author_names": ["Anders Lerbech Vinther", "Emma Ramnarine"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 214799071, "title": "Recent Trends in Innovation and Business Models in the New Digital Economy", "abstract": "In this paper, we present a model that shows how the new innovation environment defined by co creation experiments within a value chain and supported by e networks connecting suppliers, partners and customers at world level is changing business models. Due to the ongoing convergence of products, industries and technologies at world level, innovation capacity is rapidly improving. To succeed in this new environment for innovation, multinationals will have to rely on an \"ecosystem\" (M.Iansiti and R.Levien, 2004) made of external competencies and bear on all participants in the value chain from customers to suppliers, partners and universities R&D departments. CONVERGENCE OF PRODUCTS, TECHNOLOGIES AND INDUSTRIES AS THE SOURCE OF NEW WAYS FOR GENERATING INNOVATION The convergence of technologies, industries and markets is transforming the process and even the meaning of innovation (Prahald and Romaswamy, 2003) The digital era is remodelling the industries boundaries and even the frontiers of products and services. Before the Internet revolution, education, communication and leisure markets for example, used to be linked to very specific industries such as electronic (including TV, audio and video products) computer (PCs, laptops and video consoles) communication devices (phones, pages) software, movies and music industries (Prahald and Romaswamy, 2003) Each industry had well defined competitors. A computer was different from a mobile phone, and a mobile phone was different from a music player. As the digital economy unfolds, the frontiers between computers, mobile devices, video cameras, and music players tend to disappear. A mobile phone is no more a mobile communication tool, but is used as an Internet access and music download device. Diagram 1. New trends in market innovation: main components New technologies and the ability to bundle services on competing infrastructures, are driving this convergence and transforming many industries. The digital technology is eroding the traditional space of products and reshaping the solution space into an experience space including new ways of creating value and generating innovation. The product space is defined by the traditional way of innovating and producing goods and services (technological features generated by the firm and its base of suppliers) whereas the solution space is characterized by the recent trend of bundling sets of interconnected products and services to customers. On the other hand, networks effects and customers' communities form the new emerging experience space. Moreover, innovation includes changes not only in the products technology or features, but also encompasses changes in processes, organizations or business models. 
Furthermore, disrupting factors in any competitive landscape, due to deregulation, opening of borders and new customers' behaviour or new commercial channels, naturally lead to the new experience space involving more and more stakeholders. The model depicted in diagram 1 illustrates these new trends. THE NEED OF FLEXIBILITY, MASS CUSTOMISATION AND RESPONSIVENESS Companies are currently facing the strategic challenge of improving their customer relationship management as on line technology makes it possible to customize products and also makes both individual and business customers much more informed than they were not so long ago about prices, quality and other dimensions of goods and services. As a consequence, to survive, businesses can no longer keep the customers outside their core innovation process. Thus, customer retention and strategic competitiveness required redefining business models based on a close relationship with the customers to make them active partners within the innovation and marketing process. THE CO CREATION OF VALUE Information based economy has encouraged the trend of coo petition between all the industries. Competitors work in a network of shared resources and competencies for a better performance. Information networks are including competitors, who behave as co operators in a coproduction and co innovation process (B. Chakravorti, 2004 and C.K Prahald and V. Ramaswamy, 2003) As markets are more and more global, co creation of value through customized experiences is becoming the new opportunity space. Firms will rely on a locus of competencies including partners and customers, far beyond internal and suppliers' frontiers base. The innovation process is no more the strategic competency of large companies, but tends to be developed by e value added chains including small to medium suppliers. As there are less and less geographic constraints, new relations of alliances of all sorts are shaping complex co production constellations of actors. The co creation and the expansion of sharing information and experience among the value constellation drive financial ant IDEA GROUP PUBLISHING This paper appears in the book, Emerging Trends and Challenges in Information Technology Management, Volume 1 and Volume 2 edited by Mehdi Khosrow Pour (c) 2006, Idea Group Inc. 701 E. Chocolate Avenue, Suite 200, Hershey PA 17033 1240, USA Tel: 717/533 8845; Fax 717/533 8661; URL http:/www.idea group.com ITB12595", "venue": "", "year": 2020.0, "author_names": ["Soumaya Ben Letaifa"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 108964347, "title": "Theorizing Digital Business Innovation: Platforms and Capabilities in Ecosystems", "abstract": "This paper focuses on how information and technology drive digital business innovation in business ecosystems where multiple co contributors work together to innovate new business models. Specifically, we develop a framework based on two novel concepts (1) digital business innovation (DBI) platforms; and (2) digital business innovation capability at the ecosystem level of analysis. First, we describe DBI platform through three properties scale, scope and speed. Second, we describe DBI ecosystem capability using three dimensions operational, dynamic and improvisational capabilities. The DBI platform and capabilities together give rise to business value that are created at the level of the ecosystem and shared by the various co contributors. 
We generate a set of theoretical propositions by bringing the role of information and technology as drivers of value creation and capture in ecosystems. While extant research on IT and innovation has focused on the individual, organizational, and rarely at the inter organizational level of analysis, this paper argues for the importance of innovation ecosystems, which are complex, interdependent, and constantly evolving. We support our conceptual framework and propositions with a case example of digital business innovations with digital technologies underway in the automotive sector. We conclude with a set of critical issues for further development of theory on DBI platforms and ecosystem capabilities that are disrupting and transforming firms, industries, and markets. Our belief is that IS research could make important contributions to the broader innovation literature by explicating how information and technology individually and in tandem give rise to business value co created in ecosystems and shared by the various co contributors in the ecosystem. Accordingly, we hope this paper will entice IS research to rethink digital business innovation at the level of the ecosystem.", "venue": "", "year": 2014.0, "author_names": ["N Venkatraman", "Omar El Sawy", "Paul A Pavlou", "Anandhi S Bharadwaj"], "n_citations": 26, "n_key_citations": 3, "score": 0}, {"corpus_id": 209459288, "title": "Investigating How the Cloud Computing Transforms the Development of Industries", "abstract": "The Internet of Things (IoT) transforms many fields, including the educational, logistics, and manufacturing industries. The IoT is an internet framework whereby a large number of devices or equipment are connected and synchronized using gateways, third party technologies, and software in machine to machine and cloud computing networks. With the flourishing development of IoT, cloud computing plays an essential role in its application layer. Cloud computing technology has been widely applied in various industries and developed as particular cloud computing types: education as a service (EaaS) logistics as a service (LaaS) and manufacturing as a service (MaaS) The applicability of cloud computing in various industries has attracted significant attention from researchers and professionals. This study investigated the technical trends of emerging cloud computing technologies and surveyed 3,697 cloud computing related studies from 2010 to 2019. The findings indicate that intelligence and automation are the core issues that drive research on cloud computing. The main types of research are critical review, system design, and systematic analysis. Cloud computing services (e.g. XaaS, EaaS, LaaS, MaaS) are related to big data, analytical technologies, service orientation, and IoT. This study applied machine learning algorithms to analyze educational, logistic, and manufacturing data and yielded results with more than 90% accuracy and AUC. 
This study used various devices such as laptops, tablets, and smartphones to configure and review machine learning models using third party cloud platforms, which are infinitely scalable and flexible for data analytics, thereby allowing users to make quicker predictions and decisions focused on business needs.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Yu Hsin Hung"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 166233543, "title": "The Role of Platforms in Software Development Planting Real Options to Manage Uncertainties", "abstract": "The innovation process of developing new software is a challenging job and an uncertain process because of the tasks associated with the development. These uncertainties can be categorized based on what drives the uncertainty. In this paper we separate between inner and outer uncertainty. Inner uncertainty stems from how a software product shall be developed and the cost associated with it. The outer uncertainty stems from what software product shall be developed and the revenues associated with it. This paper draws on theories from innovation and development research to develop a model to analyze how the uncertainty during a software development project can be managed. In doing so, we take a supply side view on software development where the firm does not merely respond to given market needs in the development cycle but instead plays a more active role. We operationalize the supply side of the innovation process in the software development by developing a model where we analyze how service platforms create real options for future innovations. An empirical study has been conducted to examine whether and how platforms are used in software development to plant options for future innovations as suggested by the model. The study was conducted at a company that primarily develops IT phone service products. This study shows that the company use platforms for the development of their products. Further, it is a prerequisite for developing software at the pace as well as the cost and the quality demanded by their customers. The platforms play different roles in the development depending on the product being developed. The study describes the development of two different products and how the platforms are used in different ways in the development of these products. The differences in the use of platforms is partly because of the different nature of the products but also due to the market maturity of the software, this result in a difference in the kind of real options that is created for the future. The study shows that both inner and outer uncertainties are reduced by the use of platforms in the development phase.", "venue": "", "year": 2010.0, "author_names": ["Emil Numminen", "Anders Wrenne"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Aspek mahasiswa stres pada saat pandemi covid 19", "session_id": 2875764666788853, "user_id": 4074000896318596, "candidates": [{"corpus_id": 224958893, "title": "PENERAPAN MCA PADA PERBANDINGAN LAMA BELAJAR MAHASISWA TINGKAT III POLITEKNIK STATISTIKA STIS SEBELUM DAN SAAT PANDEMI COVID 19", "abstract": "Pandemi Covid 19 yang terjadi saat ini memaksa kampus Politeknik Statistika STIS untuk melakukan Pembelajaran Jarak Jauh. Dalam sistem tersebut dosen sebagai pihak pemberi material stimulus dan pendorong, sedangkan peserta belajar sebagai pihak penerima informasi yang berperan untuk mempraktekan stimulus dan respon yang diberikan. 
Hal ini menyebabkan mahasiswa harus melakukan effort yang lebih besar untuk mendapatkan tingkat pemahaman yang maksimal. Penelitian ini berupa studi kasus dengan populasi seluruh mahasiswa tingkat III Politeknik Statistika STIS tahun akademik 2019/2020. Adapun tujuan dalam penelitian ini yaitu untuk menganalisis perbedaan lama waktu belajar mahasiswa dan faktor faktor apa saja yang memengaruhinya pada sebelum dan saat terjadinya pandemi Covid 19. Metode analisis yang digunakan yaitu dengan MCA Multiple Classification Analysis) Adapun variabel yang diduga berpengaruh terhadap lamanya waktu belajar yaitu jenis kelamin, peminatan, daerah tempat tinggal, indeks prestasi, dan jabatan dalam kegiatan PKL (Praktik Kerja Lapangan) Hasil penelitian menunjukkan bahwa terjadi penurunan waktu belajar mahasiswa pada saat kondisi pandemik Covid 19 dibandingkan saat kondisi normal. Hal ini dibuktikan dengan penurunan nilai grand mean waktu belajar mahasiswa. Selanjutnya diperoleh bahwa sebelum adanya Covid 19, hanya variabel jabatan di PKL yang menunjukkan hasil yang signifikan dalam mempengaruhi lama waktu belajar mahasiswa. Sedangkan pada saat kondisi pandemi Covid 19, tidak hanya jabatan di PKL, melainkan juga peminatan yang diambil mahasiswa berpengaruh signifikan terhadap lama waktu belajarnya.", "venue": "", "year": 2020.0, "author_names": ["Firza Refo Adi Pratama", "Nadhifan Humam Fitrial", "Novia Putri Lestari", "Siti Andhasah", "Risni Julaeni Yuhan"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 225533820, "title": "Efektifitas Perkuliahan Daring (Online) pada Mahasiswa PGSD di Saat Pandemi Covid 19", "abstract": "Perkuliahan daring (online) merupakan sarana utama dalam pembelajaran ketika wabah Pandemi Covid 19. Tidak terkecuali Prodi PGSD FTIK Unisnu Jepara yang yang menggunakan sarana aplikasi online, seperti whatsapp grup, telegram grup, google classroom, dan media aplikasi lain ketika perkuliahan daring. Penelitian ini bertujuan untuk menganalisis efektifitas perkuliahan daring pada mahasiswa Prodi PGSD di saat Pandemi Covid 19. Penelitian ini merupakan penelitian deskriptif kuantitatif dengan menggunakan metode survey melalui google form secara online. Hasil pengujiannya dihasilkan bahwa mayoritas mahasiswa Prodi PGSD FTIK Unisnu Jepara mengikuti perkuliahan daring dirumah menggunakan gadget (hp) dengan koneksi data dalam keadaan sinyal internet yang cukup baik. Perkuliahan daring memberikan gambaran umum tentang kurang optimalnya pemahaman materi dan banyaknya tugas yang diberikan pada mahasiswa sehingga mengakibatkan proses perkuliahan yang kurang efektif. Hasil lain menunjukkan bahwa mahasiswa siap menghadapi aturan baru the new normal live apabila dilaksanakan perkuliahan secara luring. Sedangkan untuk sistem perkuliahan yang efektif selama pandemi adalah daring dan luring secara bergantian dengan memperhatikan prinsip protocol pencegahan Covid 19.", "venue": "", "year": 2020.0, "author_names": ["Aan Widiyono"], "n_citations": 14, "n_key_citations": 0, "score": 0}, {"corpus_id": 225707220, "title": "PENERAPAN MEDIA PEMBELAJARAN E LEARNING BERBASIS EDMODO PADA PEMBELAJARAN DARING SAAT PANDEMI COVID 19 (DITINJAU DARI PERSEPSI SISWA)", "abstract": "Penelitian ini bertujuan untuk mengetahui persepsi siswa terhadap penerapan media pembelajaran e learning berbasis Edmodo pada pembelajaran daring saat pandemi Covid 19. Subjek penelitian ini terdiri dari 68 siswa Kelas X dan XI yang dipilih secara acak disalah satu SMK Negeri di Kota Cimahi. 
Instrumen penelitian yang digunakan berupa kuisioner yang terdiri dari 36 pernyataan. Teknik analisis yang digunakan oleh peneliti yaitu teknik analisis deskriptif untuk mengetahui tingkat persepsi siswa terhadap Edmodo pada pembelajaran daring. Hasil penelitian menunjukkan bahwa tingkat persepsi siswa terhadap penerapan Edmodo pada masing masing aspek berada pada kategori tinggi, yaitu kategori pengukuran dan prestasi akademik sebesar 74% kategori komunikasi dan interaksi sebesar 73% dan kategori mengakes informasi sebesar 73% Sehingga dapat disimpulkan bahwa rata rata persepsi siswa terhadap Edmodo berada pada kategori tinggi yaitu sebesar 73,3% Artinya menurut siswa media pembelajaran Edmodo dapat membantu mereka dalam pembelajaran daring selama pandemi Covid 19.", "venue": "", "year": 2020.0, "author_names": ["Indri Oktaviani", "Ika Putera Waspada", "Neti Budiwati"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 234761018, "title": "Studi Eksplorasi Pembelajaran Pendidikan IPA Saat Masa Pandemi COVID 19 Di UIN Sunan Ampel Surabaya", "abstract": "Pandemi COVID 19 telah merubah tatanan hidup sebagian besar penduduk dunia, termasuk dalam dunia Pendidikan. Problematika pun muncul satu persatu sejalan dengan peralihan metode pembelajaran Universitas secara offline/tatap muka ke online/melalui jaringan internet. Pendidikan Ilmu Pengetahuan Alam juga memiliki masalah yang rumit saat materi yang seharusnya disampaikan dengan penuh perhatian pada pemodelan dan praktikum, harus di switch dengan metode tanpa tatap muka. Penelitian ini bertujuan untuk mengeksplorasi apa saja yang dirasakan mahasiswa selama pembelajaran daring ini dilakukan. Penelitian deskriptif kualitatif dilakukan pada 40 responden mahasiswa Pendidikan IPA Fakultas Tarbiyah dan Keguruan UIN Sunan Ampel Surabaya. Hasil menunjukkan bahwa pada aspek Kelancaran Pelaksanaan Pembelajaran 52,5% mahasiswa berpendapat Jumlah Pertemuan dan Kesesuaian Materi dengan Silabus Baik, sesuai dengan yang diharapkan. 37,5% Sistem Perkuliahan yang menggunakan Platform Daring atau Online berjalan baik namun 30% berpendapat tidak sesuai dengan yang diharapkan. Cara Penyampaian Dosen cukup sesuai dengan yang diharapkan berkaitan dengan penguasaan materi dan penguasaan penggunaan Platform Online oleh dosen, aspek ini mendapatkan penilaian 52,5% dari mahasiswa. Penugasan selama pembelajaran online ini dilakukan dirasa cukup memberatkan mahasiswa terbukti pada 30% mahasiwa menyatakan aspek ini tidak sesuai dengan yang diharapkan. Sementara aspek paling memberatkan dilakukan pembelajaran online salama pandemi COVID 19 ini adalah masalah jaringan yang berkaitan dengan Sinyal dan Kuota Paket Data. 40% mahasiswa meyatakan bahwa aspek ini dirasa tidak sesuai dengan yang diharapkan dan memberatkan. 
Whats App Group adalah Platform Online yang paling diminati mahasiswa, sementara Zoom bukan menjadi pilihan prioritas.", "venue": "", "year": 2020.0, "author_names": ["Yuanita Lely Rachmawati", "Muhammad Ma'arif", "Ninik Fadhillah", "Nailil Inayah", "Khoirotul Ummah", "Muh Nuh Fathsyah Siregar", "Rela Amalyaningsih", "Fahira Aftannailah", "Aisyatul Auliyah"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 218956142, "title": "Implementasi Model Kesuksesan Sistem Informasi DeLone And McLean Terhadap Sistem Pembelajaran Berbasis Aplikasi Zoom Di Saat Pandemi Covid 19", "abstract": "Investasi dalam sistem informasi saat ini memiliki dampak yang signifikan terhadap aspek yang multidimensional dalam artian mencakup berbagai aspek dalam kehidupan berbisnis seperti perbankan, pendidikan, kepariwisataan dan lainnya karena sistem informasi memainkan peran penting dalam memberikan pelayanan yang lebih baik dan keunggulan kompetitif. Tidak terlepas pada dunia pendidikan dimasa saat ini kondisi dunia dan Indonesia mengalami pendemi COVID 19 sehingga himbauan pemerintah yang menyatakan bekerja dari rumah work from home social distancing serta penyesuaian sistem kerja bukan berarti pelayan public dan pembelajaran dihentikan, namun semua aktivitas dilakukan dengan bantuan teknologi informasi atau secara online. Salah satu media atau aplikasi yang biasa digunakan untuk sistem pembelajaran adalah aplikasi zoom. Peningkatan penggunaan aplikasi yang pesat menarik peneliti untuk melihat implementasi dari model yang diterapkan oleh DeLone dan McLean dimana model kesuksesan sistem dapat dilihat dari sisi kualitas sistem, kualitas informasi dan kualitas pelayanan serta kebutuhan dan kepuasan dari pengguna informasi. Penelitian ini merupakan explanatory research, lokasi penelitian ini dilakukan di kota Malang Jawa Timur. Populasi dalam penelitian ini adalah semua orang yang pernah menggunakan aplikasi zoom dalam beraktivitas terutama bidang pendidikan dengan jumlah sampel 180 responden. Pengumpulan data dengan penyebaran kuesioner, karena kondisi pandemic Covid 19 dengan pentingnya menerapkan Social Distancing maka penyebaran kuesioner dilakukan secara online. Teknik analisis data menggunakan Analisis Statistik Deskriptif, Analisis data menggunakan SEM dan Pengujian Hipotesis. Hasil dari penelitian ini system quality berpengaruh positif terhadap User Satisfaction, Information Quality berpengaruh positif terhadap User Satisfaction, Service Quality berpengaruh positif terhadap User Satisfaction dan User Satisfaction berpengaruh positif terhadap Net Benefit. DOI: https:/doi.org/10.26905/jtmi.v6i1.4165", "venue": "", "year": 2020.0, "author_names": ["Syarif Hidayatullah", "Umu Khouroh", "Irany Windhyastiti", "Ryan Gerry Patalo", "Abd Waris"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 225664788, "title": "Learning From Home dalam Perspektif Persepsi Mahasiswa Era Pandemi Covid 19", "abstract": "Covid 19 saat ini telah menyebar ke berbagai negara di dunia. WHO (World Health Organisation) telah menyatakan Covid 19 sebagai pandemi pada 11 Maret 2020. Pandemi Covid 19 telah mempengaruhi semua sistem pendidikan di seluruh jenjang pendidikan dasar sampai pergurun tinggi, termasuk Stikes Rajekwesi Bojonegoro. Proses pembelajaran peserta didik dalam kelas harus dirubah metodenya dengan learning from home. Persepsi mahasiswa perlu diukur sebagai bahan evaluasi untuk perbaikan kualitas pelaksanaan learning from home sehingga tujuan pembelajaran bisa tercapai. 
Tujuan dari penelitian ini adalah mengeksplorasi persepsi mahasiswa Stikes Rajekwesi Bojonegoro T.A 2019/2020 terhadap learning from home di era pandemi Covid 19. Desain penelitian adalah deskriptif dengan sampel mahasiswa Stikes Rajekwesi Bojonegoro T.A 2019/2020 berjumlah 200 orang yang diambil dengan Accidental Sampling. Data dikumpulkan melalui Google Form, dan dikonversikan dalam bentuk prosentase. Hasil penelitian menunjukkan pemahaman materi perkuliahan 54.5% sulit memahami, kreativitas mahasiswa 50% kreatif, metode dan strategi pembelajaran 51.5% cukup sesuai, hubungan antara dosen dengan mahasiswa 46% kurang dekat, pelaksanaan tugas oleh mahasiswa 56.5% sulit dan lambat, dan 41% mahasiswa kurang aktif selama perkuliahan. Persepsi mahasiswa terkait learning from home masih kurang memuaskan. Perlu inovasi, komunikasi, dan strategi pelaksanaan learning from home yang lebih menyenangkan agar motivasi belajar mahasiswa bisa meningkat", "venue": "", "year": 2020.0, "author_names": ["Rahmawati Rahmawati", "Evita Muslima Isnanda Putri"], "n_citations": 5, "n_key_citations": 0, "score": 1}, {"corpus_id": 225702865, "title": "Analisis Perubahan Orientasi Pola Hidup Mahasiswa Pasca Berakhirnya Masa Pandemi Covid 19", "abstract": "Krisis yang terjadi di dunia saat ini akibat munculnya Covid 19 telah memberikan berbagai perubahan mendasar pada kehidupan sosial masyarakat. Salah satu yang patut disoroti adalah dalam bidang pembelajaran pada tingkat mahasiswa, dimana telah nampak terjadi perubahan secara mendasar. Mahasiswa merupakan salah satu status sosial yang disandang oleh orang orang yang menempuh pendidikan tingkat tinggi, yang juga sarat dengan keilmuan maupun sisi intelektualnya. Pemikiran mahasiswa yang cenderung maju dan kerapnya melakukan aksi sosial, menjadikan mahasiswa sebagai jembatan penghubung bagi perkembangan kehidupan masyarakat luas. Di tengah era Covid 19 ini, potensi sosial dari mahasiswa menjadi perbincangan oleh sebagian pihak terutama para akademisi dan peneliti. Tujuan dari penelitian ini adalah berupa bentuk analisis terhadap kemungkinan perubahan orientasi pada kehidupan mahasiswa pasca berakhirnya masa pandemi Covid 19. Metode yang dilakukan adalah dengan (studi kepustakaan) dimana mengamati fenomena yang terjadi serta didukung berbagai penelitian, artikel, maupun opini terkait untuk kemudian dijelaskan secara deskriptif. Kesimpulan dari penelitian ini adalah mahasiswa sangat berpotensi besar untuk mengalami perubahan pada pola hidup dan interaksi akibat penerapan belajar online. Eksistensi mahasiswa menjadi dikhawatirkan, sehingga hal ini mengancam terbentuknya generasi intelektual yang berkualitas. Mengingat saat ini pola interaksi dan pembelajaran pada mahasiswa menjadi berbeda, serta mereka juga berada dalam tahap penyesuaian. Hal ini dapat disimpulkan sebagai suatu permasalahan kompleks, tentang realitas sosial yang telah terjadi dan diprediksi pada kalangan mahasiswa. Maka dari itu, pola pembelajaran online merupakan sesuatu yang tak boleh habis untuk dikaji.", "venue": "", "year": 2020.0, "author_names": ["B Ahmad Farah", "Robby Darwis Nasution"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 225926306, "title": "MEMBANGUN KEMANDIRIAN BELAJAR MAHASISWA MELALUI BLENDED LEARNING DI MASA PANDEMI COVID 19", "abstract": "Pada masa pandemic covid 19 ini pembelajaran dilakukan secara daring demi memutus penyebaran covid 19, sehingga mahasiswa dituntut untuk dapat lebih mandiri dalam belajar karena pembelajaran tidak dilakukan secara tatap muka. 
Pada masa ini Kemandirian belajar merupakan salah satu aspek penting yang harus dimiliki oleh mahasiswa demi tercapainya kompetensi secara optimal, namun nyatanya kemandirian mahasiswa dalam belajar masih kurang begitu baik, mengingat pentingnya sikap ini dan dihadapkan pada situasi yang sulit akibat covid 19 maka pendidik sudah seharusnya melaksanakan pembelajaran yang dapat memfasilitasi terbentuknya kemandirian belajar. Salah satu bentuk pembelajaran yang mampu mengembangkan kemandirian belajar mahasiswa adalah blended learning, pembelajaran ini memadukan pembelajaran secara daring dan juga tatap muka. Bentuk pembelajaran ini memungkinkan mahasiswa dapat belajar secara efektif dan efesien, lebih mudah mengakses materi ajar, dan pada akhirnya meningkatkan kemandirian belajar mahasiswa karena belajar dilakukan secara mandiri.", "venue": "", "year": 2020.0, "author_names": ["Yuyu Yuliati", "Dudu Suhandi Saputra"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 224845590, "title": "Persepsi Mahasiswa Terhadap Pembelajaran Daring Pada Masa Pandemi Covid 19", "abstract": "Abstrak: Pandemi Covid 19 telah mengubah tatanan hidup masyarakat termasuk pada bidang pendidikan. Untuk menghindari bertambahnya kasus, Menteri Pendidikan dan Kebudayaan telah membuat kebijakan tentang proses belajar mengajar yang dilakukan secara daring Penelitian ini bertujuan untuk mengetahui persepsi mahasiswa terhadap pembelajaran daring di masa pandemi Covid 19. Penelitian ini menggunakan metode analisis deskriptif kuantitatif dengan instrumen penelitian berupa kuisioner yang disebar secara online dengan bantuan google form Jumlah sampel dalam penelitian ini adalah 95 mahasiswa Program Studi Teknologi Pendidikan Universitas Baturaja yang telah terlibat dalam pembelajaran daring selama masa pandemi Covid 19. Hasil penelitian menunjukkan bahwa 100% mahasiswa Program Studi Teknologi Pendidikan Universitas Baturaja menjalankan pembelajaran daring di semester genap tahun akademik 2019/2020. Adapun media online yang paling diminati mahasiswa saat pembelajaran daring yaitu Google Classroom (46,8% Whatsapp (27,4% Edmodo (19,4% dan Zoom (6,4% Meskipun begitu mayoritas mahasiswa yaitu 93,5% lebih menyukai pembelajaran secara offline di kelas tatap muka dibandingkan pembelajaran daring Abstract The Covid 19 pandemic has changed the way people live, including in the field of education. In order to avoid increasing cases, the Minister of Education and Culture has made a policy regarding the teaching and learning process which is carried out online. This study aims to determine students' perceptions of online learning during the Covid 19 pandemic. This study uses a quantitative descriptive analysis method with a research instrument in the form of a questionnaire distributed online with the help of google form. The number of samples in this study were 95 students of the University of Baturaja Educational Technology Study Program who had been involved in online learning during the Covid 19 pandemic. The results showed that 100% of Baturaja University Educational Technology Study Program students carried out online learning in the even semester of the 2019/2020 academic year. 
The online media that students are most interested in when learning online are Google Classroom (46.8% Whatsapp (27.4% Edmodo (19.4% and Zoom (6.4% Even so, the majority of students, namely 93.5% prefer offline learning in face to face classes compared to online learning.", "venue": "", "year": 2020.0, "author_names": ["Sulia Ningsih"], "n_citations": 7, "n_key_citations": 0, "score": 1}, {"corpus_id": 225569790, "title": "Dampak pandemi Covid 19 terhadap kepuasan pembelajaran jarak jauh", "abstract": "Pandemi Covid 19 saat ini berdampak pada perguruan tinggi. IAIN Padangsidimpuan sebagai salah satu institusi pendidikan tinggi negeri keagamaan Islam di Indonesia dituntut untuk mengikuti perubahan metode pembelajaran yaitu pembelajaran jarak jauh (PJJ) yang semula sepenuhnya dilakukan dengan tatap muka. Letak kampus yang berada di Bagian Selatan Sumatera Utara dengan asal mahasiswa yang beragam dan berada jauh dari perkotaan menjadi tantangan tersendiri bagi institusi. Penelitian ini menggunakan pendekatan kualitatif dengan analisis deskriptif. Jumlah informan 384 orang yang terdiri dari mahasiswa aktif Fakultas Ekonomi dan Bisnis Islam IAIN Padangsidimpuan yang dipilih secara acak. Berdasarkan hasil penelitian diketahui bahwa, meskipun mayoritas mahasiswa (95,8% sudah memiliki perangkat untuk menjalani PJJ, namun di sisi lain mahasiswa merasa metode PJJ saat ini belum tepat karena mahasiswa merasa tidak dapat memantau perkembangan PJJ dengan mudah, tidak dapat memperoleh materi pembelajaran dengan mudah juga tidak dapat mempelajari materi dengan mudah. Secara keseluruhan, baik dari sisi teknologi maupun sisi dosen, mahasiswa tidak puas dengan metode PJJ yang dijalaninya saat ini dan juga merasa tidak puas dengan kemampuan dosen dalam menyampaikan materi pada PJJ. Abstract The Covid19 pandemic is currently affecting universities. IAIN Padangsidimpuan as one of the institutions of Islamic religious state higher education in Indonesia is required to follow the changes in learning methods, namely distance learning (PJJ) which is from the beginning is fully face to face. The location of the campus in the Southern Part of North Sumatra with diverse origins of students and being far from urban areas is a challenge for institutions. This research uses a qualitative approach with descriptive analysis. The number of informants 384 people consisting of active students of the Faculty of Islamic Economics and Business were randomly selected. Based on the results of the study note that, although the majority of students (95.8% already have the tools to undergo Distance Learning, on the other hand students feel the distance learning method is currently not appropriate because students feel unable to monitor the development of distance learning easily, cannot obtain learning material easily nor can study material easily. 
Overall, both in terms of technology and the Lecturer side, students are not satisfied with the distance learning method they are currently undergoing and also feel dissatisfied with the ability of the Lecturer to deliver material to distance learning.", "venue": "", "year": 2020.0, "author_names": ["Rodame Monitorir Napitupulu"], "n_citations": 15, "n_key_citations": 1, "score": 1}]} -{"query": "peer specialist services: new Frontiers and new roles", "session_id": 5779217695792665, "user_id": 144322563056877, "candidates": [{"corpus_id": 199503789, "title": "Peer specialist services: New frontiers and new roles.", "abstract": "This special section of Psychological Services is devoted to the most recent work on services provided by peer specialists. As individuals who have overcome obstacles in life and who are trained to provide services to others with similar life challenges, peer specialists promote recovery, foster resilience, and build on patients' strengths to support community integration and help them lead more fulfilling lives. Expanding beyond the basic peer specialist model, this special section showcases innovative programs that more fully utilize peer specialists such as partnering with them in treatment engagement as well as successfully moving into new arenas such as suicide prevention. This special section speaks to both the increasing range of work peer specialists are engaging in and how their roles are growing in complexity. It also explores the impact this discipline is having on systems of care and provides research on approaches to optimize their implementation. (PsycINFO Database Record (c) 2019 APA, all rights reserved)", "venue": "Psychological services", "year": 2019.0, "author_names": ["Anne Klee", "Matthew J Chinman", "Lisa K Kearney"], "n_citations": 6, "n_key_citations": 0, "score": 1}, {"corpus_id": 237005395, "title": "A Longitudinal Qualitative Analysis of the Way Peer Support Specialist Roles Change Over Time in a Psychiatric Hospital Setting in Asia", "abstract": "The current study seeks to determine how peer support roles change as peer support specialists' positions within organizations and departments mature. We followed ten peer support specialists over the course of a year, interviewing them at three points, starting approximately three months after they began working as peer support specialists. We used an inductive process to analyze our data and followed guidelines on the structuring of longitudinal qualitative trajectories to divide the data into watershed moments. Our participants worked in a variety of departments in the hospital, and their service use experiences generally echo those of their service users. Participants appear to pass through four phases over the course of their employment as peers: early beginnings, establishing the role, role narrowing, and role sustainability. Services wishing to integrate new peers must be aware of the time required for integration. Having general job descriptions limited to specifying that peers are expected to use their lived experience to support current service users may lead to uncertainty amongst new and existing staff. Without role clarity, peers may struggle to find their place. Pairing new staff with mentors may limit this burden. As roles consolidate, boundaries may emerge. If these boundaries narrow the role of the PSS, they may no longer find the role appealing. They may then choose other caregiver roles with wider or different spheres of influence. 
Organizations may benefit by clearly indicating if they expect peer support positions to be static or transitionary.", "venue": "Administration and Policy in Mental Health and Mental Health Services Research", "year": 2021.0, "author_names": ["Daniel Poremski", "Jonathan Han Loong Kuek", "Kah Lai Yow", "Pui Wai Eu", "Hong Choon Chua"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 4737206, "title": "Provision of peer specialist services in VA patient aligned care teams: protocol for testing a cluster randomized implementation trial", "abstract": "BackgroundOver 1100 Veterans work in the Veterans Health Administration (VHA) as peer specialists (PSs) PSs are Veterans with formal training who provide support to other Veterans with similar diagnoses, primarily in mental health settings. A White House Executive Action mandated the pilot reassignment of VHA PSs from mental health to 25 primary care Patient Aligned Care Teams (PACT) in order to broaden the provision of wellness services that can address many chronic illnesses. An evaluation of this initiative was undertaken to assess the impact of outside assistance on the deployment of PS in PACT, as implementation support is often needed to prevent challenges commonly experienced when first deploying PSs in VHA settings. We present the protocol for this cluster randomized hybrid type II trial to test the impact of standard implementation (receive minimal assistance) vs. facilitated implementation (receive outside assistance) on the deployment of VHA PSs in PACT.MethodsA VHA Office of Mental Health Services work group is recruiting 25 Veterans Affairs Medical Centers to reassign a mental health PSs to provide wellness oriented care in PACT. Sites in three successive cohorts (n 8, 8, 9) beginning over 6 month blocks will be matched and randomized to either standard or facilitated implementation. In facilitated implementation, an outside expert works with site stakeholders through a site visit, regular calls, and performance data to guide the planning and address challenges. Standard implementation sites will receive a webinar and access the Office of Mental Health Services work group. The two conditions will be compared on PS workload data, fidelity to the PS model of service delivery, team functioning, and Veteran measures of activation, satisfaction, and functioning. Qualitative interviews will collect information on implementation barriers and facilitators.DiscussionThis evaluation will provide critical data to guide administrators and VHA policy makers on future deployment of PSs, as their role has been expanding beyond mental health. In addition, development of novel implementation strategies (facilitation tailored to PSs) and the use of new tools (peer fidelity) can be models for monitoring and supporting deployment of PSs throughout VHA.Trial registrationClinicalTrials.gov, NCT02732600 (URL:https:/clinicaltrials.gov/ct2/show/NCT02732600)", "venue": "Implementation Science", "year": 2017.0, "author_names": ["Matthew J Chinman", "Karin Elizabeth Daniels", "Jeff Smith", "Sharon McCarthy", "Deborah Medoff", "Amanda Peeples", "Richard W Goldberg"], "n_citations": 11, "n_key_citations": 1, "score": 0}, {"corpus_id": 72126953, "title": "Exploring New Frontiers: Recovery Oriented Peer Support Programming in a Psychiatric ED", "abstract": "Enhancing the diversity of roles for paid peer support specialists is a topic of increasing interest throughout the country. 
Peer specialist positions promote a renewed sense of hope for the possibility of recovery, while also offering unique and valuable competitive employment options for mental health consumers. As we strive toward local and national recovery oriented systems of care, we must continue to explore practical program applications and their associated benefits and challenges. The authors describe the development and implementation of a recovery oriented peer support team within the psychiatric service of an emergency department (psychiatric ED) located at an academic medical center in a northeastern state.", "venue": "", "year": 2011.0, "author_names": ["Scott Migdole", "Janis Tondora", "Michelle A Silva", "Alan D Barry", "Jane C Milligan", "Edward Mattison", "Wiley Rutledge", "Seth M Powsner"], "n_citations": 17, "n_key_citations": 2, "score": 0}, {"corpus_id": 227174897, "title": "Actionable Items to Address Challenges Incorporating Peer Support Specialists Within an Integrated Mental Health and Substance Use Disorder System: Co Designed Qualitative Study", "abstract": "Background Peer support specialists offering mental health and substance use support services have been shown to reduce stigma, hospitalizations, and health care costs. However, as peer support specialists are part of a fast growing mental health and substance use workforce in innovative integrated care settings, they encounter various challenges in their new roles and tasks. Objective The purpose of this study was to explore peer support specialists' experiences regarding employment challenges in integrated mental health and substance use workplace settings in New Hampshire, USA. Methods Using experience based co design, nonpeer academic researchers co designed this study with peer support specialists. We conducted a series of focus groups with peer support specialists (N=15) from 3 different integrated mental health and substance use agencies. Audio recordings were transcribed. Data analysis included content analysis and thematic analysis. Results We identified 90 final codes relating to 6 themes: (1) work role and boundaries, (2) hiring, (3) work life balance, (4) work support, (5) challenges, and (6) identified training needs. Conclusions The shared values of experience based co design and peer support specialists eased facilitation between peer support specialists and nonpeer academic researchers, and indicated that this methodology is feasible for nonpeer academic researchers and peer support specialists alike. Participants expressed challenges with agency restrictions, achieving work life balance, stigma, and low compensation. We present actionable items to address these challenges in integrated mental health and substance use systems to potentially offset workforce dissatisfaction and high turnover rates.", "venue": "Journal of participatory medicine", "year": 2020.0, "author_names": ["Margareth Aparecida Santini de Almeida", "Annie Day", "Bret Smith", "Cynthia L Bianco", "Karen L Fortuna"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 44909286, "title": "The role of the specialist acute oncology nurse in the new acute oncology services.", "abstract": "Specialist acute oncology nurses (AON) are appearing all over the UK, not just in England, where they are a recommendation of peer review, but also in Scotland, Northern Ireland and Wales. 
The role of the specialist AON is multifaceted and demands that these nurses show many skills, including leadership, innovation, negotiation, teaching and, importantly, expert clinical skills. Services are designed to be responsive to local need so the role can differ between cancer centres and units. It is also influenced by the presence of an emergency department or an acute medical admissions unit. It is clear that however local services are configured, there are core elements to the specialist AON role.", "venue": "Clinical oncology (Royal College of Radiologists (Great Britain)", "year": 2014.0, "author_names": ["Larry O'Neil Putt", "P Jones"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 233275423, "title": "Actionable Items to Address Challenges Incorporating Peer Support Specialists Within an Integrated Mental Health and Substance Use Disorder System: Co Designed Qualitative Study", "abstract": "Background: Peer support specialists offering mental health and substance use support services have been shown to reduce stigma, hospitalizations, and health care costs. However, as peer support specialists are part of a fast growing mental health and substance use workforce in innovative integrated care settings, they encounter various challenges in their new roles and tasks. Objective: The purpose of this study was to explore peer support specialists' experiences regarding employment challenges in integrated mental health and substance use workplace settings in New Hampshire, USA. Methods: Using experience based co design, nonpeer academic researchers co designed this study with peer support specialists. We conducted a series of focus groups with peer support specialists (N=15) from 3 different integrated mental health and substance use agencies. Audio recordings were transcribed. Data analysis included content analysis and thematic analysis. Results: We identified 90 final codes relating to 6 themes: (1) work role and boundaries, (2) hiring, (3) work life balance, (4) work support, (5) challenges, and (6) identified training needs. Conclusions: The shared values of experience based co design and peer support specialists eased facilitation between peer support specialists and nonpeer academic researchers, and indicated that this methodology is feasible for nonpeer academic researchers and peer support specialists alike. Participants expressed challenges with agency restrictions, achieving work life balance, stigma, and low compensation. We present actionable items to address these challenges in integrated mental health and substance use systems to potentially offset workforce dissatisfaction and high turnover rates. (J Participat Med 2020;12(4):e17053) doi:10.2196/17053", "venue": "", "year": 2021.0, "author_names": ["Adam Bouras", "Eduardo J Simoes", "S Boren", "Lanis L Hicks", "Iris Zachary", "Christoph Buck", "Satvinder S Dhingra", "Richard Ellis"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 205660184, "title": "Military veteran engagement with mental health and well being services: a qualitative study of the role of the peer support worker", "abstract": "Abstract Background: Many UK military veterans experiencing mental health and well being difficulties do not engage with support services to get the help they need. Some mental health clinics employ Peer Support Workers (PSWs) to help veteran patients engage, however it is not known how the role influences UK veteran engagement. 
Aims: To gain insight into the role of peer support in UK veteran engagement with mental health and well being services. Method: A qualitative study based on 18 semi structured interviews with veterans, PSWs and mental health clinicians at a specialist veteran mental health and well being clinic in Scotland. Results: Four themes of the PSW role as positive first impression, understanding professional friend, helpful and supportive connector, and an open door were identified across all participants. The PSWs' military connection, social and well being support and role in providing veterans with an easily accessible route to dis engage and re engage with the service over multiple engagement attempts were particularly crucial. Conclusions: The Peer Support role enhanced veteran engagement in the majority of instances. Study findings mirrored existing peer support literature, provided new evidence in relation to engaging UK veterans, and made recommendations for future veteran research and service provision.", "venue": "Journal of mental health", "year": 2017.0, "author_names": ["Bronagh Weir", "Margaret Cunningham", "Lucy Abraham", "Charlie Allanson-Oddy"], "n_citations": 15, "n_key_citations": 0, "score": 0}, {"corpus_id": 216459917, "title": "Specialist inflammatory bowel disease nursing in the UK: current situation and future proofing", "abstract": "Objective To determine the impact to date of the ongoing Crohn's Colitis UK inflammatory bowel disease (IBD) clinical nurse specialists (CNS) campaign. Methods A survey based design was used. 2 questionnaires were sent to the UK IBD nursing community and promoted via nursing and clinical networks. Respondents were asked to provide data at both an individual and trust level about their nursing services. Results 394 IBD CNS posts were identified across the UK, with a 32% increase in posts since the start of the campaign. 27% felt the campaign had been influential in securing new posts. Greater numbers of posts were reported in England when compared with the devolved nations. Most services remain below the UK standards recommendation of 2.5 IBD CNS per 250 000 patient population. Cross site working was reported in 59% of services. 45% of respondents were non medical prescribers, with 13% educated to MSc level. High levels of stress were reported by IBD CNS associated with managing advice line services. Conclusions Crohn's Colitis UK's 'More IBD Nurses Better Care' campaign has contributed to the numbers of CNS posts in IBD continuing to rise, but they remain lower than the recommended standard of 2.5 IBD CNS per 250 000. Educational and career pathways are not clearly defined, and aspects of the role such as advice line provision contribute to stress within the workforce. The ongoing aims of the charity campaign hope to address these issues by improving access to formal education pathways with peer support for IBD specialist nurses, and advice line training, in addition to supporting trusts and services throughout the UK to reduce the workforce deficit with effective business cases.", "venue": "Frontline Gastroenterology", "year": 2020.0, "author_names": ["Lisa Younge", "Isobel Mason", "Rukshana Kapasi"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 226714938, "title": "The experiences of peer support specialists supervised by non peer supervisors.", "abstract": "This qualitative study examines the experiences of peer support specialists (PSS) supervised by non peer supervisors (NPS) in adult community mental health settings. 
Participants completed a demographic survey designed to address inclusionary criteria. From those eligible, a random number generator was used to select participants who would be interviewed using a semi structured interview guide. The critical incident technique was used to elicit memorable experiences of supervision. Data was analyzed thematically. Twenty interviews were completed before saturation was reached. Thematic analysis revealed eight major themes which are best understood in the context of the ongoing transformation of mental health services from the traditional medical model to a recovery oriented model. Those eight themes were supervisor attitudes, role integration, trauma informed supervisory techniques, facilitative/supportive environment, perspective taking, mutual learning, opportunities for peer networking and the desire for a supervisor who was a more experienced peer support worker. The supervisor's attitude was a critical factor in providing what PSS perceived as adequate supervision. An attitude of respect for the peer role combined synergistically with positive nonjudgmental communication to create a facilitative/supportive environment. That environment supported autonomous functioning which in turn worked to address role integration and suggest trauma informed supervisory techniques. Peer Support Specialists are integrating into a mental health service system transitioning from a medical model to a recovery oriented model of care. PSS are the embodiment of recovery. The experiences of PSS reflect the challenges inherent in role THE EXPERIENCES OF PEER SUPPORT SPECIALISTS SUPERVISED 10 innovation. NPS are the necessary guides who assist the PSS in navigating a system not yet aligned with peer values. If the mental health system is going to successfully become recovery oriented, then NPS need a unique skill set to support those with lived experience whose recovery can help point the way. THE EXPERIENCES OF PEER SUPPORT SPECIALISTS SUPERVISED 11 Chapter I BACKGROUND In the past few years, peer support has become part of the mental health landscape (Cronise, Teixeira, Rogers, Harrington, 2016; Salzer, Schwenk, Brusilovsky, 2010) In step with the empowerment education movement, which sought to redefine the relationship of patients and healthcare providers, health systems began moving towards a system of care that included the patient as an active participant (Anderson Funnell, 2004; Wallerstein Bernstein, 1988; Wallerstein Bernstein, 1994) In a similar fashion, the location of peer support has migrated from its initial location in self help groups to free standing peer run agencies to peer agencies working alongside traditional mental health agencies and ultimately, in the last decade, a move towards integration of peers into both health and mental health systems. Evolution of Peer Support Peer support is generally defined as a way of giving and receiving help from people who have similar experiences (Davidson, Bellamy, Guy, Miller, 2012; Lammers Happell, 2003; Mead, 2003; President's New Freedom Commission, 2003; Repper Carter, 2011) More specifically, peer support is defined as a way of giving and receiving help from people who have similar experiences based on key principles of respect, shared responsibility and mutual agreement about what is helpful (Davidson, Bellamy, Guy, Miller, 2013; Lammers Happell, 2003; Mead, 2003; President's New Freedom Commission, 2003; Repper Carter, 2010) Peer support in mental health has evolved. 
Originally peer support was located within self help groups and later within THE EXPERIENCES OF PEER SUPPORT SPECIALISTS SUPERVISED 12 peer run agencies. Now, peers provide peer support in a variety of settings from independent peer agencies to case management teams to inclusion in more traditional settings like partial hospitalization programs, clubhouses and drop in centers (Salzer et al. 2010) The evolution has also been reflected in how peers refer to themselves. For example, early literature referred to such individuals as psychiatric survivors, a term connoting the low regard many had for the mental health system where they were treated (Chamberlin, 1978; Chamberlin, 1990) The moniker shifted to consumers and then to peers. Likewise, there has been a change in how peers who provide support for other peers refer to themselves. Titles have shifted from peer advocate, peer supporter, peer provider, peer support specialist, certified peer specialist and most recently to peer professional. For the purposes of this study, the term peer support specialist (PSS) will be used as it is a term commonly used as a job title for peers hired to support others with a mental health diagnosis. It is widely accepted that peer support is a critical element of a recovery oriented system of care (Anthony, 1993; Deegan, 1988; Lunt, 2002; President's New Freedom Commission, 2003; Ralph, 2000) Research suggests efforts to integrate into this disparate system of care bring both barriers and challenges. Given that PSS who are credentialed by their lived experience work among other professionals who are credentialed by their educational experience (Gates, Mandiberg, Akabas, 2010; Bennetts, Pinches, Paluch, Fossey, 2013; Budd, 1987; Chesler, 1990; Gartner Riesman, 1982; Kemp Henderson, 2012; Repper Carter, 2010; Smith et al. 2016; Vandewalle et al. 2016) Much of the current literature studying barriers and challenges THE EXPERIENCES OF PEER SUPPORT SPECIALISTS SUPERVISED 13 to integration suggests that supervision is a key component of successful integration. There is little known about the supervision of PSS in general or about PSS supervision by non peer professionals. There is currently no empirical literature which addresses the experiences of PSS receiving supervision. Data suggests that the trend of peers working as PSS alongside non peer mental health professionals continues to grow (Chapman, Blash, Chan, 2015) Given this trend, there is a need to understand whether supervision by a non peer meets the supervisory needs of a PSS when integrating into a clinical adult mental health team (Middleton, Stanton, Renouf, 2004) In this chapter, the history leading to the current mental health landscape, the empirical studies of peer support, differences in peer and professional perspectives, and establishment of what is currently understood about the challenges of peer integration will be presented. 
Additionally, the framework of clinical supervision and a 2015 report titled the Pillars of Peer Supervision (Daniels, Tunner, Powell, Fricks, Ashenden, 2015) are employed as ways of understanding the supervision of peers by non peer supervisors (NPS) Finally, the use of a qualitative research design is suggested as the methodology best suited to understand situations or experiences about which little is known, such as the non peer supervision of PSS (Creswell, 2013) If peer support is to provide an efficacious service element for persons with serious mental illness, we must understand more about what supports its success and how supervision can contribute. The Role of Recovery in the Historical Context The mental health recovery paradigm has become a significant philosophical influence on the delivery of mental health services; however, the use of the term recovery THE EXPERIENCES OF PEER SUPPORT SPECIALISTS SUPERVISED 14 has varied widely. As the possibility of recovery was introduced into mental health systems through the writings of proponents and through documents such as the President's New Freedom Commission (2003) mental health service providers slowly began to incorporate the language and tools of recovery into mental health settings. The term recovery is used differently in different settings. For some mental health practitioners, the term may refer to expected clinical outcomes or for other practitioners, it may refer to a philosophy or attitude that casts doubt about viewing all serious mental illness as a chronic condition (Anthony, 1993; Deegan, 1988; Harding, Brooks, Ashikaga, Strauss, Brieier, 1987) The concept of clinical recovery implies that a person is experiencing no signs or symptoms of mental illness, living independently, having a social life and working. In short, the individual is considered disease free. The philosophy of personal recovery as noted by Deegan (1988) and Anthony (1993) generally refers to a process whereby a person develops a new sense of self that encompasses the presence of mental illness and continues with their life. In essence, the mental illness becomes a long term condition that must be dealt with but which does not define the individual. Recovery in the substance abuse field, for example, generally reflects a philosophical stance suggesting acceptance of abstinence from addictive substances as a goal which is achieved one day at a time (White, 2007) In this study, we will refer to recovery as personal recovery, that is the ability to live a satisfying and contributing life irrespective of ongoing symptoms and disability which is the philosophical understanding promoted by Deegan (1988) and Anthony (1993) THE EXPERIENCES OF PEER SUPPORT SPECIALISTS SUPERVISED 15 Until longitudinal studies such as those conducted by Harding, Brooks, Ashikaga, Strauss, and Brieier (1987a) clinical recovery was generally not considered a likely outcome for individuals with serious mental illness. This prognostic conception of serious mental illness was likely reinforced by the seemingly chronic nature of the individuals in treatment facilities. 
This misconception was best explained by Cohen and Cohen's, The Clinician's Illusion (1984) Essentially, clinicians do not see persons who clinically recover from severe mental illness since they no lo", "venue": "", "year": 2020.0, "author_names": ["Joan Forbes"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "car door handle inner design", "session_id": 6663181305931034, "user_id": 2310679963477633, "candidates": [{"corpus_id": 159285992, "title": "Research on the use of lignocellulosic fibers reinforced bio polyamide 11 with composites for automotive parts: Car door handle case study", "abstract": "Abstract Most of the decisions taken during the early design and development steps of a new product compromise a large part of its cost, including its environmental footprint and energy consumption. This is of special interest for the automotive industry that has made an effort to increase its sustainability. Adjectives like bio based, recyclable or biodegradable are commonly used as synonyms of greener; nonetheless, such materials must achieve the requirements of the industry. This paper researches the use of alternative materials instead of glass fiber reinforced polypropylene, a commodity material. The authors propose using a wood fiber reinforced polyamide 11 composite as replacement. The research discussed the mechanical properties of such composites, obtaining values similar to the currently used materials. Moreover, a case study was performed to assess the behavior of the composites when used to manufacture a door car handle. The materials with reinforcement contents ranging from 40 to 60% showed its ability to replace the commodity materials. Furthermore, a preliminary LCA analysis was performed to evaluate the environmental footprint of the researched materials. In was found, that, in terms of energy and carbon footprint, the PA11 composites were penalized by the energy cost of the PA11 monomer production.", "venue": "Journal of Cleaner Production", "year": 2019.0, "author_names": ["Helena Oliver-Ortega", "Fernando Julian", "Francesc Xavier Espinach", "Quim Tarres", "Monica Ardanuy", "Pere Mutje"], "n_citations": 23, "n_key_citations": 1, "score": 1}, {"corpus_id": 114732132, "title": "A design for an autonomous car door handle", "abstract": "", "venue": "", "year": 2016.0, "author_names": ["Taa Tommie Perenboom"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 218919140, "title": "Modeling, Analysis and Simulation of Work for the Punching and Cutting Operations on Inner Plate of the Front Car Door", "abstract": "In this paper we displayed the problematic of modeling, analysis and simulating work process of specific tool used for punching and cutting operations on inner plate of the front car door. Basic principles of treating material with deformations are displayed, to easily understand operations that are performed during the work of the tool. Principles of geometric modeling, application of Boolean operations, parametrized individual standard parts and databases are very important factors for tool modeling. Beside modeling it is very important to calculate the specific parts based on standards which are used in car industry. After geometrical modeling of tool, simulation and work analysis have been performed to control the movement speed of the tool during production. 
Simulation also enables faster design process for the tool, increases work safety etc.", "venue": "", "year": 2020.0, "author_names": ["Isad Saric", "Enis Muratovic", "Harun Music"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 88500997, "title": "Design and Manufacturing of Real Scale Mockup Car Door Via 3d Printer", "abstract": "In this study, a door of a real scale Mocap electric car that we previously produced, was designed by using package design programs available for the production of the body elements. In this comprehensive design, connection interfaces between the door and the car body, the door handle, the elements for window mechanisms, the elements for door locking and the door hinge were designed in detail. Before the production of these designed parts, conformity analysis and tests were performed in the computer environment. Instead of the conventional car manufacturing processes, modern, fast and economical 3D printers were used for manufacturing. The door, door components and their 3D printer production data which were created and assembled in computer environment were further manufactured via 3D printer.", "venue": "", "year": 2018.0, "author_names": ["Kemal Ersan", "Yunus Ali Demiroglu", "Burhan Guldur"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 219722161, "title": "Novel Design for Door Handle A Potential Technology to Reduce Hand Contamination in the COVID 19 Pandemic", "abstract": "Key words Coronavirus disease 2019; COVID 19; 3D printing; door handle", "venue": "The American Journal of Medicine", "year": 2020.0, "author_names": ["Kuan-Lin Chen", "Shyh-Jen Wang", "Chien Chuang", "Li-Ying Huang", "Fang-Yao Chiu", "Fu-Der Wang", "Yi-Tsung Lin", "Wei-Ming Chen"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 207906674, "title": "Three Dimension Model and Rapid Prototyping of Car Interior Handle Based on Reverse Engineering", "abstract": "The reverse engineering and rapid prototyping technology are used to analyze and improve the car interior handle. Digital information of the selected car interior handle are collected and sorted by hand held laser scanner due to its structural characteristics. By the reverse design idea, the reconstruction of the inner handle surface and the 3D solid modeling are completed based on Imageware and SolidWorks software Finally, the car interior handle is obtained through rapid prototyping technology. The application of reverse engineering and rapid prototyping technology can greatly shorten the design and development cycle and effectively improve the design efficiency of products.", "venue": "AIAM", "year": 2019.0, "author_names": ["Hong-jun Ni", "Weijia Tang", "Shuaishuai Lv", "Yu Zhu", "Xingxing Wang", "Kaixuan Wang", "Tiancheng Huang", "Jianhua Sun"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 145928076, "title": "Integrated outer door handle structure of car", "abstract": "An integrated outer door handle structure of a car comprises a base assembly, a handle assembly and a locking block, wherein the base assembly and the handle assembly are arranged on the two sides of a door sheet metal part respectively, and a third connecting hole is formed in the position, on the inner side of a base framework, of the locking block. 
The locking block forwards stretches to form a first clasping arm and a second clasping arm, guide strips are arranged on the two sides of the first clasping arm and the two sides of the second clasping arm, and the tail end of the first clasping arm and the tail end of the second clasping arm abut against rear clamping strips. A first connecting hole, a second connecting hole and a third connecting hole are in the same straight line and are fixed together through a set screw in a screw mode. The integrated outer door handle structure has the advantages that the car handle integrated structure and the locking block are adopted, compared with a traditional separated car handle and screw fixation, the number of parts is reduced, production efficiency and assembly efficiency are improved, the defect of chromatic aberration existing in the prior art is overcome, and the yield of products is increased.", "venue": "", "year": 2014.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 140900799, "title": "Inner plate windowsill reinforcing plate structure and car door with the same", "abstract": "The invention discloses an inner plate windowsill reinforcing plate structure and a car door with the same and belongs to the field of automobile parts. The inner plate windowsill reinforcing plate structure comprises a convex step surface, an upper welding edge at the upper part of the convex step surface, a lower welding edge at the lower part of the convex step surface, and lateral plates at two sides of the convex step surface and firmly connected with an inner plate, the lower welding edge is zigzag, each rack of the lower welding edge is provided with a welding spot welded with the inner plate, and the scrap edge of the other part of the lower welding edge is separated from the inner plate. The inner plate windowsill reinforcing plate structure has beneficial effects that the strip line rigidity of the inner plate is improved; the anti rust ability of the part is reinforced; the light design is realized through the part structure, and the manual welding operation convenience is improved.", "venue": "", "year": 2015.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 181806853, "title": "Optimization of Passenger Car Door Impact Beam using Quasi Static CAE Analysis", "abstract": "Automotive side impacts are particularly dangerous as location of impact is very close to the passenger, who can be immediately reached by the impacting vehicle. FMVSS 214 static is a US safety regulation for occupant safety during side impacts, in which the vehicle is tested at static loading conditions to measure its load baring capacity and integrity of side closures. The CAE load case, virtually simulating the test, was handled as a quasi static problem in this study. Impact beam is a component that helps in improving vehicle passive safety performance during side impacts by minimizing door intrusion to the occupant cabin. It plays an important role in achieving side impact regulatory norms. Through this study, a mass optimized front door impact beam design was developed for a passenger car with the help of CAE simulations; FMVSS 214S regulation norms are met. Component thickness, material and cross section shape were the design variables considered for the study. A methodology to perform the component level simulation of the impact beam loading such that it replicates component behaviour during full vehicle simulation was developed. 
This has helped in reducing the total problem calculation time in solver. This also has minimized the computational cost for the project. CAE simulations required for the study were done using LS DYNA. ANSA and PRIMER were used as pre processors and hyper graph and meta post were used for post processing.", "venue": "International Journal of Vehicle Structures and Systems", "year": 2019.0, "author_names": ["S P Sundar Singh Sivam", "Ganesh Babu Loganathan", "Krishnaswamy Saravanan", "V G Umasekar", "T P Mohammed Rameez"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 114113250, "title": "Numerical simulation of drawing process for laser tailor welded blank of car door inner plate", "abstract": "The stamping forming of laser tailor welded blank for car door inner panel was simulated based on the Dynaform,stamping process and the deflects that influence the product quality seriously in stamping process,such as the wrinkling,cracking,insufficient deformation and large quantities movement of weld line,were simulated and analyzed.After the comprehensive analysis of wrinkling and cracking,the drawing technological parameters and modular structure parameters were designed using orthogonal experiment.The prioritization schemes were proposed,such as optimum blank holder force,the height of drawing tendon,lubrication condition and so on,which could improve forming quality.The drawbacks of wrinkling,cracking,insufficient deformation and large quantities movement can be greatly improved,when adopting twice drawing forming technology.Practical process design and manufacture show that forming process and the result of numerical simulation experiment fit well.", "venue": "", "year": 2012.0, "author_names": ["Ouyang Bo-yi"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Nonlinear adaptive control of an underwater towed vehicle", "session_id": 1612159376575213, "user_id": 5949683804348593, "candidates": [{"corpus_id": 56077418, "title": "Nonlinear adaptive control of an underwater towed vehicle", "abstract": "This paper addresses the problem of simultaneous depth tracking and attitude control of an underwater towed vehicle. The system proposed uses a two stage towing arrangement that includes a long primary cable, a gravitic depressor, and a secondary cable. The towfish motion induced by wave driven disturbances in both the vertical and horizontal planes is described using an empirical model of the depressor motion and a spring damper model of the secondary cable. A nonlinear, Lyapunov based, adaptive output feedback control law is designed and shown to regulate pitch, yaw, and depth tracking errors to zero. The controller is designed to operate in the presence of plant parameter uncertainty. When subjected to bounded external disturbances, the tracking errors converge to a neighbourhood of the origin that can be made arbitrarily small. In the implementation proposed, a nonlinear observer is used to estimate the linear velocities used by the controller thus dispensing with the need for costly sensor suites. The results obtained with computer simulations show that the controlled system exhibits good performance about different operating conditions when subjected to sea wave driven disturbances and in the presence of sensor noise. 
The system holds promise for application in oceanographic missions that require depth tracking or bottom following combined with precise vehicle attitude control.", "venue": "", "year": 2010.0, "author_names": ["Francisco Curado Teixeira", "Antonio Pedro Aguiar", "Antonio Manuel Santos Pascoal"], "n_citations": 24, "n_key_citations": 1, "score": 1}, {"corpus_id": 110675054, "title": "Nonlinear adaptive depth tracking and attitude control of an underwater towed vehicle", "abstract": "Abstract This paper addresses the problem of simultaneous depth tracking and precise attitude control of an underwater towed vehicle integrated in a two stage towing arrangement. A nonlinear Lyapunov based output feedback controller is designed to operate in the presence of plant parameter uncertainty and proven to regulate pitch, yaw, and depth tracking errors to zero. When subjected to bounded external disturbances, the tracking errors converge to a neighborhood of the origin that can be made arbitrarily small. In the implementation proposed, a nonlinear observer is used to estimate the linear velocities used by the controller. The results obtained with computer simulations including sea wave driven disturbances and sensor noise, show that the controlled system exhibits good performance about different operating conditions and holds considerable potential for oceanographic missions that require simultaneous depth and attitude control.", "venue": "", "year": 2009.0, "author_names": ["Francisco Curado Teixeira", "Antonio Pedro Aguiar", "Antonio Manuel Santos Pascoal"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 42381245, "title": "NONLINEAR CONTROL OF AN UNDERWATER TOWED VEHICLE", "abstract": "This paper addresses the problem of pitch and depth control of an underwater towed vehicle. A nonlinear adaptive Lyapunov based controller is designed and proven to regulate the pitch and depth tracking errors to zero. When in the presence of external disturbances and parameter uncertainties, the errors are shown to converge to a neighbourhood of the origin that can be made arbitrarily small. We show through computer simulations that the controlled system exhibits good performance about difierent operating conditions when subjected to sea wave driven disturbances and in the presence of sensor noise.", "venue": "", "year": 2006.0, "author_names": ["Francisco Curado Teixeira", "Antonio Pedro Aguiar"], "n_citations": 7, "n_key_citations": 1, "score": 0}, {"corpus_id": 43845350, "title": "Tracking performance control of a cable communicated underwater vehicle using adaptive neural network controllers", "abstract": "In this paper, design, dynamic modeling and control of the fabricated underwater remotely operated vehicle have been considered. Dynamic model of the vehicle is presented for four degrees of freedom and an accurate representation of the dynamic effects of the towed cable is used for dynamic simulation and control design. A nonlinear adaptive neural network controller is developed and simulated. Multi layer and radial basis function neural networks are used for designing the adaptive controllers. Finally, the performance of the vehicle with neural network controllers is compared with a PD controller. The significant improvement is observed for tracking performance of the vehicle in all controllable degrees of freedom. Also, the simulation illustrated the robustness of controllers for the relative large distributions of the communication cable.", "venue": "Appl. 
Soft Comput.", "year": 2010.0, "author_names": ["Ahmad Bagheri", "T Karimi", "Nima Amanifard"], "n_citations": 50, "n_key_citations": 1, "score": 0}, {"corpus_id": 71149231, "title": "Diving Adaptive Position Tracking Control for Underwater Vehicles", "abstract": "This paper presents a robust position tracking control scheme for underwater vehicles moving in a vertical plane. The idea comes from the demand of underwater position tracking control for the newly borne Trans media Aerial Underwater Vehicle (TMAUV) Although position control of a TMAUV is still within the scope of autonomous underwater vehicles (AUVs) control, it has new features. An underwater reference path for the TMAUV could be characterized by a strong maneuver that many assumptions in the conventional AUV controller design could not be employed. In this paper, a Lyapunov based backstepping controller is developed for a nonlinear coupled input system releasing all constraints on the pitch angle, heave velocity, and angular velocity. Furthermore, neural networks and parameter estimation are employed to develop a robust controller in the presence of model uncertainties, parameter uncertainties, and external disturbances. This paper also solves the problem of adaptive estimation for the system parameters under the coupled input condition. Simulations are presented to demonstrate the feasibility and effectiveness.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Zongcheng Ma", "Junhua Hu", "Jinfu Feng", "An Liu"], "n_citations": 7, "n_key_citations": 0, "score": 0}, {"corpus_id": 114935036, "title": "Design and Experimental Realization of Adaptive Control Schemes for an Autonomous Underwater Vehicle", "abstract": "Research on Autonomous Underwater Vehicle(AUV) has attracted increased attention of control engineering community in the recent years due to its many interesting applications such as in Defense organisations for underwater mine detection, region surveillance, oceanography studies, oil/gas industries for inspection of underwater pipelines and other marine related industries. However, for the realization of these applications, effective motion control algorithms need to be developed. These motion control algorithms require mathematical representation of AUV which comprises of hydrodynamic damping, Coriolis terms, mass and inertia terms etc. To obtain dynamics of an AUV, different analytical and empirical methods are reported in the literature such as tow tank test, Computational Fluid Dynamics (CFD) analysis and on line system identification. Among these methods, tow tank test and CFD analysis provide white box identified model of the AUV dynamics. Thus, the control design using these methods are found to be ineffective in situation of change in payloads of an AUV or parametric variations in AUV dynamics. On the other hand, control design using on line identification, the dynamics of AUV can be obtained at every sampling time and thus the aforesaid parametric variations in AUV dynamics can be handled effectively. In this thesis, adaptive control strategies are developed using the parameters of AUV obtained through on line system identification. The proposed algorithms are verified first through simulation and then through experimentation on the prototype AUV. Among various motion control algorithms, waypoint tracking has more practical significance for oceanographic surveys and many other applications. 
In order to implement waypoint motion control schemes, Line of Sight (LoS) guidance law can be used, which is computationally less expensive. In this thesis, adaptive control schemes are developed to implement LoS guidance for an AUV for practical realization of the control algorithm. Further, in order to realize the proposed control algorithms, a prototype AUV is developed in the laboratory. The developed AUV is torpedo shaped, in order to experience low drag force, and underactuated, with a single thruster for forward motion and control planes for angular motion. Firstly, the AUV structure such as nose profile, tail profile, hull section and control planes are designed and developed. Secondly, the hardware configuration of the AUV such as sensors, actuators, computational unit, communication module etc. is appropriately selected. Finally, a software framework called Robot Operating System (ROS) is used for seamless integration of various sensors and actuators with the computational unit. ROS is a software platform which provides the right platform for the implementation of the control algorithms using the sensor data to achieve autonomous capability of the AUV. In order to develop adaptive control strategies, the unknown dynamics of the AUV is identified using a polynomial based Nonlinear Autoregressive Moving Average eXogenous (NARMAX) model structure. The parameters of this NARMAX model structure are identified online using the Recursive Extended Least Square (RELS) method. Then an adaptive controller is developed for realization of the LoS guidance law for an AUV. Using the kinematic equation and the desired path parameters, a Lyapunov based backstepping controller is designed to obtain the reference velocities for the dynamics. Subsequently, a self tuning PID controller is designed for the AUV to track these reference velocities. Using an inverse optimal control technique, the gains of the self tuning PID controller are tuned on line. Although this algorithm is computationally less expensive, there remain issues such as actuator constraints and state constraints which need to be resolved in view of practical realization of the control law. It is also observed that the proposed NARMAX structure of the AUV consists of redundant regressor terms. To alleviate the aforesaid limitations of the inverse optimal self tuning control scheme, a constrained adaptive control scheme is proposed that employs a minimum representation of the NARMAX structure (MR NARMAX) for capturing AUV dynamics. The regressors of the MR NARMAX structure are identified using the Forward Regressor Orthogonal Least Square algorithm. Further, the parameters of this MR NARMAX model structure of the AUV are identified at every sampling time using the RELS algorithm. Using the desired path parameters and the identified dynamics, an error objective function is defined which is to be minimized. The minimization problem, with the objective function subject to the state and actuator constraints, is formulated as a convex optimization problem. This optimization problem is solved using a quadratic programming technique. The proposed MR NARMAX based adaptive control is verified in the simulation and then on the prototype AUV. From the obtained results, it is observed that this algorithm provides successful tracking of the desired heading. However, the proposed control algorithm is computationally expensive, as an optimization problem is to be solved at each sampling instant. 
In order to reduce the computational time, an explicit model predictive control strategy is developed using the concept of multi parametric programming. A Lyapunov based backstepping controller is designed to generate the desired yaw velocity in order to steer the AUV towards the desired path. This explicit model predictive controller is designed using the identified NARMAX model for tracking the desired yaw velocity. The proposed explicit MPC algorithm is implemented first in simulation and then in the prototype AUV. From the simulation and experimental results, it is found that this controller has less computation time and also it considers both the state and actuator constraints whilst exhibiting good tracking performance.", "venue": "", "year": 2016.0, "author_names": ["R Rout"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 199392510, "title": "THE UNDERWATER TOWED SYSTEM BEHAVIOR DURING SHIP TURNING MANEUVERS", "abstract": "The depth and attitude of an underwater towed system show complex behavior during the towing ship turning maneuvers. However, it remains unclear how the two indexes change when the coupling relationship between the cable and the towed vehicle is considered. Here, to solve this issue, we develop a new numerical method that can be used to predict the behavior of the towed system during towing ship turning maneuvers. Specifically, a finite difference method and six degree of freedom equations are used to describe the motion of the towed cable and towed vehicle respectively. Based on the center finite difference method, the partial differential equations and differential equations are transformed to nonlinear algebraic equations, and then the Newton iteration method is used to solve the nonlinear equations. Then, we simulate the transient behaviors of the towed system during the towing ship making 180 and 360 degree turns with different turning radii, and find that the depth and attitude of the towed system are affected by the towing ship turning maneuvers. We show that the smaller the turning radius, the larger the variations of depth and attitude. Moreover, the new steady state can be achieved easily during the 360 degree turning maneuver. The numerical method and result that we derived can be applied to design the towing ship turning maneuvers, towed system and control method.", "venue": "", "year": 2017.0, "author_names": ["Zhi-jiang Yuan", "Liang-an Jin", "Wei Chi", "Xiao-gang Jiang", "Zheng Zhi-Lin"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 59186537, "title": "Route optimizing and following for autonomous underwater vehicle ladder surveys", "abstract": "An autonomous underwater vehicle is able to conduct coverage detections, such as sea terrain mapping and submerged objects detection, using sonar. This work addresses the task of both optimizing and following routes that present a ladder shape. First, a planning method to determine a nearly optimal coverage route is designed. The track spacing is optimized considering the seabed type and the sonar range for the purpose of increasing detection probability. It also adds adaptability to confined water, such as harbors, by decomposing the geometrically concave mission region during the processing of the environmental data. Next, a decoupled and two layered structure is adopted to design the following controller. The route is followed in the form of sequenced lines tracking. 
A proportional integral derivative algorithm with fuzzy parameter adjustment is employed to calculate a reference heading angle according to the transverse position deviation in designing the guidance controller. An adaptive nonlinear S surface law is adopted to design the yaw control. The route following method is demonstrated with sonar (including side scanning sonar and multi beam echo sounder) imagery collected in terrain mapping and object detection through sea trials.", "venue": "", "year": 2018.0, "author_names": ["Yanqing Jiang", "Ye Li", "Yumin Su", "Ziye Zhou", "Teng Ma", "Li An", "Jiayu He"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 221970205, "title": "Secure distributed adaptive optimal coordination of nonlinear cyber physical systems with attack diagnosis", "abstract": "This paper studies the problem of distributed optimal coordination (DOC) for a class of nonlinear large scale cyber physical systems (CPSs) in the presence of cyber attacks. A secure DOC architecture with attack diagnosis is proposed that guarantees the attack free subsystems to achieve the output consensus which minimizes the sum of their objective functions, while the attacked subsystems converge to preset secure states. A two layer DOC structure is established with emphasis on the interactions between cyber and physical layers, where a command driven control law is designed that generates provable optimal output consensus. Differing from the existing fault diagnosis methods which are generally applicable to given failure types, the focus of the attack diagnosis is to achieve detection and isolation for arbitrary malicious behaviors. To this end, double coupling residuals are generated by a carefully designed distributed filter. The adaptive thresholds with prescribed performance are designed to enhance the detectability and isolability. It is theoretically guaranteed that any attack signal cannot bypass the designed attack diagnosis methodology to destroy the convergence of the DOC algorithm, and the locally occurring detectable attack can be isolated from the propagating attacks from neighboring subsystems. Simulation results for the motion coordination of multiple remotely operated underwater vehicles illustrate the effectiveness of the proposed architecture.", "venue": "ArXiv", "year": 2020.0, "author_names": ["Liwei An", "Guang-Hong yang"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 222129717, "title": "Secure distributed adaptive optimal coordination of nonlinear cyber physical systems with attack diagnosis", "abstract": "This paper studies the problem of distributed optimal coordination (DOC) for a class of nonlinear large scale cyber physical systems (CPSs) in the presence of cyber attacks. A secure DOC architecture with attack diagnosis is proposed that guarantees the attack free subsystems to achieve the output consensus which minimizes the sum of their objective functions, while the attacked subsystems converge to preset secure states. A two layer DOC structure is established with emphasis on the interactions between cyber and physical layers, where a command driven control law is designed that generates provable optimal output consensus. Differing from the existing fault diagnosis methods which are generally applicable to given failure types, the focus of the attack diagnosis is to achieve detection and isolation for arbitrary malicious behaviors. To this end, double coupling residuals are generated by a carefully designed distributed filter. 
The adaptive thresholds with prescribed performance are designed to enhance the detectability and isolability. It is theoretically guaranteed that any attack signal cannot bypass the designed attack diagnosis methodology to destroy the convergence of the DOC algorithm, and the locally occurring detectable attack can be isolated from the propagating attacks from neighboring subsystems. Simulation results for the motion coordination of multiple remotely operated underwater vehicles illustrate the effectiveness of the proposed architecture.", "venue": "", "year": 2020.0, "author_names": ["Liwei An", "Guang-Hong yang"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "deep graph infomax", "session_id": 4522539891928705, "user_id": 2868542350631663, "candidates": [{"corpus_id": 52877454, "title": "Deep Graph Infomax", "abstract": "We present Deep Graph Infomax (DGI) a general approach for learning node representations within graph structured data in an unsupervised manner. DGI relies on maximizing mutual information between patch representations and corresponding high level summaries of graphs both derived using established graph convolutional network architectures. The learnt patch representations summarize subgraphs centered around nodes of interest, and can thus be reused for downstream node wise learning tasks. In contrast to most prior approaches to unsupervised learning with GCNs, DGI does not rely on random walk objectives, and is readily applicable to both transductive and inductive learning setups. We demonstrate competitive performance on a variety of node classification benchmarks, which at times even exceeds the performance of supervised learning.", "venue": "ICLR", "year": 2019.0, "author_names": ["Petar Velickovic", "William Fedus", "William L Hamilton", "Pietro Lio'", "Yoshua Bengio", "R Devon Hjelm"], "n_citations": 481, "n_key_citations": 144, "score": 1}, {"corpus_id": 208175960, "title": "Heterogeneous Deep Graph Infomax", "abstract": "Graph representation learning is to learn universal node representations that preserve both node attributes and structural information. The derived node representations can be used to serve various downstream tasks, such as node classification and node clustering. When a graph is heterogeneous, the problem becomes more challenging than the homogeneous graph node learning problem. Inspired by the emerging information theoretic based learning algorithm, in this paper we propose an unsupervised graph neural network Heterogeneous Deep Graph Infomax (HDGI) for heterogeneous graph representation learning. We use the meta path structure to analyze the connections involving semantics in heterogeneous graphs and utilize graph convolution module and semantic level attention mechanism to capture local representations. By maximizing local global mutual information, HDGI effectively learns high level node representations that can be utilized in downstream graph related tasks. Experiment results show that HDGI remarkably outperforms state of the art unsupervised graph representation learning methods on both classification and clustering tasks. 
By feeding the learned representations into a parametric model, such as logistic regression, we even achieve comparable performance in node classification tasks when comparing with state of the art supervised end to end GNN models.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Yuxiang Ren", "Bo Liu", "Chao Huang", "Peng Dai", "Liefeng Bo", "Jiawei Zhang"], "n_citations": 26, "n_key_citations": 6, "score": 0}, {"corpus_id": 119309186, "title": "Spatio Temporal Deep Graph Infomax", "abstract": "Spatio temporal graphs such as traffic networks or gene regulatory systems present challenges for the existing deep learning methods due to the complexity of structural changes over time. To address these issues, we introduce Spatio Temporal Deep Graph Infomax (STDGI) a fully unsupervised node representation learning approach based on mutual information maximization that exploits both the temporal and spatial dynamics of the graph. Our model tackles the challenging task of node level regression by training embeddings to maximize the mutual information between patches of the graph, at any given time step, and between features of the central nodes of patches, in the future. We demonstrate through experiments and qualitative studies that the learned representations can successfully encode relevant information about the input graph and improve the predictive performance of spatio temporal auto regressive forecasting models.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Felix L Opolka", "Aaron Solomon", "Catalina Cangea", "Petar Velickovic", "Pietro Lio'", "R Devon Hjelm"], "n_citations": 8, "n_key_citations": 4, "score": 0}, {"corpus_id": 216520451, "title": "Deep multiplex graph infomax: Attentive multiplex network embedding using global information", "abstract": "Abstract Network embedding has recently garnered attention due to the ubiquity of the networked data in the real world. A network is useful for representing the relationships among objects, and these network include social network, publication network, and protein protein interaction network. Most existing network embedding methods assume that only a single type of relation exists between nodes. However, we focus on the fact that two nodes in a network can be connected by multiple types of relations; such a network is called multi view network or multiplex network. Although several existing work consider the multiplexity of a network, they overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph. In this work, we present an unsupervised network embedding method for attributed multiplex network called DMGI inspired by Deep Graph Infomax (DGI) that maximizes the mutual information between local patches of a graph, and the global representation of the entire graph. Building on top of DGI, we devise a systematic way to jointly integrate the node embeddings from multiple graphs by introducing (1) the consensus regularization framework that minimizes the disagreements among the relation type specific node embeddings, and (2) the universal discriminator that discriminates true samples regardless of the relation types. We also show that the attention mechanism infers the importance of each relation type, and thus can be useful for filtering unnecessary relation types as a preprocessing step. We perform comprehensive experiments not only on unsupervised downstream tasks, such as clustering and similarity search, but also a supervised downstream task, i.e. 
node classification, and demonstrate that DMGI outperforms the state of the art methods, even though DMGI is fully unsupervised. The source code can be found here: https://github.com/pcy1302/DMGI", "venue": "Knowl. Based Syst.", "year": 2020.0, "author_names": ["Chanyoung Park", "Jiawei Han", "Hwanjo Yu"], "n_citations": 9, "n_key_citations": 1, "score": 0}, {"corpus_id": 208076828, "title": "Unsupervised Attributed Multiplex Network Embedding", "abstract": "Nodes in a multiplex network are connected by multiple types of relations. However, most existing network embedding methods assume that only a single type of relation exists between nodes. Even for those that consider the multiplexity of a network, they overlook node attributes, resort to node labels for training, and fail to model the global properties of a graph. We present a simple yet effective unsupervised network embedding method for attributed multiplex network called DMGI, inspired by Deep Graph Infomax (DGI) that maximizes the mutual information between local patches of a graph, and the global representation of the entire graph. We devise a systematic way to jointly integrate the node embeddings from multiple graphs by introducing 1) the consensus regularization framework that minimizes the disagreements among the relation type specific node embeddings, and 2) the universal discriminator that discriminates true samples regardless of the relation types. We also show that the attention mechanism infers the importance of each relation type, and thus can be useful for filtering unnecessary relation types as a preprocessing step. Extensive experiments on various downstream tasks demonstrate that DMGI outperforms the state of the art methods, even though DMGI is fully unsupervised.", "venue": "AAAI", "year": 2020.0, "author_names": ["Chanyoung Park", "Donghyun Kim", "Jiawei Han", "Hwanjo Yu"], "n_citations": 33, "n_key_citations": 9, "score": 0}, {"corpus_id": 202539732, "title": "Deep Graph Library: Towards Efficient and Scalable Deep Learning on Graphs", "abstract": "Accelerating research in the emerging field of deep graph learning requires new tools. Such systems should support graph as the core abstraction and take care to maintain both forward (i.e. supporting new research ideas) and backward (i.e. integration with existing components) compatibility. In this paper, we present Deep Graph Library (DGL). DGL enables arbitrary message handling and mutation operators, flexible propagation rules, and is framework agnostic so as to leverage high performance tensor, autograd operations, and other feature extraction modules already available in existing frameworks. DGL carefully handles the sparse and irregular graph structure, deals with graphs big and small which may change dynamically, fuses operations, and performs auto batching, all to take advantage of modern hardware. 
DGL has been tested on a variety of models, including but not limited to the popular Graph Neural Networks (GNN) and its variants, with promising speed, memory footprint and scalability.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Minjie Wang", "Lingfan Yu", "Da Zheng", "Quan Gan", "Yujie Gai", "Zihao Ye", "Mufei Li", "Jinjing Zhou", "Qi Huang", "Chao Ma", "Ziyue Huang", "Qipeng Guo", "Hao Zhang", "Haibin Lin", "Junbo Zhao", "Jinyang Li", "Alex Smola", "Zheng Zhang"], "n_citations": 258, "n_key_citations": 35, "score": 0}, {"corpus_id": 57573752, "title": "LanczosNet: Multi Scale Deep Graph Convolutional Networks", "abstract": "We propose the Lanczos network (LanczosNet) which uses the Lanczos algorithm to construct low rank approximations of the graph Laplacian for graph convolution. Relying on the tridiagonal decomposition of the Lanczos algorithm, we not only efficiently exploit multi scale information via fast approximated computation of matrix power but also design learnable spectral filters. Being fully differentiable, LanczosNet facilitates both graph kernel learning as well as learning node embeddings. We show the connection between our LanczosNet and graph based manifold learning methods, especially the diffusion maps. We benchmark our model against several recent deep graph networks on citation networks and QM8 quantum chemistry dataset. Experimental results show that our model achieves the state of the art performance in most tasks. Code is released at: \\url{this https URL}", "venue": "ICLR", "year": 2019.0, "author_names": ["Renjie Liao", "Zhizhen Zhao", "Raquel Urtasun", "Richard S Zemel"], "n_citations": 105, "n_key_citations": 12, "score": 0}, {"corpus_id": 90263007, "title": "Learning Combinatorial Embedding Networks for Deep Graph Matching", "abstract": "Graph matching refers to finding node correspondence between graphs, such that the corresponding node and edge's affinity can be maximized. In addition with its NP completeness nature, another important challenge is effective modeling of the node wise and structure wise affinity across graphs and the resulting objective, to guide the matching procedure effectively finding the true matching against noises. To this end, this paper devises an end to end differentiable deep network pipeline to learn the affinity for graph matching. It involves a supervised permutation loss regarding with node correspondence to capture the combinatorial nature for graph matching. Meanwhile deep graph embedding models are adopted to parameterize both intra graph and cross graph affinity functions, instead of the traditional shallow and simple parametric forms e.g. a Gaussian kernel. The embedding can also effectively capture the higher order structure beyond second order edges. The permutation loss model is agnostic to the number of nodes, and the embedding model is shared among nodes such that the network allows for varying numbers of nodes in graphs for training and inference. Moreover, our network is class agnostic with some generalization capability across different categories. All these features are welcomed for real world applications. 
Experiments show its superiority against state of the art graph matching learning methods.", "venue": "2019 IEEE/CVF International Conference on Computer Vision (ICCV)", "year": 2019.0, "author_names": ["Runzhong Wang", "Junchi Yan", "Xiaokang Yang"], "n_citations": 74, "n_key_citations": 18, "score": 0}, {"corpus_id": 174799303, "title": "Break the Ceiling: Stronger Multi scale Deep Graph Convolutional Networks", "abstract": "Recently, neural network based approaches have achieved significant improvement for solving large, complex, graph structured problems. However, their bottlenecks still need to be addressed, and the advantages of multi scale information and deep architectures have not been sufficiently exploited. In this paper, we theoretically analyze how existing Graph Convolutional Networks (GCNs) have limited expressive power due to the constraint of the activation functions and their architectures. We generalize spectral graph convolution and deep GCN in block Krylov subspace forms and devise two architectures, both with the potential to be scaled deeper but each making use of the multi scale information in different ways. We further show that the equivalence of these two architectures can be established under certain conditions. On several node classification tasks, with or without the help of validation, the two new architectures achieve better performance compared to many state of the art methods.", "venue": "NeurIPS", "year": 2019.0, "author_names": ["Sitao Luan", "Mingde Zhao", "Xiao-Wen Chang", "Doina Precup"], "n_citations": 55, "n_key_citations": 4, "score": 0}, {"corpus_id": 131777802, "title": "PAN: Path Integral Based Convolution for Deep Graph Neural Networks", "abstract": "Convolution operations designed for graph structured data usually utilize the graph Laplacian, which can be seen as message passing between the adjacent neighbors through a generic random walk. In this paper, we propose PAN, a new graph convolution framework that involves every path linking the message sender and receiver with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. PAN generalizes the graph Laplacian to a new transition matrix we call \\emph{maximal entropy transition} (MET) matrix derived from a path integral formalism. Most previous graph convolutional network architectures can be adapted to our framework, and many variations and derivatives based on the path integral idea can be developed. Experimental results show that the path integral based graph neural networks have great learnability and fast convergence rate, and achieve state of the art performance on benchmark tasks.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Zheng Ma", "Ming Li", "Yuguang Wang"], "n_citations": 14, "n_key_citations": 2, "score": 0}]} -{"query": "Kinematic and dynamic vehicle models for autonomous driving control design", "session_id": 691423168885503, "user_id": 1397749986796135, "candidates": [{"corpus_id": 207012925, "title": "Kinematic and dynamic vehicle models for autonomous driving control design", "abstract": "We study the use of kinematic and dynamic vehicle models for model based control design used in autonomous driving. In particular, we analyze the statistics of the forecast error of these two models by using experimental data. In addition, we study the effect of discretization on forecast error. 
We use the results of the first part to motivate the design of a controller for an autonomous vehicle using model predictive control (MPC) and a simple kinematic bicycle model. The proposed approach is less computationally expensive than existing methods which use vehicle tire models. Moreover it can be implemented at low vehicle speeds where tire models become singular. Experimental results show the effectiveness of the proposed approach at various speeds on windy roads.", "venue": "2015 IEEE Intelligent Vehicles Symposium (IV)", "year": 2015.0, "author_names": ["Jason Kong", "Mark Pfeiffer", "Georg Schildbach", "Francesco Borrelli"], "n_citations": 322, "n_key_citations": 16, "score": 1}, {"corpus_id": 231735584, "title": "A Path Planning and Tracking Control for Autonomous Vehicle With Obstacle Avoidance", "abstract": "This paper presents a path planning and tracking framework to implement obstacle avoidance for an autonomous car. The safe driving area model, which is made up of a series of nodes, designed by using the longitudinal and lateral motion relationship of the vehicle, is proposed to describe the location of the automobile with sideslip constraint in driving and the position of the obstacles on the road, and a Q learning algorithm is used to learn the optimal strategy on each node and the optimal path is obtained under the predefined rules. In order to execute precise path tracking control, a linear output regulation method is introduced to design the tracking controller based on both the kinematic and dynamic vehicle models. CarSim simulations are conducted with different scenarios and the effectiveness of the proposed framework is demonstrated.", "venue": "2020 Chinese Automation Congress (CAC)", "year": 2020.0, "author_names": ["Xin Wang", "Xinghu Yu", "Weichao Sun"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 3863565, "title": "Fuzzy model based H∞ dynamic output feedback control with feedforward for autonomous vehicle path tracking", "abstract": "In this paper, the problem of H∞ dynamic output feedback control with feedforward is presented for path tracking in an autonomous vehicle. To achieve the desired performance during high speed driving, both the kinematic and dynamic models of the vehicle are considered. Within the fuzzy model based framework, an approach to the fuzzy dynamic output feedback controller design is proposed. The desired yaw rate of the vehicle is regarded as the external disturbance of the vehicle lateral dynamics, which are norm bounded uncertainties and can be approximated by the curvature of the reference path. By adopting the dynamic model in terms of error with respect to centerline of the reference road and Lyapunov stability theory, a sufficient condition to derive the fuzzy H∞ dynamic output feedback controller is developed. Moreover, to further improve the performance of the overall closed loop system, the feedforward loop is introduced by estimating the desired steering angle under certain vehicle velocity. 
The effectiveness of the proposed approach is demonstrated by the co simulation between Matlab/Simulink and Carsim.", "venue": "2017 International Conference on Fuzzy Theory and Its Applications (iFUZZY)", "year": 2017.0, "author_names": ["Hong Sun", "Changzhu Zhang", "Guangyong An", "Qijun J Chen", "Chengju Liu"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 196152875, "title": "Takagi Sugeno Fault Tolerant Control of an Autonomous Vehicle", "abstract": "This work proposes a solution for the longitudinal and lateral control problem of urban autonomous vehicles using a gain scheduling Takagi Sugeno (TS) control approach. Using the kinematic and dynamic vehicle models, a TS representation is adopted and a cascade control methodology is proposed for controlling both vehicle behaviours. In particular, for the control design, the use of both models separately will lead to solving two TS LMI LQR problems. Furthermore, to achieve the desired levels of performance, an approach based on cascade design of the kinematic and dynamic controllers has been proposed. This cascade control scheme is based on the idea that the dynamic closed loop behaviour is designed to be faster than the kinematic closed loop one. The obtained gain scheduling TS control approach, jointly with a trajectory generation module, has presented suitable results in a simulated city driving scenario.", "venue": "", "year": 2018.0, "author_names": ["Konstantinos Spanos"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 214248347, "title": "A simple and reliable technique to design kinematic based sideslip estimators", "abstract": "Abstract This paper proposes two novel vehicle sideslip estimators, that aim at achieving ease of implementation and tuning, low computational cost and robustness, using only the most common automotive measurements, like vehicle position, acceleration and rotational velocity. The two estimators are only based on the unicycle kinematic model, thus they do not require any knowledge of uncertain or time varying parameters, like vehicle parameters, or of road conditions, as it usually happens when dynamic models are adopted, and they have been derived by recasting an estimation problem into a linear control problem. Different experiments, ranging from standard driving manoeuvres to drifting driving and autonomous driving, have been performed to demonstrate the effectiveness of the proposal even in particularly critical scenarios, like driving at the limits of vehicle's handling. A comparison with a state of the art sideslip estimator, using simulation and experimental data, is presented, as well.", "venue": "Control Engineering Practice", "year": 2020.0, "author_names": ["Luca Bascetta", "Marco Baur", "Gianni Ferretti"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 44671726, "title": "Gain Scheduling LPV Control Scheme for the Autonomous Guidance Problem using a Dynamic Modelling Approach", "abstract": "This work proposes a solution for the longitudinal and lateral control problem of urban autonomous vehicles using a gain scheduling LPV control approach. Using the kinematic and dynamic vehicle models, a linear parameter varying (LPV) representation is adopted and a cascade control methodology is proposed for controlling both vehicle behaviours. In particular, for the control design, the use of both models separately leads to solving two LPV LMI LQR problems. 
Furthermore, to achieve the desired levels of performance, an approach based on cascade design of the kinematic and dynamic controllers has been proposed. This cascade control scheme is based on the idea that the dynamic closed loop behaviour is designed to be faster than the kinematic closed loop one. The obtained gain scheduling LPV control approach, jointly with a trajectory generation module, has presented suitable results in a simulated city driving scenario.", "venue": "ArXiv", "year": 2017.0, "author_names": ["Eugenio Alcala", "Vicenc Puig", "Joseba Quevedo", "Teresa Escobet"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 221137581, "title": "Kinematic and Dynamic Controller Design for Autonomous Driving of Car like Mobile Robot", "abstract": "Autonomous driving technologies are taken into account for numerous applications such as spraying agricultural chemicals, mowing golf course grass, unmanned military operations and commercial applications such as self driving cars. This paper presents a kinematic and dynamic controller design of an autonomous mobile robot for a car like vehicle to self drive using GPS. In this paper, a simple kinematic bicycle model is introduced and Model Predictive Controller (MPC) is derived for controlling the mobile robot. Computational simulation results show that the robot can successfully navigate and drive toward a final destination reacting to the changes in the environment. Finally, this study presents experimental results to show the effectiveness of the proposed controller.", "venue": "", "year": 2020.0, "author_names": ["Ki-Won Yeom"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 225155150, "title": "Lateral Acceleration Based Vehicle Models Blending for Automated Driving Controllers", "abstract": "Model based trajectory tracking has become a widely used technique for automated driving system applications. A critical design decision is the proper selection of a vehicle model that achieves the best trade off between real time capability and robustness. Blending different types of vehicle models is a recent practice to increase the operating range of model based trajectory tracking control applications. However, current approaches focus on the use of longitudinal speed as the blending parameter, with a formal procedure to tune and select its parameters still lacking. This work presents a novel approach based on lateral accelerations, along with a formal procedure and criteria to tune and select blending parameters, for its use on model based predictive controllers for autonomous driving. An electric passenger bus traveling at different speeds over urban routes is proposed as a case study. Results demonstrate that the lateral acceleration, which is proportional to the lateral forces that differentiate kinematic and dynamic models, is a more appropriate model switching enabler than the currently used longitudinal velocity. Moreover, the advanced procedure to define blending parameters is shown to be effective. 
Finally, a smooth blending method offers better tracking results versus sudden model switching ones and non blending techniques.", "venue": "", "year": 2020.0, "author_names": ["Jose Angel Matute-Peaspan", "Mauricio Marcano", "Sergio E Diaz", "Asier Zubizarreta", "Joshue Perez"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 219461155, "title": "Integral vector field control for three dimensional path following of autonomous underwater vehicle", "abstract": "A new control method for autonomous underwater vehicle (AUV) is developed to achieve three dimensional (3D) path following under ocean current disturbances when AUV lacks lateral and vertical driving forces. To make the control system more convenient for practical application and reduce the energy consumption of AUV, the controller is designed with relative velocities. To avoid the singularity problem of curve path following, the Serret Frenet frame is introduced as virtual target and the error model is defined. The adaptive law of the virtual target and an integral vector field (IVF) guidance law are designed in the kinematic controller. The back stepping method and adaptive dynamical sliding mode control (BADSMC) technology are applied in the design of dynamic controller. The Lyapunov theory and nonlinear cascade system theory are applied to prove the closed loop stability. Finally, the performances of the IVF BADSMC and ILOS PID algorithm are compared through simulation. Simulation results demonstrate that the proposed controller can realize the path following of AUV under disturbances of ocean current and can improve the following quality.", "venue": "Journal of Marine Science and Technology", "year": 2020.0, "author_names": ["Xuliang Yao", "Xiaowei Wang"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 211045692, "title": "Path Following Based on Waypoints and Real Time Obstacle Avoidance Control of an Autonomous Underwater Vehicle", "abstract": "This paper studies three dimensional (3D) straight line path following and obstacle avoidance control for an underactuated autonomous underwater vehicle (AUV) without lateral and vertical driving forces. Firstly, the expected angular velocities are designed by using two different methods in the kinematic controller. The first one is a traditional method based on Line of sight (LOS) guidance law, and the second one is an improved method based on model predictive control (MPC) At the same time, a penalty item is designed by using the obstacle information detected by onboard sensors, which can realize the real time obstacle avoidance of the unknown obstacle. Then, in order to overcome the uncertainty of the dynamics model and the saturation of actual control input, the dynamic controller is designed by using sliding mode control (SMC) technology. Finally, in the simulation experiment, the performance of the improved control method is verified by comparison with two traditional control methods based on LOS guidance law. Since the constraint of an AUV's angular velocities are considered in MPC, simulation results show that the improved control method uses MPC, and SMC not only improves the tracking quality of the AUV when switching paths near the waypoints and realizes real time obstacle avoidance but also effectively reduces the mean square error (MSE) and saturation rate of the rudder angle. 
Therefore, this control method is more conducive to the system stability and saves energy.", "venue": "Sensors", "year": 2020.0, "author_names": ["Xuliang Yao", "Xiaowei Wang", "Feng Wang", "Le Zhang"], "n_citations": 2, "n_key_citations": 0, "score": 0}]} -{"query": "Relevant sarscov2 genome variation", "session_id": 5999113626080401, "user_id": 5406932255946813, "candidates": [{"corpus_id": 235811170, "title": "Relevant SARS CoV 2 Genome Variation through Six Months of Worldwide Monitoring", "abstract": "Real time genome monitoring of the SARS CoV 2 pandemic outbreak is of utmost importance for designing diagnostic tools, guiding antiviral treatment and vaccination strategies. In this study, we present an accurate method for temporal and geographical comparison of mutational events based on GISAID database genome sequencing. Among 42523 SARS CoV 2 genomes analyzed, we found 23202 variants compared to the reference genome. The Ti/Tv (transition/transversion) ratio was used to filter out possible false positive errors. Transition mutations generally occurred more frequently than transversions. Our clustering analysis revealed remarkable hotspot mutation patterns for SARS CoV 2. Mutations were clustered based on how their frequencies changed over time according to each geographical location. We observed some clusters showing a clear variation in mutation frequency and continuously evolving in the world. However, many mutations appeared in specific periods without a clear pattern over time. Various important nonsynonymous mutations were observed, mainly in Oceania and Asia. More than half of these mutations were observed only once. Four hotspot mutations were found in all geographical locations at least once: T265I (NSP2) P314L (NSP12) D614G (S) and Q57H (ORF3a) The current analysis of SARS CoV 2 genomes provides valuable information on the geographical and temporal mutational evolution of SARS CoV 2.", "venue": "BioMed research international", "year": 2021.0, "author_names": ["Abdelmalek Hakmaoui", "Faisal M Khan", "Abdelhamid Liacini", "Amanjot Kaur", "Yacine Berka", "Safaa Machraoui", "Hafid Soualhine", "Noureddine Berka", "Hanane Rais", "Brahim Admou"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 83462492, "title": "Fine Scale Characterization of Genomic Structural Variation in the Human Genome Reveals Adaptive and Biomedically Relevant Hotspots", "abstract": "Abstract Genomic structural variants (SVs) are distributed nonrandomly across the human genome. The \"hotspots\" of SVs have been implicated in evolutionary innovations, as well as medical conditions. However, the evolutionary and biomedical features of these hotspots remain incompletely understood. Here, we analyzed data from 2,504 genomes to construct a refined map of 1,148 SV hotspots in human genomes. We confirmed that segmental duplication related nonallelic homologous recombination is an important mechanistic driver of SV hotspot formation. However, to our surprise, we also found that a majority of SVs in hotspots do not form through such recombination based mechanisms, suggesting diverse mechanistic and selective forces shaping hotspots. Indeed, our evolutionary analyses showed that the majority of SV hotspots are within gene poor regions and evolve under relaxed negative selection or neutrality. However, we still found a small subset of SV hotspots harboring genes that are enriched for anthropologically crucial functions and evolve under geography specific and balancing adaptive forces. 
These include two independent hotspots on different chromosomes affecting alpha and beta hemoglobin gene clusters. Biomedically, we found that the SV hotspots coincide with breakpoints of clinically relevant, large de novo SVs, significantly more often than genome wide expectations. For example, we showed that the breakpoints of multiple large SVs, which lead to idiopathic short stature, coincide with SV hotspots. Therefore, the mutational instability in SV hotpots likely enables chromosomal breaks that lead to pathogenic structural variation formations. Overall, our study contributes to a better understanding of the mutational and adaptive landscape of the genome.", "venue": "Genome biology and evolution", "year": 2019.0, "author_names": ["Yen-Lung Lin", "Omer Gokcumen"], "n_citations": 19, "n_key_citations": 0, "score": 0}, {"corpus_id": 34744252, "title": "Clinically Relevant Variants Identifying, Collecting, Interpreting, and Disseminating: The 2013 Annual Scientific Meeting of the Human Genome Variation Society", "abstract": "The dramatic advances in genetic sequencing technologies used in research laboratories are now entering the clinic, and applications of whole genome and whole exome sequencing to disease diagnosis, predisposition, and treatment will soon be commonplace. However, the standards and methods for identifying clinically relevant variants are currently being debated and defined. Multiple agencies worldwide have recognized that we have reached an exciting and critical transition point into the clinic, and many important issues are being discussed that impact how genetic variation data in the clinic will be interpreted and used. The 2013 annual scientific meeting of the Human Genome Variation Society (HGVS) had as its main theme the discovery, interpretation, and dissemination of clinically relevant DNA variants. The meeting featured the continuously developing technology of databasing genetic variation and computational tools for allelic variant discovery. Attention was given to curating and integrating these data with clinical findings, including approaches to distinguish between functional alleles underlying clinical phenotypes and benign sequence variants and making data sources interoperable and functional for clinical diagnostic utility, citing examples in specific diseases.", "venue": "Human mutation", "year": 2014.0, "author_names": ["Christine M Stanley", "Shamil R Sunyaev", "Marc S Greenblatt", "William S Oetting"], "n_citations": 16, "n_key_citations": 0, "score": 0}, {"corpus_id": 90194111, "title": "Fine scale characterization of genomic structural variation in the human genome reveals adaptive and biomedically relevant hotspots", "abstract": "Genomic structural variants (SVs) are distributed nonrandomly across the human genome. These \"hotspots\" have been implicated in critical evolutionary innovations, as well as serious medical conditions. However, the evolutionary and biomedical features of these hotspots remain incompletely understood. In this study, we analyzed data from 2,504 genomes from the 1000 Genomes Project Consortium and constructed a refined map of 1,148 SV hotspots in human genomes. By studying the genomic architecture of these hotspots, we found that both nonallelic homologous recombination and non homologous mechanisms act as mechanistic drivers of SV formation. We found that the majority of SV hotspots are within gene poor regions and evolve under relaxed negative selection or neutrality. 
However, we found that a small subset of SV hotspots harbor genes that are enriched for anthropologically crucial functions, including blood oxygen transport, olfaction, synapse assembly, and antigen binding. We provide evidence that balancing selection may have maintained these SV hotspots, which include two independent hotspots on different chromosomes affecting alpha and beta hemoglobin gene clusters. Biomedically, we found that the SV hotspots coincide with breakpoints of clinically relevant, large de novo SVs, significantly more often than genome wide expectations. As an example, we showed that the breakpoints of multiple large de novo SVs, which lead to idiopathic short stature, coincide with SV hotspots. As such, the mutational instability in SV hotpots likely enables chromosomal breaks that lead to pathogenic structural variation formations. Our study contributes to a better understanding of the mutational landscape of the genome and implicates both mechanistic and adaptive forces in the formation and maintenance of SV hotspots.", "venue": "", "year": 2018.0, "author_names": ["Yen-Lung Lin", "Omer Gokcumen"], "n_citations": 4, "n_key_citations": 1, "score": 0}, {"corpus_id": 226292326, "title": "Massive gene presence absence variation shapes an open pan genome in the Mediterranean mussel", "abstract": "Background The Mediterranean mussel Mytilus galloprovincialis is an ecologically and economically relevant edible marine bivalve, highly invasive and resilient to biotic and abiotic stressors causing recurrent massive mortalities in other bivalves. Although these traits have been recently linked with the maintenance of a high genetic variation within natural populations, the factors underlying the evolutionary success of this species remain unclear. Results Here, after the assembly of a 1.28 Gb reference genome and the resequencing of 14 individuals from two independent populations, we reveal a complex pan genomic architecture in M. galloprovincialis with a core set of 45,000 genes plus a strikingly high number of dispensable genes (20,000) subject to presence absence variation, which may be entirely missing in several individuals. We show that dispensable genes are associated with hemizygous genomic regions affected by structural variants, which overall account for nearly 580 Mb of DNA sequence not included in the reference genome assembly. As such, this is the first study to report the widespread occurrence of gene presence absence variation at a whole genome scale in the animal kingdom. Conclusions Dispensable genes usually belong to young and recently expanded gene families enriched in survival functions, which might be the key to explain the resilience and invasiveness of this species. 
This unique pan genome architecture is characterized by dispensable genes in accessory genomic regions that exceed by orders of magnitude those observed in other metazoans, including humans, and closely mirror the open pan genomes found in prokaryotes and in a few non metazoan eukaryotes.", "venue": "Genome biology", "year": 2020.0, "author_names": ["Marco Gerdol", "Rebeca Moreira", "Fernando Cruz", "Jessica Gomez-Garrido", "Anna Vlasova", "Umberto Rosani", "Paola Venier", "Miguel A Naranjo-Ortiz", "Maria Murgarella", "Samuele Greco", "Pablo Balseiro", "Andre Corvelo", "Leonor Frias", "Marta Gut", "Toni Gabaldon", "Alberto Pallavicini", "Carlos Canchaya", "Beatriz Novoa", "Tyler S Alioto", "David Posada", "Antonio Figueras"], "n_citations": 27, "n_key_citations": 4, "score": 0}, {"corpus_id": 84841550, "title": "Resolving the full spectrum of human genome variation using Linked Reads.", "abstract": "Large scale population analyses coupled with advances in technology have demonstrated that the human genome is more diverse than originally thought. To date, this diversity has largely been uncovered using short read whole genome sequencing. However, these short read approaches fail to give a complete picture of a genome. They struggle to identify structural events, cannot access repetitive regions, and fail to resolve the human genome into haplotypes. Here, we describe an approach that retains long range information while maintaining the advantages of short reads. Starting from ~1 ng of high molecular weight DNA, we produce barcoded short read libraries. Novel informatic approaches allow for the barcoded short reads to be associated with their original long molecules producing a novel data type known as \"Linked Reads\" This approach allows for simultaneous detection of small and large variants from a single library. In this manuscript, we show the advantages of Linked Reads over standard short read approaches for reference based analysis. Linked Reads allow mapping to 38 Mb of sequence not accessible to short reads, adding sequence in 423 difficult to sequence genes including disease relevant genes STRC, SMN1, and SMN2 Both Linked Read whole genome and whole exome sequencing identify complex structural variations, including balanced events and single exon deletions and duplications. Further, Linked Reads extend the region of high confidence calls by 68.9 Mb. 
The data presented here show that Linked Reads provide a scalable approach for comprehensive genome analysis that is not possible using short reads alone.", "venue": "Genome research", "year": 2019.0, "author_names": ["Patrick Marks", "Sarah Garcia", "Alvaro Martinez Barrio", "Kamila Belhocine", "Jorge A Bernate", "Rajiv Bharadwaj", "Keith P Bjornson", "Claudia Catalanotti", "Josh Delaney", "Adrian N Fehr", "Ian T Fiddes", "Brendan Galvin", "Haynes Heaton", "Jill Herschleb", "Christopher M Hindson", "Esty Holt", "Cassandra B Jabara", "Susanna Jett", "Nikka Keivanfar", "Sofia Kyriazopoulou-Panagiotopoulou", "Monkol Lek", "Bill Lin", "Adam J Lowe", "Shazia S Mahamdallie", "Shamoni Maheshwari", "Tony Makarewicz", "Jamie L Marshall", "Francesca Meschi", "Christopher J O'Keefe", "Heather S Ordonez", "Pranav Patel", "Andrew Price", "Ariel E Royall", "Elise Ruark", "Sheila Seal", "Michael Schnall-Levin", "Preyas Shah", "David Stafford", "Stephen R Williams", "Indira Wu", "Andrew Wei Xu", "Nazneen Rahman", "Daniel G MacArthur", "Deanna M Church"], "n_citations": 102, "n_key_citations": 11, "score": 0}, {"corpus_id": 3303134, "title": "Demographic history and biologically relevant genetic variation of Native Mexicans inferred from whole genome sequencing", "abstract": "Understanding the genetic structure of Native American populations is important to clarify their diversity, demographic history, and to identify genetic factors relevant for biomedical traits. Here, we show a demographic history reconstruction from 12 Native American whole genomes belonging to six distinct ethnic groups representing the three main described genetic clusters of Mexico (Northern, Southern, and Maya) Effective population size estimates of all Native American groups remained below 2,000 individuals for up to 10,000 years ago. The proportion of missense variants predicted as damaging is higher for undescribed 30% than for previously reported variants 15% Several variants previously associated with biological traits are highly frequent in the Native American genomes. These findings suggest that the demographic and adaptive processes that occurred in these groups shaped their genetic architecture and could have implications in biological processes of the Native Americans and Mestizos of today.People of Mexico have diverse historical and genetic background. 
Here, Romero Hidalgo and colleagues sequence whole genomes of Native Americans of Mexico, and show demographic history and genetic variation shared among subgroups of Native Americans.", "venue": "Nature Communications", "year": 2017.0, "author_names": ["Sandra Romero-Hidalgo", "Adrian Ochoa-Leyva", "Alejandro Garciarrubio", "Victor Acuna-Alonzo", "Erika Antunez-Arguelles", "Martha Balcazar-Quintero", "Rodrigo Barquera-Lozano", "Alessandra Carnevale", "Fernanda Cornejo-Granados", "Juan Carlos Fernandez-Lopez", "Rodrigo Garcia-Herrera", "Humberto Garcia-Ortiz", "Angeles Granados-Silvestre", "Julio Granados", "Fernando Guerrero-Romero", "Enrique Hernandez-Lemus", "Paola Leon-Mimila", "Gaston Macin-Perez", "Angelica Martinez-Hernandez", "Marta Menjivar", "Enrique Morett", "Lorena Orozco", "Guadalupe Ortiz-Lopez", "Fernando Perez-Villatoro", "Javier Rivera-Morales", "Fernando Riveros-Mckay", "Marisela Villalobos-Comparan", "Hugo Villamil-Ramirez", "Teresa Villarreal-Molina", "Samuel Canizales-Quinteros", "Xavier Soberon"], "n_citations": 27, "n_key_citations": 2, "score": 0}, {"corpus_id": 30165405, "title": "DNA Damage Follows Repair Factor Depletion and Portends Genome Variation in Cancer Cells after Pore Migration", "abstract": "Migration through micron size constrictions has been seen to rupture the nucleus, release nuclear localized GFP, and cause localized accumulations of ectopic 53BP1 a DNA repair protein. Here, constricted migration of two human cancer cell types and primary mesenchymal stem cells (MSCs) increases DNA breaks throughout the nucleoplasm as assessed by endogenous damage markers and by electrophoretic \"comet\" measurements. Migration also causes multiple DNA repair proteins to segregate away from DNA, with cytoplasmic mis localization sustained for many hours as is relevant to delayed repair. Partial knockdown of repair factors that also regulate chromosome copy numbers is seen to increase DNA breaks in U2OS osteosarcoma cells without affecting migration and with nucleoplasmic patterns of damage similar to constricted migration. Such depletion also causes aberrant levels of DNA. Migration induced nuclear damage is nonetheless reversible for wild type and sub cloned U2OS cells, except for lasting genomic differences between stable clones as revealed by DNA arrays and sequencing. Gains and losses of hundreds of megabases in many chromosomes are typical of the changes and heterogeneity in bone cancer. Phenotypic differences that arise from constricted migration of U2OS clones are further illustrated by a clone with a highly elongated and stable MSC like shape that depends on microtubule assembly downstream of the transcription factor GATA4. Such changes are consistent with reversion to a more stem like state upstream of cancerous osteoblastic cells. 
Migration induced genomic instability can thus associate with heritable changes.", "venue": "Current Biology", "year": 2017.0, "author_names": ["Jerome Irianto", "Yuntao Xia", "Charlotte R Pfeifer", "Avathamsa Athirasala", "Jiazheng Ji", "Cory Alvey", "Manu Tewari", "Rachel R Bennett", "Shane M Harding", "Andrea J Liu", "Roger A Greenberg", "Dennis E Discher"], "n_citations": 176, "n_key_citations": 11, "score": 0}, {"corpus_id": 12999345, "title": "The international Genome sample resource (IGSR) A worldwide collection of genome variation incorporating the 1000 Genomes Project data", "abstract": "The International Genome Sample Resource (IGSR; http:/www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which has been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38) and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases discoverability of our data previously only browseable through our FTP site by focusing on particular samples, populations or data sets of interest.", "venue": "Nucleic Acids Res.", "year": 2017.0, "author_names": ["Laura Clarke", "Susan Fairley", "Xiangqun Zheng Bradley", "Ian Streeter", "Emily Perry", "Ernesto Lowy-Gallego", "Anne Marie Tasse", "Paul Flicek"], "n_citations": 102, "n_key_citations": 2, "score": 0}, {"corpus_id": 4522152, "title": "Genome variation across cancers scales with tissue stiffness an invasion mutation mechanism and implications for immune cell infiltration.", "abstract": "Many different types of soft and solid tumors have now been sequenced, and meta analyses suggest that genomic variation across tumors scales with the stiffness of the tumors' tissues of origin. The opinion expressed here is based on a review of current genomics data, and it considers multiple 'mechanogenomics' mechanisms to potentially explain this scaling of mutation rate with tissue stiffness. Since stiff solid tissues have higher density of fibrous collagen matrix, which should decrease tissue porosity, cancer cell proliferation could be affected and so could invasion into stiff tissues as the nucleus is squeezed sufficiently to enhance DNA damage. Diversification of a cancer genome after constricted migration is now clear. 
Understanding genome changes that give rise to neo antigens is important to selection as well as to the development of immunotherapies, and we discuss engineered monocytes/macrophages as particularly relevant to understanding infiltration into solid tumors.", "venue": "Current opinion in systems biology", "year": 2017.0, "author_names": ["Charlotte R Pfeifer", "Cory M Alvey", "Jerome Irianto", "Dennis E Discher"], "n_citations": 37, "n_key_citations": 1, "score": 0}]} -{"query": "The Composite Steel Reinforced Concrete Column Under Axial and Seismic Loads: A Review", "session_id": 3973289931889774, "user_id": 5512173545272523, "candidates": [{"corpus_id": 199654021, "title": "The Composite Steel Reinforced Concrete Column Under Axial and Seismic Loads: A Review", "abstract": "The composite steel reinforced concrete (SRC) columns with the form of partial or full encasement of the steel section in the reinforced concrete (RC) have attracted pervasive attention due to their advantages compared to the conventional RC columns. This paper aims to summarize the representative publications regarding the SRC columns. Firstly, the analytical studies of the SRC columns, including comparative studies between available codes to address the philosophy of design and the limits in the available codes of design, bond slip behavior, analytical confinement material models, and finite element analysis, are addressed. In addition, the discussion and summary of the axial behavior of the SRC columns and the important parameters affecting the axial behavior of these types of columns were included. It also attempts to cover the parameters affecting the seismic behavior of the SRC columns. Important progress has been made by the previous studies in the SRC columns under the axial load and the combination of axial and seismic loads, but they fundamentally focused on the columns with the simple arrangement of steel section, and a few attention was paid to the new type of SRC columns with rotated cross shaped steel section whose webs coincide with the diagonal lines of the columns' section. Due to the lack of study and the brittle failure of the columns with lightweight and high strength concrete, more studies should still be made to know the behavior of the SRC columns. The paper concludes with suggestions for the future studies to enhance the effectiveness of the SRC columns.", "venue": "International Journal of Steel Structures", "year": 2019.0, "author_names": ["Mostafa Mahmoud Mostafa", "Tao Wu", "Xi Liu", "Bo Fu"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 139858758, "title": "CFRP reinforced concrete filled steel tubes with timber core under axial loading", "abstract": "Abstract This paper experimentally investigates the effect of Carbon Fibre Reinforced Polymer (CFRP) on the axial capacity of concrete filled steel tubes with different timber cores. The stub columns were designed having the size of the infill and the CFRP as the main two variables. The structural response including failure, ductility, stiffness and structural efficiency were evaluated and discussed. It was found that the effect of timber core is significant in lightening the weight of the composite, although the total capacity of the composite depends upon the material properties of the core versus the grade of concrete and steel employed in the composite columns. For the structural applications, where the weight reduction and ductility are crucial, the development of this innovative composite is highly recommended. 
To quantify this, the ratio of ductility index to weight is introduced including these two crucial parameters in the seismic design. An equation is also developed to estimate the axial capacity of timber concrete filled steel tubes. Furthermore, the environmental and sustainability assessment are touched on briefly to pave a path for the future work aiming at possibly reducing the contribution of the concrete in the construction.", "venue": "Composite Structures", "year": 2019.0, "author_names": ["Amin Nabati", "Tohid Ghanbari-Ghazijahani", "Ching-Tai Ng"], "n_citations": 12, "n_key_citations": 0, "score": 0}, {"corpus_id": 62830886, "title": "The Increase of Circular Concrete Column Strength Confined with Carbon Fiber Reinforced Polymer", "abstract": "On the structural components, the use of a combination of two high performance materials is a natural thing and It can not be avoided anymore, such a combination use of concrete which is accompanied by the use of high quality steel for transversal reinforcement that useful as a confinement reinforcement on the column or using for a polymer fiber as external confinement material on the column. Column is a very important structural component in ensuring a structure is not a total failure. In designing earthquake resistant structure, the column must have sufficient strength and ductility to behave ductile to absorb and emit seismic energy. Design faults and damage caused by the earthquake cause column have to enhance axial load capacity and flexural capacity. Carbon Fiber Reinforced Polymer (CFRP) is a new composite material for strengthening method that should be considered as an alternative, because of its light weight and high tensile strength. Confinement of externally circular concrete columns makes the column more strongly to the flexural and axial load because it has a very high tensile strength. The analysis performed shows an enhancement in compressive strength caused by FRP confinement and it's also show that the increases of maximum axial load capacity are significant. Maximum axial load of confined concrete and confined concrete with tensile FRP tensile compared with unconfined concrete are 10.367% and 58.35 higher respectively and the maximum axial load of confined concrete with full FRP is 91.18 higher than the one of unconfined concrete for circular column. (c) 2012 Published by Elsevier Ltd. Selection and/or peer review under responsibility of Department of Civil Engineering, Sebelas Maret University", "venue": "", "year": 2017.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 233524873, "title": "Experimental seismic behavior of ultra high performance concrete columns with high strength steel reinforcement", "abstract": "Abstract Ultra High Performance Concrete (UHPC) is an advanced cementitious composite material that exhibits high ductility and durability with superior mechanical properties, e.g. compressive strength in excess of 22 ksi (150 MPa) and sustained post cracking tensile strength greater than 0.7 ksi (5 MPa) These characteristics have promoted UHPC to be considered for new construction applications specially for next generation bridge structures. Currently, UHPC is commonly used in limited structural applications, such as joints and connections between precast structural elements. However, there is a growing interest in larger UHPC applications and new designs of full structural members as the UHPC market keeps growing and material becomes more available. 
One potential application for structural elements is full UHPC columns, which is the focus of this study. This paper presents an experimental investigation of the structural and seismic behavior of large scale full UHPC columns reinforced with conventional and high strength steel reinforcement. The UHPC columns were tested under combined axial and quasi static cyclic lateral loading at the University of Nevada, Reno. The testing program included four columns where the seismic performance of three different columns with grade 100 reinforcement is compared with a reference UHPC column with regular grade 60 reinforcement. Thus, the varied experimental parameters included the longitudinal reinforcement ratio and grade as well as the transverse reinforcement ratio. Results demonstrate that UHPC columns have reasonable ductility and drift capacity. Moreover, higher reinforcement ratio or grade is needed to better utilize the superior mechanical properties of UHPC and recommended to inform future design.", "venue": "", "year": 2021.0, "author_names": ["Mahmoud Aboukifa", "Mohamed A Moustafa"], "n_citations": 2, "n_key_citations": 0, "score": 1}, {"corpus_id": 139013893, "title": "Experimental study on short rubberized concrete filled steel tubes under cyclic loading", "abstract": "Abstract This paper presents an experimental investigation on the cyclic behaviour of short steel tubes filled with rubberized concrete (RuC) a composite material that mixes concrete with rubber particles. A brief literature review on the cyclic behaviour of CFST columns, the mechanical properties of RuC and recent research on RuC filled steel tubes (RuCFSTs) is presented. Then, the tested specimens are characterized, comprising three cross section shapes (square, rectangular, circular) three steel grades (S235, S275, S355) three concrete mixes (0% 5% 15% of rubber particles content) and two axial load levels (10% 20% of axial plastic load) After that, the loading protocol, test rig and experimental procedure are described in detail. The experimental results are extensively discussed, focusing on the columns' cyclic strength, failure modes, hysteretic and envelope curves, as well as on the energy based ductility factors. Finally, conclusions are drawn regarding all these parameters. The most relevant achievement is that a concrete mix with a low content (5% of rubber particles leads simultaneously to the lowest decrease (5% in the cyclic strength and the highest increase (52% in the ductility of RuCFST columns, thus being the most suitable mix to use in seismic areas, where ductility and energy dissipation requirements are mandatory.", "venue": "", "year": 2016.0, "author_names": ["A P C Duarte", "Bruna Silva", "Nuno Silvestre", "Jorge de Brito", "Eduardo Julio", "Jose Miguel Castro"], "n_citations": 35, "n_key_citations": 0, "score": 0}, {"corpus_id": 137151829, "title": "Retrofit of Circular Reinforced Concrete Columns using FRP, Steel and Concrete Jackets", "abstract": "A large number of reinforced concrete (RC) buildings and bridges is deemed structurally deficient. This is either because the infrastructure continues to age and deteriorate or the strength or deformation capacity of the existing older infratructure does not meet the current code requiremetns, e.g. in high seismic regions. Thus, the need for more efficient retrofit methods has increased in recent years. Currently, there are only a few methods used for strengthening or retrofitting columns. 
Steel jackets and Fiber Reinforced Polymer (FRP) composites are the two most commonly used methods. In this study, along with these two retrofit methods, concrete jackets reinforced with spiral rebar, Welded Wire Fabric (WWF) and a new steel reinforcement termed PCS are investigated under different axial load conditions.", "venue": "", "year": 2007.0, "author_names": ["Halil Sezen", "Eric A Miller"], "n_citations": 10, "n_key_citations": 0, "score": 0}, {"corpus_id": 137218674, "title": "Seismic Performance of Aramid Fiber Square Tubed Concrete Columns with Metallic and/or Non Metallic Reinforcement", "abstract": "In prefabricated Aramid Fiber Reinforced Polymer (AFRP) tubed concrete columns with metallic and/or nonmetallic reinforcement, the AFRP square tube performs the dual functions of stay in place formwork and transverse reinforcement. Nine column specimens including a hybrid concrete column with steel bars or AFRP rods for longitudinal reinforcement, were tested under cyclic lateral forces while simultaneously subjected to a constant axial load. Double confinement due to an aramid fiber tube and steel hoops with cross ties can provide a much greater transverse confinement effect. The ply of composite laminates can be designed such that when the shear and the bond strengths exceed the flexural strengths, adequate seismic performance of the hybrid RC columns can be ensured.", "venue": "", "year": 2003.0, "author_names": ["Tetsuo Yamakawa", "Peng Zhong", "Allison A Ohama"], "n_citations": 25, "n_key_citations": 0, "score": 0}, {"corpus_id": 15304599, "title": "FRP Composites Strengthening of Concrete Columns under Various Loading Conditions", "abstract": "This paper provides a review of some of the progress in the area of fiber reinforced polymers (FRP) strengthening of columns for several loading scenarios including impact load. The addition of FRP materials to upgrade deficiencies or to strengthen structural components can save lives by preventing collapse, reduce the damage to infrastructure, and the need for their costly replacement. The retrofit with FRP materials with desirable properties provides an excellent replacement for traditional materials, such as steel jacket, to strengthen the reinforced concrete structural members. Existing studies have shown that the use of FRP materials restore or improve the column original design strength for possible axial, shear, or flexure and in some cases allow the structure to carry more load than it was designed for. The paper further concludes that there is a need for additional research for the columns under impact loading senarios. The compiled information prepares the ground work for further evaluation of FRP strengthening of columns that are deficient in design or are in serious need for repair due to additional load or deterioration.", "venue": "", "year": 2014.0, "author_names": ["Azadeh Parvin", "David Brighton"], "n_citations": 77, "n_key_citations": 2, "score": 0}, {"corpus_id": 198905666, "title": "FIRE RESISTANCE OF INTERNALLY OR EXTERNALLY CONFINED REINFORCED CONCRETE COLUMNS", "abstract": "The aim of this investigation is firstly to evaluate the different methods used for confining the reinforced concrete (R.C) columns either internally or externally. 
Secondly, the effect of overheating on the performance of confining methods is studied using the computer program \"ANSYS 5.4\" Beside the traditional transverse steel ties, the internal confinement was satisfied by steel fibers or a cage of expanded metal mesh inside the ties, while external confinement was achieved by wrapping the studied columns with ferrocement layers or GFRP sheets. Six R. C columns were prepared, namely, the control column reinforced traditionally with transverse ties only, two columns containing 1% and 2% steel fibers, one column reinforced additionally with a cage of expanded metal mesh, two columns wrapped with either ferrocement laminates or glass fiber reinforced plastics (GFRP) The columns were tested under axial loads to evaluate the effect of the different confining methods on the ultimate capacity and ductility. It was found that adding 2% steel fibers or reinforcing the column with a cage of expanded metal mesh inside the ties gave almost similar results (26% increase in the ultimate capacity compared with that of the control column) Despite that the ultimate capacity of the column wrapped with GFRP was the highest among the studied columns (37% increase in the ultimate capacity) its ductility was the lowest. The parametric study using ANSYS 5.4 showed that the R.C columns containing steel fibers were less affected by fire than the other columns. It was also found that the ultimate capacity of R.C columns wrapped with GFRP was reduced by fire to a high degree (approximately 53% reduction in the ultimate capacity) \" \" \" ,5.4 ANSYS /0 1 2 1 3 5 6 7 8 8 6 3 2 7 59 \" 2 2A 2+ *2 6 C 6 8 A 5 3 7 E 1F 5 3 GFRP E 3 6 $F 6 G 2+ H 3( 3 3 C 6 I 6 3 GFRP 6 I 6 3 \" 5C 3 A 7 \" \" 2 2 2 E 3 $F 3 8 $F 6 \" 3 K/ LC 1 8 A 3 8 H I $F 6 \" M C' *2 GFRP \" G ,3 1 8: (6 C \" 1 A \" 6 5 7 1 Associate Professor, Civil Engineering Department, Banha University 2 Associate Professor, Civil Engineering Department, Cairo University, Fayoum Branch INTRODUCTION The key factors for resistance of reinforced concrete (R.C. columns to gravity and lateral loads are the ultimate capacity and ductility of such columns. Satisfying these key factors were normally achieved by proper internal confinement of R.C. columns using the traditional tie \"transverse\" reinforcement. Recently, internal confinement using short steel fibers or a cage of wire mesh [1 and 2] and external confinement using wire mesh imbedded in mortar (ferrocement) or wraps of advanced composite sheets [3 and 4] greatly improved the performance of R.C columns, in buildings under gravity or seismic loads. Harajli and Rteil [1] reported that internal confinement using steel fibers in the columns improved performance similar to external confinement with carbon fiber sheets. Razvi and Saatcioglu [2] Furlong et al. [5] Mau et al. [6] and Aikhionbare and Tabsh [7] found that the columns internally confined by welded wire fabric inside or outside the transverse ties showed a significant improvement in strength and ductility compared with those designed according to ACI Building Code [8] with closely spaced ties and longitudinal reinforcement only. Saatcioglu and Grira [9] found that welded grids offer an economic alternative to conventional ties with reduced construction time, especially for earthquake resistant construction where the tie details may be prohibitively complex. Fahmy et al [3] studied the external wrapping of ferrocement laminates in repairing of reinforced concrete. 
It was found that the results of all repaired specimens demonstrated better behavior and load carrying capacity compared to their original behavior. Saadatmanesh [4] found that the stress strain models for concrete confined with composite straps indicate significant increases in compressive strength and strain at failure when compared with the stress strain behavior of unconfined concrete. Naguib and Mirmiran [10] studied the effect of lateral confinement on the behavior of the concrete columns. Harajli and Rteil [1] studied the seismic performance of concrete columns reinforced with ordinary transverse steel and those confined externally with carbon fiber reinforced polymer (CFRP) sheets. They found that confinement with the traditional transverse steel enhanced the seismic behavior and increased the energy absorption capacities of the columns but not as effective as confinement externally with CFRP. The different techniques for internal or external confinement of R.C columns are affected by the elevated temperature to different degrees. Poon and Lam [11] reported that the addition of steel fibers (internal confinement) is more effective in minimizing the degradation of compressive strength for the concrete after exposure to the elevated temperatures compared to traditional reinforcement using transverse ties only. In addition, steel fiber reinforced concretes showed the highest energy absorption capacity after the high temperature exposure, although they suffered a quick loss of this capacity. Very little information is available for the effect of fire resistance on the wire mesh used as additional reinforcement in R. C. columns. However, it was reported that expanded metal reinforcement is better than welded wire reinforcement in fire resistance because the welds at the wire junctions melt at a temperature much smaller than that of steel wires [12] ACI committee 549R 97 [12] stated the potentially poor fire resistance of ferrocement sections because of the inherent thinness of its structural forms and the abnormally low cover to the reinforcement. This is also applicable for the use of ferrocement as external confining tool. However, Naaman [13] proofed that ferrocement compares very favorably with GFRP when exposed to fire. Bisby et al. [14] reported that although the structural effectiveness of FRP materials can be maintained during fire, the fire behavior of FRP wrapped columns can be dramatically improved by providing supplemental insulation for the FRP. The objective of this research is firstly to evaluate the different methods used in confining square R.C columns, internally using expanded metal mesh or steel fibers, and externally using ferrocement laminates or GFRP, in improving the behavior of concrete columns compared with traditional reinforcement with transverse ties. Secondly, the effect of fire on the performance of the confining methods was conducted using the ready package computer program \"ANSYS 5.4\" [15] EXPERIMENTAL PROGRAM Test Specimens Six R. C columns were prepared, namely, the control column reinforced traditionally with transverse ties only, two columns containing 1% and 2% steel fibers, one column reinforced additionally with a cage of expanded steel mesh (ESM) two columns wrapped with either ferrocement laminates or glass fiber reinforced plastics (GFRP) (see Figure 1) The columns were tested under axial loads concerning both strength and behavior to evaluate the effect of the different confining methods on the ultimate capacity and ductility. 
All columns were square in shape of height \"h\" 100 cm and width \"b\" 15 cm in order to achieve height to width ratio \" h/b\" 6.6 to avoid buckling. The details of the tested specimens are listed in Table 1. Materials and Mix Design All concrete constituents used conformed to the relevant Egyptian Standard Specifications [16] Sand and coarse aggregate (basalt) used were from a local pit. Two different size distributions of the basalt (14 and 5mm) were blended to obtain a uniform distribution. A superplasticizer based on polynaphthalene sulphate was used to improve the workability of the concrete mix. The actual cube compressive strengths for different specimens are given in Table 1. Table 2 provides the mix proportions. The steel used for the longitudinal reinforcement consisted of four 10 mm diameter high grade steel (reinforcement ratio of 2.8% The characteristic yield strength for steel used was 360 MPa. Column tie reinforcement was fabricated from 6 mm mild steel round bars, spaced at 20 cm, of characteristic yield strength 240 MPa. Hooked end steel fibers of varying amounts, having yield strength of 400 MPa, were added in the mixes of columns with steel fibers. The aspect ratio of fibers was constant (Lf Df 60 mm /1 mm 60) Expanded steel mesh (ESM) used in this investigation was expanded metal lath conforming to ACI Committee 549, 1R 88 [17] The mesh has a diamond shape of wire diameter equals 1.5 mm and the yield strength was considered 260 MPa [17] For columns wrapped with GFRP, unidirectional E glass composite (resin fiber) tapes were used to wrap concrete columns. The properties of GFRP used were given by the manufacturer as follows; tensile strength =1103 MPa and modulus of elasticity 48.2 Gpa. Preparation of the Test Specimens All the column specimens including the control specimen, C1, were reinforced with four corner bars of diameter 10 mm as longitudinal reinforcement and transverse ties of 6 mm diameter spaced at 20 cm along the column. Electrical strain gauges were fixed on the longitudinal reinforcement to record the strains. For column specimens, C2 and C3, hooked end steel fibers of volume content 1 and 2% respectively, (see Table 1) were added during mixing of concrete. Column specimen, C4, was additionally reinforced by a cage of ESM of volume fraction, vf 2.2% in the core inside the transverse ties. Column specimens, C5 and C6, were wrapped after casting and curing with one layer of ferrocement (a cage of ESM imbedded in mortar around the column) or a layer of GFRP imbedded in a resin, respectively.", "venue": "", "year": 2005.0, "author_names": ["Hanaa I El-Sayad", "A A Shaheen"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 55737824, "title": "RESPONSE OF REPAIRED FUSE BEAMS TO DYNAMIC TESTING", "abstract": "Controlled energy distribution is desirable for structures subjected to severe earthquakes. EC8[1] prescribes (for steel structures) that reduced beam sections may behave like a fuse that protects beam to column connections against early fracture (cl. B.S.3. provided that they can develop the minimum rotations specified. Unlike common steel structures, innovative types of seismic resistant steel frames have been proposed, with dissipative fuses, where only damage will occur. Fuses are designed to be easily replaced. They usually consist of steel hollow sections. 
In the present paper, used hollow beams, that had suffered strength degradation of more than 50% were filled with cement based repair mortar (fcd =35 MPa) forming concrete filled composite beams (CFCBs) and were retested, without any other kind of repair. CFCBs are reported to have demonstrated higher axial load capacity, better ductility performance, larger energy absorption capacity and lower strength degradation than conventional reinforced concrete and steel hollow section columns. This became apparent by the results or the tests achieved by the experiments; bearing capacity of fuse beam was practically restored to its initial value when the damaged side of the initial beam was subjected to tension, while it increased to about 2.5 times its initial value, when the damaged side is subjected to compression. Increase of energy dissipation per loading cycle was also remarkable (increase about 150% Calculation of the stiffness of the fuse beam is performed, as it varies with the imposed displacement. Its effect on the eigenperiod of the main structure is discussed. The deformation limits that the cement based repair mortar has reached are calculated. Ideas for further research on the subject are proposed. 4270 Available online at www.eccomasproceedia.org Eccomas Proceedia COMPDYN (2017) 4270 4276 (c) 2017 The Authors. Published by Eccomas Proceedia. Peer review under responsibility of the organizing committee of COMPDYN 2017. doi: 10.7712/120117.5722.18233 Emmanouil Vougioukas, Stella Avgerinou and Konstantinos Theocharis", "venue": "", "year": 2017.0, "author_names": ["Emmanouil Vougioukas", "Stella Avgerinou", "Konstantinos Theocharis"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Understanding Me: Lectures and Interviews", "session_id": 5829904072194407, "user_id": 3776863851699000, "candidates": [{"corpus_id": 142787329, "title": "Understanding Me: Lectures and Interviews", "abstract": "In the last twenty years of his life, Marshall McLuhan published a series of books that established his reputation as a world renowned communications theorist and the pre eminent seer of the modern age. It was McLuhan who made the distinction between \"hot\" and \"cool\" media. And it was he who coined the phrases \"the medium is the message\" and \"the global village\" and popularized other memorable terms including \"feedback\" and \"iconic.\"McLuhan was far more than a pithy phrasemaker, however. He foresaw the development of personal computers at a time when computers were huge, unwieldy machines available only to institutions. He anticipated the wide ranging effects of the Internet. And he understood, better than any of his contemporaries, the transformations that would be wrought by digital technology in particular, the globalization of communications and the instantaneous simultaneous nature of the new, electric world. In many ways, we're still catching up to him forty years after the publication of Understanding Media.In Understanding Me, Stephanie McLuhan and David Staines have brought together nineteen previously unpublished lectures and interviews either by or with Marshall McLuhan. They have in common the informality and accessibility of the spoken word. In every case, the text has been transcribed from the original audio, film, or videotape of McLuhan's actual appearances. This is not what McLuhan wrote but what he said the spoken words of a surprisingly accessible public man. He comes across as outrageous, funny, perplexing, stimulating, and provocative. 
McLuhan will never seem quite the same again.The foreword by Tom Wolfe provides a twenty first century perspective on McLuhan's life and work, and co editor David Staines's insightful afterword offers a personal account of McLuhan as teacher and friend.", "venue": "", "year": 2003.0, "author_names": ["Marshall Mcluhan", "Stephanie McLuhan", "David Staines"], "n_citations": 77, "n_key_citations": 4, "score": 1}, {"corpus_id": 7790063, "title": "Understanding me: lectures and interviews, Marshall McLuhan", "abstract": "t is a fine autumn morning in a New England September, one that breaks your heart because somehow you are sitting in a darkened classroom listening to a dean or some vice president giving a Power Point presentation about the future of your department. Putting aside your fantasy of trading your tenured position with that of your postman who was whistling to himself as he made his rounds this bright day, you become puzzled by a conundrum hidden in this spiel you're getting. The lecturer is showing the gathering a Power Point slide of a bulleted list while a paper copy of this very document has been handed to everyone in the audience, and the speaker himself (believing that a roomful of Ph.D.s can't read) is telling you exactly what's on your paper. Bemused by this double redundancy, you're shaken out of your self pity; from somewhere in your dusty late middle aged memory comes a slogan of American campus life and pop culture of the 1960s: \"The Medium is the Message.\" It all makes sense Power Point is the medium, and the message is that this hot shot manager uses the latest technology. Score one for Marshall McLuhan, the medium/message inventor. McLuhan's stock has fallen rapidly since his death in 1980. He is now almost part of that forgotten pantheon of gurus of the 1960s and 1970's: Charles Reich, Timothy Leary, Abby Hoffman et al. It was an age of gurus. Best known now for his famous slogan equating medium and message, he has a resiliency lacking in these other \"famous long ago\" heroes. In 1993, Wired magazine designated him their \"patron saint.\" Now M.I.T. Press has recently published a collection of his interviews and speeches that date from 1967 to 1979 (he died in 1980) The editors of Understanding Me are Stephanie McLuhan and David Staines. The latter is an admiring ex student of the great man while the former apparently expects us to know that she is Marshall's daughter her contribution to the book is silent on the family connection. The coyness here is annoying, but her identity is easily verified in yet another recent M.I.T. Press publication: a critical biography, Marshall McLuhan: The Medium and the Messenger, by Philip Marchand (1998) McLuhan was for many years a Professor of Literature at the University of Toronto. He also presided over a Centre for Culture and Technology at the same institution an early example of the interdisciplinary college think tank partly floated with corporate money. Unlike most scholars of literature who got their education between the wars, McLuhan turned his attention early to popular culture. An entertaining book with a sardonic take on the sex pop culture advertising nexus, The Mechanical Bride (1951) brought him some minor fame. His major works however are The Gutenberg Galaxy (1962) and Understanding Media: The Extensions of Man (1964) The main ideas from the principal books are readily gleaned from the pronouncements contained in Understanding Me. 
Resisting the temptation to use a bulleted list, we may summarize them as follows:", "venue": "IEEE Technology and Society Magazine", "year": 2006.0, "author_names": ["Stephanie McLuhan", "David Staines"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 178581908, "title": "Understanding Me: Lectures and Interviews by Marshall McLuhan", "abstract": "", "venue": "", "year": 2006.0, "author_names": ["Donald Theal"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 197735735, "title": "Chinese students' perceptions of humour in British academic lectures", "abstract": "My PhD study explores humour in British academic lectures and Chinese students' perceptions of it. The research interest was derived from my personal experience as an international student in Britain, when I repeatedly encountered occasions on which the lecturers' jokes fell flat for me. Britain is one of the most popular destinations for international students, but there are hardly any investigations into humour in academic contexts or international students' understanding of it, and none on Chinese students' problems with humour in lectures. In my study, instances of humour, referred to as 'humour episodes' (REs) were identified and analysed in a large number of lectures recorded in the British Academic Spoken English (BASE) corpus and nine academic lectures recorded by me. Some Chinese students, non Chinese students and all of the lecturers at the lectures in my corpus, commented on selected REs in interviews and group discussions. Analysis of the REs was informed by interactional sociolinguistic and pragmatic theories. Major formal, semantic, and functional properties of humour in the lectures were identified. Humour arose from the incongruous interplay between these properties. The lecturers used humour to carry out teaching tasks and interpersonal activities. Humour heightened the lecturers' stances toward their topics. These stances embodied sociocultural values. The Chinese students had evident problems comprehending their lecturers' humour. Some expressed a feeling of alienation at having to laugh with other classmates without understanding the cause. The lecturers were often unaware of the Chinese students' perceptions of their humour, and sometimes appeared to be insensitive to their negative feelings. Expression of stance in the humour was particularly problematic to the Chinese students, but they tended to consider it peripheral to the main purpose of their studies. My study has implications for Chinese students' experience in British universities, and the internationalisation of British higher education.", "venue": "", "year": 2012.0, "author_names": "", "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 207514496, "title": "A multi professional full scale simulation course in the recognition and management of deteriorating hospital patients.", "abstract": "INTRODUCTION Recognition and management of deteriorating patients is often suboptimal, resulting in adverse events that may be avoided if a unified understanding of the signs and needs of deteriorating patients is secured through the education of staff. This paper describes the planning and evaluation of a multi professional, full scale simulation based course for hospital professionals. METHODS A systematic approach to course development was used and the programme was introduced on four general wards in a university hospital. 
Experts from the wards were trained as educators and participated in the course development. A needs assessment consisting of an observational study, questionnaires and interviews resulted in the creation of learning objectives to provide the road map for content and teaching methods. A 1 day multi professional ward specific educational programme with full scale simulations, mini lectures, case discussions and practical training was planned. Course material, a manual for educators and questionnaires for evaluation of the course were developed. RESULTS A 1 day full scale simulation based educational programme was developed and 50% of the medical staff and 70% of the nursing staff on four wards were trained in a 5 month period. The course was highly rated in terms of content and teaching methods. DISCUSSION The systematic approach for developing the course resulted in a relevant, highly rated course, deeply rooted in the wards, implying the opportunity to facilitate local improvements and adjust the content to local needs. CONCLUSION The use of a systematic approach was successful in the development of this multi professional full scale simulation based educational programme, which has proven to be easily applicable and usable.", "venue": "Resuscitation", "year": 2009.0, "author_names": ["Lone Fuhrmann", "Doris Ostergaard", "Anne Lippert", "Anders Perner"], "n_citations": 40, "n_key_citations": 2, "score": 0}, {"corpus_id": 42234229, "title": "Book Review: Marshall McLuhan reconsidered: review of reprinted editions, previously unpublished work, and two tributes", "abstract": "Eric McLuhan and William Kuhns (eds) The Book of Probes. Corte Madera, CA: Gingko Press, 2003. 573 pp. ISBN 1 58423 056 8, US$39.95 (hbk) Paul Benedetti and Nancy DeHart (eds) Forward Through the Rearview Mirror: Reflections on and by Marshall McLuhan. Cambridge, MA: MIT Press, 1997. 207 pp. ISBN 0 262 52233 0, US$25.00 (pbk) Marshall McLuhan, The Mechanical Bride: Folklore of Industrial Man (with an introduction by Philip B. Meggs) Corte Madera, CA: Gingko Press, 2002. 157 pp. ISBN 1 58423 050 9, US$35.00 (hbk) Stephanie McLuhan and David Staines (eds) Understanding Me: Lectures and Interviews. Cambridge, MA: MIT Press, 2003. 317 pp. ISBN 0 262 13442 X, US$27.95 hbk. W. Terrence Gordon (ed) Understanding Media: the Extensions of Man. Corte Madera, CA: Gingko Press, 2003[1964] 611 pp. ISBN 1 58423 073 8, US$52.90 (hbk)", "venue": "New Media Soc.", "year": 2005.0, "author_names": ["Wendy Robinson"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 145175828, "title": "Original sin or saving grace? Speech in media ecology", "abstract": "Albrecht, R. (2004) Mediating the muse: A communications approach to music, media, and culture change. Creskill, NJ: Hampton Press. 450 pp. $85.00 (hardcover) $34.50 (paper) Cassidy, M. (2004) BookEnds: The changing media environment of American classrooms. Cresskill, NJ: Hampton Press. 335 pp. $65.00 (hardcover) $29.95 (paper) Levinson, P. (2004) Cellphone. New York: Palgrave MacMillan. 221 pp. $29.95 (hardcover) Heyer, P. (2003) Harold Innis. Lanham, MD: Rowman and Littlefield. 133 pp. $21.95 (paper) McLuhan, M. Carson, D. (2003) The book of probes. Madera, CA: Ginko Press. 574 pp. $39.95 (hardcover) McLuhan, M. (2003) Understanding media (critical edition with a new introduction by W. Terrence Gordon) Madera, CA: Gingko Press. 611 pp. $24.95 (hardcover) (Original work published 1964) McLuhan, S. Staines, D. (Eds. 
(2003) Understanding me: Lectures and interviews (with a foreword by Tom Wolfe) Cambridge, MA: MIT Press. 315 pp. $27.95 (hardcover)", "venue": "", "year": 2004.0, "author_names": ["Lance Haynes"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 220061384, "title": "Given that the detailed original criteria for deliberate practice have not changed, could the understanding of this complex concept have improved over time? A response to Macnamara and Hambrick (2020)", "abstract": "In their commentary, Macnamara and Hambrick (Psychol Res, 2017) accused my colleagues and me of systematically changing the definition of the concept of deliberate practice. Deliberate practice was the result of a search for characteristics of effective practice in the laboratory that was shown to improve expert professional performance in domains, such as music. In this reply, I will first describe five different criteria that defined the original concept of deliberate practice and each of them is presented with directly supporting quotes from Ericsson, Krampe, and Tesch Romer (Psychol Rev 100:396 406, 10.1037/0033 295X.87.3.215, 1993) paper. Unfortunately, Macnamara, Hambrick, and Oswald (Psychol Sci 25:1608 1618, 10.1177/0956797614535810, 2014) misinterpreted our concept of deliberate practice, and defined it much more broadly: \"as engagement in structured activities created specifically to improve performance in a domain\" (p. 914) This definition led them to include activities, such as attending lectures, studying alone by students, and group activities led by a coach, where each activity does not meet one or more of our criteria for deliberate practice. In this commentary, I will argue that Macnamara and Hambrick (2020) became aware of some of the original criteria for deliberate practice, such as the role of individualized training by a teacher, and these discoveries misled them to assume that we had changed our definition. The intended meaning of sentences that Macnamara and Hambrick (2020) had carefully selected is shown to have an appropriate interpretation in Standard English that is consistent with our original definition of deliberate practice. In conclusion, I will give a proposal for how the different perspectives can be reconciled.", "venue": "Psychological research", "year": 2020.0, "author_names": ["K Anders Ericsson"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 141509323, "title": "Brainstorming: Views and Interviews on the Mind", "abstract": "Gallagher presents a collection of dialogues between himself and a number of neuroscientists, including Michael Gazzaniga, Marc Jeannerod, and Chris Frith, on the relation between the mind and brain. I did not write this book, I constructed it. And in regard to its content, let me admit at the beginning that in this book I beg, borrow, and steal (well maybe not steal, since I have observed copyrights) as much wisdom as I can from some of the best minds of our time. These are people who think about brains and minds professionally. Although this is a book about the philosophy of mind, it is also interdisciplinary, so I have made use not only of philosophers, but also of neuropsychologists and neuroscientists, people who have gained their understanding of how brain and behavior and mental experience go together through experimentation. I've borrowed from people in person in a series of interviews, many of which have been published in the Journal of Consciousness Studies. 
I've borrowed by means of e mail exchanges that I've had with numerous people over the past several years. And of course, I've borrowed from books. This book includes interviews, but is not strictly a collection of interviews. I have mixed in explanations and descriptions that are meant to clarify and explicate the issues under discussion. More specifically, this book is intended to be an unorthodox but very accessible introduction to certain themes that cut across the philosophy of mind and psychology. This might rightfully seem a contradiction. An introduction to a certain subject matter is supposed to be orthodox, if nothing else. That is, if one intends to introduce someone to a subject matter, one normally intends to review the established and received views that define the field. So in what sense can this be at the same time an introduction and unorthodox? Well first, the genre of this book is not standard for introductory textbooks since it consists in large parts of interviews rather than straight explanatory discourses. In addition, I can honestly say that there was no preconceived plan to the book, although this does not mean that a plan did not emerge in its construction. The topics and themes that we cover have emerged from the interviews themselves. But this is also why this can be considered an introductory text. The interview style, I believe, makes the various topics and themes very accessible, in the way that conversation tends to be more accessible than formal lecture. And as in a conversation, topics tend to emerge on their own and can be deeply engaging. Furthermore, the fact that these are the topics that emerged in conversations with some of the most important researchers in the field means that we will be exploring views that are close to the cutting edge of contemporary philosophy and science. So what we find expressed here are not so much the received and established views but a set of ongoing questions and discussions that define the field. If these are the issues that the leading researchers are concerned about and find exciting, it seems appropriate to think that these are the most appropriate issues to begin with, and that these are the issues that beginning students, or even experts who are approaching these topics from different fields, might find the most interesting.", "venue": "", "year": 2008.0, "author_names": ["Shaun Gallagher"], "n_citations": 14, "n_key_citations": 2, "score": 0}, {"corpus_id": 232019367, "title": "A moment just for me parents' experiences of an intervention for person centred information in paediatric oncology.", "abstract": "PURPOSE Information can help parents of children with cancer by reducing uncertainty and giving them a sense of control in a chaotic situation. Although providing information to parents is a core activity of paediatric oncology nursing, few studies focus on interventions for informing parents. Thus, the aim of this study is to evaluate parents' experiences after participating in a person centred information intervention for parents of children with cancer. METHOD This study is part of a process evaluation of a person centred informational intervention in paediatric oncology for patients' parents. Qualitative semi structured interviews with 13 parents who had taken part in the intervention were analysed using qualitative content analysis. RESULTS An opening for healing emerged as the overarching theme, consisting of three categories. 
Gaining a deeper understanding of the entire situation describes how parents benefitted from processing current topics and moving forward by learning. Caring reflections in a safe space describes how parents appreciated having a moment just for themselves and feeling better by venting their feelings. Meeting a competent and compassionate nurse describes how parents experienced trust and being listened to. CONCLUSION Having individual information meetings integrated as a primary nursing responsibility, mediated by competent and compassionate nurses also responsible for the care of the child, could enhance person centred care and individualise parental education.", "venue": "European journal of oncology nursing the official journal of European Oncology Nursing Society", "year": 2021.0, "author_names": ["Anders Ringner", "Cecilia Olsson", "Emma Eriksson", "Ida From", "Maria Bjork"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Attention and perceptual decision making.", "session_id": 601465813738200, "user_id": 3850914259204752, "candidates": [{"corpus_id": 151456087, "title": "Attention and Perceptual Decision Making", "abstract": "Abstract Selective attention has a direct influence on perceptual decision making. This chapter reviews how attention biases or facilitates judgments of sensory stimuli by examining decision theoretic models, such as the signal detection model and sequential sampling models. These models assume that the processing order of multiple signals is invariant to attentional influence. By contrast, the relative saliency hypothesis suggests that attention affects how multiple signals are accumulated for perceptual decision making. To support this suggestion, studies using Systems Factorial Technology (SFT, Townsend Nozawa, 1995 are reviewed to examine the impact of attentional manipulations (e.g. spatial cueing, contingency, attentional instruction, payoff) on perceptual decisions in a redundant target detection task. Results highlight the flexibility of the perceptual decision mechanism, the role of top down attentional control, and conscious awareness in selecting a decision strategy to optimize detection performance. Finally, the concept of processing capacity is discussed in relation to attentional capacity.", "venue": "", "year": 2017.0, "author_names": ["Cheng-Ta Yang"], "n_citations": 5, "n_key_citations": 0, "score": 1}, {"corpus_id": 76666045, "title": "Disentangling expectation from selective attention during perceptual decision making.", "abstract": "A large body of work has investigated the effects of attention and expectation on early sensory processing to support decision making. In a recent paper published in The Journal of Neuroscience, Rungratsameetaweemana et al. (Rungratsameetaweemana N, Itthipuripat S, Salazar A, Serences JT. J Neurosci 38: 5632 5648, 2018) found that expectations driven by implicitly learned task regularities do not modulate neural markers of early visual processing. 
Here, we discuss these findings and propose several lines of follow up analyses and experiments that could expand on these findings in the broader perceptual decision making literature.", "venue": "Journal of neurophysiology", "year": 2019.0, "author_names": ["Alexander J Simon", "Jessica Schachtner", "Courtney L Gallen"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 228936564, "title": "Cue reliability modulates interdependency between space and feature based attention during perceptual decision making", "abstract": "", "venue": "", "year": 2020.0, "author_names": ["Guangsheng Liang", "Miranda Scolari"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 21989959, "title": "How attention influences perceptual decision making: Single trial EEG correlates of drift diffusion model parameters.", "abstract": "Perceptual decision making can be accounted for by drift diffusion models, a class of decision making models that assume a stochastic accumulation of evidence on each trial. Fitting response time and accuracy to a drift diffusion model produces evidence accumulation rate and non decision time parameter estimates that reflect cognitive processes. Our goal is to elucidate the effect of attention on visual decision making. In this study, we show that measures of attention obtained from simultaneous EEG recordings can explain per trial evidence accumulation rates and perceptual preprocessing times during a visual decision making task. Models assuming linear relationships between diffusion model parameters and EEG measures as external inputs were fit in a single step in a hierarchical Bayesian framework. The EEG measures were features of the evoked potential (EP) to the onset of a masking noise and the onset of a task relevant signal stimulus. Single trial evoked EEG responses, P200s to the onsets of visual noise and N200s to the onsets of visual signal, explain single trial evidence accumulation and preprocessing times. Within trial evidence accumulation variance was not found to be influenced by attention to the signal or noise. Single trial measures of attention lead to better out of sample predictions of accuracy and correct reaction time distributions for individual subjects.", "venue": "Journal of mathematical psychology", "year": 2017.0, "author_names": ["Michael D Nunez", "Joachim Vandekerckhove", "Ramesh Srinivasan"], "n_citations": 70, "n_key_citations": 7, "score": 0}, {"corpus_id": 204368620, "title": "Dissociable effects of attention and expectation on perceptual decision making", "abstract": "", "venue": "Journal of Vision", "year": 2019.0, "author_names": ["Nuttida Rungratsameetaweeman", "Sirawaj Itthipuripat", "John T Serences"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 215410426, "title": "A Causal Role for Mouse Superior Colliculus in Visual Perceptual Decision Making", "abstract": "The superior colliculus (SC) is arguably the most important visual structure in the mouse brain and is well known for its involvement in innate responses to visual threats and prey items. In other species, the SC plays a central role in voluntary as well as innate visual functions, including crucial contributions to selective attention and perceptual decision making. In the mouse, the possible role of the SC in voluntary visual choice behaviors has not been established. 
Here, we demonstrate that the mouse SC of both sexes plays a causal role in visual perceptual decision making by transiently inhibiting SC activity during an orientation change detection task. First, unilateral SC inhibition induced spatially specific deficits in detection. Hit rates were reduced, and reaction times increased for orientation changes in the contralateral but not ipsilateral visual field. Second, the deficits caused by SC inhibition were specific to a temporal epoch coincident with early visual burst responses in the SC. Inhibiting SC during this 100 ms period caused a contralateral detection deficit, whereas inhibition immediately before or after did not. Third, SC inhibition reduced visual detection sensitivity. Psychometric analysis revealed that inhibiting SC visual activity significantly increased detection thresholds for contralateral orientation changes. In addition, effects on detection thresholds and lapse rates caused by SC inhibition were larger in the presence of a competing visual stimulus, indicating a role for the mouse SC in visual target selection. Together, our results demonstrate that the mouse SC is necessary for the normal performance of voluntary visual choice behaviors. SIGNIFICANCE STATEMENT The mouse superior colliculus (SC) has become a popular model for studying the circuit organization and development of the visual system. Although the SC is a fundamental component of the visual pathways in mice, its role in visual perceptual decision making is not clear. By investigating how temporally precise SC inhibition influenced behavioral performance during a visually guided orientation change detection task, we identified a 100 ms temporal epoch of SC visual activity that is crucial for the ability of mice to detect behaviorally relevant visual changes. In addition, we found that SC inhibition also caused deficits in visual target selection. Thus, our findings highlight the importance of the SC for visual perceptual choice behavior in the mouse.", "venue": "The Journal of Neuroscience", "year": 2020.0, "author_names": ["Lupeng Wang", "Kerry McAlonan", "Sheridan Goldstein", "Charles R Gerfen", "Richard J Krauzlis"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 203454561, "title": "Evidence accumulation during perceptual decision making is sensitive to the dynamics of attentional selection", "abstract": "The ability to select and combine multiple sensory inputs in support of accurate decisions is a hallmark of adaptive behaviour. Attentional selection is often needed to prioritize stimuli that are task relevant and to attenuate potentially distracting sources of sensory information. As most studies of perceptual decision making to date have made use of task relevant stimuli only, relatively little is known about how attention modulates decision making. To address this issue, we developed a novel 'integrated' decision making task, in which participants judged the average direction of successive target motion signals while ignoring concurrent and spatially overlapping distractor motion signals. In two experiments that varied the role of attentional selection, we used linear regression to quantify the influence of target and distractor stimuli on behaviour. Using electroencephalography, we characterised the neural correlates of decision making, attentional selection and feature specific responses to target and distractor signals. 
While targets strongly influenced perceptual decisions and associated neural activity, we also found that concurrent and spatially coincident distractors exerted a measurable bias on both behaviour and brain activity. Our findings suggest that attention operates as a real time but imperfect filter during perceptual decision making by dynamically modulating the contributions of task relevant and irrelevant sensory inputs.", "venue": "NeuroImage", "year": 2020.0, "author_names": ["Dragan Rangelov", "Jason B Mattingley"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 18116447, "title": "Individual differences in attention influence perceptual decision making", "abstract": "Sequential sampling decision making models have been successful in accounting for reaction time (RT) and accuracy data in two alternative forced choice tasks. These models have been used to describe the behavior of populations of participants, and explanatory structures have been proposed to account for between individual variability in model parameters. In this study we show that individual differences in behavior from a novel perceptual decision making task can be attributed to (1) differences in evidence accumulation rates, (2) differences in variability of evidence accumulation within trials, and (3) differences in non decision times across individuals. Using electroencephalography (EEG) we demonstrate that these differences in cognitive variables, in turn, can be explained by attentional differences as measured by phase locking of steady state visual evoked potential (SSVEP) responses to the signal and noise components of the visual stimulus. Parameters of a cognitive model (a diffusion model) were obtained from accuracy and RT distributions and related to phase locking indices (PLIs) of SSVEPs with a single step in a hierarchical Bayesian framework. Participants who were able to suppress the SSVEP response to visual noise in high frequency bands were able to accumulate correct evidence faster and had shorter non decision times (preprocessing or motor response times) leading to more accurate responses and faster response times. We show that the combination of cognitive modeling and neural data in a hierarchical Bayesian framework relates physiological processes to the cognitive processes of participants, and that a model with a new (out of sample) participant's neural data can predict that participant's behavior more accurately than models without physiological data.", "venue": "Front. Psychol.", "year": 2015.0, "author_names": ["Michael D Nunez", "Ramesh Srinivasan", "Joachim Vandekerckhove"], "n_citations": 50, "n_key_citations": 2, "score": 0}, {"corpus_id": 220072185, "title": "Evidence accumulation during perceptual decision making is sensitive to the dynamics of attentional selection", "abstract": "The ability to select and combine multiple sensory inputs in support of accurate decisions is a hallmark of adaptive behaviour. Attentional selection is often needed to prioritize task relevant stimuli relative to irrelevant, potentially distracting stimuli. As most studies of perceptual decision making to date have made use of task relevant stimuli only, relatively little is known about how attention modulates decision making. To address this issue, we developed a novel 'integrated' decision making task, in which participants judged the average direction of successive target motion signals while ignoring concurrent and spatially overlapping distractor motion signals. 
In two experiments that varied the role of attentional selection, we used regression to quantify the influence of target and distractor stimuli on behaviour. Using electroencephalography, we characterised the neural correlates of decision making, attentional selection and feature specific responses to target and distractor signals. While targets strongly influenced perceptual decisions and associated neural activity, we also found that concurrent and spatially coincident distractors exerted a measurable bias on both behaviour and brain activity. Our findings suggest that attention operates as a real time but imperfect filter during perceptual decision making by dynamically modulating the contributions of task relevant and irrelevant sensory inputs.", "venue": "NeuroImage", "year": 2020.0, "author_names": ["Dragan Rangelov", "Jason B Mattingley"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 224812988, "title": "Systematic and random sources of variability in perceptual decision making: Comment on Ratcliff, Voskuilen, and McKoon (2018)", "abstract": "A key assumption of models of human cognition is that there is variability in information processing. Evidence accumulation models (EAMs) commonly assume 2 broad variabilities in information processing: within trial variability, which is thought to reflect moment to moment fluctuations in perceptual processes, and between trial variability, which is thought to reflect variability in slower changing processes like attention, or systematic variability between the stimuli on different trials. Recently, Ratcliff, Voskuilen, and McKoon (2018) claimed to \"provide direct evidence that external noise is, in fact, required to explain the data from five simple two choice decision tasks\" (p. 33) suggesting that at least some portion of the between trial variability in information processing is due to \"noise.\" However, we argue that Ratcliff et al. (2018) failed to distinguish between 2 different potential sources of between trial variability: random (i.e. \"external noise\" and systematic (e.g. item effects) Contrary to the claims of Ratcliff et al. (2018) we show that \"external noise\" is not required to explain their findings, as the same trends of data can be produced when only item effects are present. Furthermore, we contend that the concept of \"noise\" within cognitive models merely serves as a convenience parameter for sources of variability that we know exist but are unable to account for. Therefore, we question the usefulness of experiments aimed at testing the general existence of \"random\" variability and instead suggest that future research should attempt to replace the random variability terms within cognitive models with actual explanations of the process. (PsycInfo Database Record (c) 2020 APA, all rights reserved)", "venue": "Psychological review", "year": 2020.0, "author_names": ["Nathan J Evans", "Gabriel Tillman", "Eric-Jan Wagenmakers"], "n_citations": 4, "n_key_citations": 0, "score": 0}]} -{"query": "semantic segmentation 2021", "session_id": 2558961262215545, "user_id": 1832809619762987, "candidates": [{"corpus_id": 67788034, "title": "Deep Multi Modal Object Detection and Semantic Segmentation for Autonomous Driving: Datasets, Methods, and Challenges", "abstract": "Recent advancements in perception for autonomous driving are driven by deep learning. In order to achieve robust and accurate scene understanding, autonomous vehicles are usually equipped with different sensors (e.g. 
cameras, LiDARs, Radars) and multiple sensing modalities can be fused to exploit their complementary properties. In this context, many methods have been proposed for deep multi modal perception problems. However, there is no general guideline for network architecture design, and questions of \"what to fuse\" \"when to fuse\" and \"how to fuse\" remain open. This review paper attempts to systematically summarize methodologies and discuss challenges for deep multi modal object detection and semantic segmentation in autonomous driving. To this end, we first provide an overview of on board sensors on test vehicles, open datasets, and background information for object detection and semantic segmentation in autonomous driving research. We then summarize the fusion methodologies and discuss challenges and open questions. In the appendix, we provide tables that summarize topics and methods. We also provide an interactive online platform to navigate each reference: https:/boschresearch.github.io/multimodalperception/", "venue": "IEEE Transactions on Intelligent Transportation Systems", "year": 2021.0, "author_names": ["Di Feng", "Christian Haase-Schuetz", "Lars Rosenbaum", "Heinz Hertlein", "Fabian Duffhauss", "Claudius Glaser", "Werner Wiesbeck", "Klaus C J Dietmayer"], "n_citations": 161, "n_key_citations": 7, "score": 0}, {"corpus_id": 53726367, "title": "CGNet: A Light Weight Context Guided Network for Semantic Segmentation", "abstract": "The demand of applying semantic segmentation model on mobile devices has been increasing rapidly. Current state of the art networks have enormous amount of parameters hence unsuitable for mobile devices, while other small memory footprint models follow the spirit of classification network and ignore the inherent characteristic of semantic segmentation. To tackle this problem, we propose a novel Context Guided Network (CGNet) which is a light weight and efficient network for semantic segmentation. We first propose the Context Guided (CG) block, which learns the joint feature of both local feature and surrounding context effectively and efficiently, and further improves the joint feature with the global context. Based on the CG block, we develop CGNet which captures contextual information in all stages of the network. CGNet is specially tailored to exploit the inherent property of semantic segmentation and increase the segmentation accuracy. Moreover, CGNet is elaborately designed to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing light weight segmentation networks. Extensive experiments on Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post processing and multi scale testing, the proposed CGNet achieves 64.8% mean IoU on Cityscapes with less than 0.5 M parameters.", "venue": "IEEE Transactions on Image Processing", "year": 2021.0, "author_names": ["Tianyi Wu", "Sheng Tang", "Rui Zhang", "Yongdong Zhang"], "n_citations": 92, "n_key_citations": 16, "score": 0}, {"corpus_id": 201058657, "title": "Semi Supervised Semantic Segmentation With High and Low Level Consistency", "abstract": "The ability to understand visual information from limited labeled data is an important aspect of machine learning. While image level classification has been extensively studied in a semi supervised setting, dense pixel level classification with limited data has only drawn attention recently. 
In this work, we propose an approach for semi supervised semantic segmentation that learns from limited pixel wise annotated samples while exploiting additional annotation free images. The proposed approach relies on adversarial training with a feature matching loss to learn from unlabeled images. It uses two network branches that link semi supervised classification with semi supervised segmentation including self training. The dual branch approach reduces both the low level and the high level artifacts typical when training with few labels. The approach attains significant improvement over existing methods, especially when trained with very few labeled samples. On several standard benchmarks PASCAL VOC 2012, PASCAL Context, and Cityscapes the approach achieves new state of the art in semi supervised learning.", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "year": 2021.0, "author_names": ["Sudhanshu Mittal", "Maxim Tatarchenko", "Thomas Brox"], "n_citations": 72, "n_key_citations": 14, "score": 1}, {"corpus_id": 212634017, "title": "Rectifying Pseudo Label Learning via Uncertainty Estimation for Domain Adaptive Semantic Segmentation", "abstract": "This paper focuses on the unsupervised domain adaptation of transferring the knowledge from the source domain to the target domain in the context of semantic segmentation. Existing approaches usually regard the pseudo label as the ground truth to fully exploit the unlabeled target domain data. Yet the pseudo labels of the target domain data are usually predicted by the model trained on the source domain. Thus, the generated labels inevitably contain the incorrect prediction due to the discrepancy between the training domain and the test domain, which could be transferred to the final adapted model and largely compromises the training process. To overcome the problem, this paper proposes to explicitly estimate the prediction uncertainty during training to rectify the pseudo label learning for unsupervised semantic segmentation adaptation. Given the input image, the model outputs the semantic segmentation prediction as well as the uncertainty of the prediction. Specifically, we model the uncertainty via the prediction variance and involve the uncertainty into the optimization objective. To verify the effectiveness of the proposed method, we evaluate the proposed method on two prevalent synthetic to real semantic segmentation benchmarks, i.e. GTA5 \\rightarrow Cityscapes and SYNTHIA \\rightarrow Cityscapes, as well as one cross city benchmark, i.e. Cityscapes \\rightarrow Oxford RobotCar. We demonstrate through extensive experiments that the proposed approach (1) dynamically sets different confidence thresholds according to the prediction variance, (2) rectifies the learning from noisy pseudo labels, and (3) achieves significant improvements over the conventional pseudo label learning and yields competitive performance on all three benchmarks.", "venue": "Int. J. Comput. Vis.", "year": 2021.0, "author_names": ["Zhedong Zheng", "Yi Wei Yang"], "n_citations": 58, "n_key_citations": 6, "score": 0}, {"corpus_id": 232372493, "title": "Improved Scan Matching Performance in Snowy Environments Using Semantic Segmentation", "abstract": "Inclement weather conditions such as snowy environments poses a lot of challenge for autonomous driving. Because of the dynamic changes in the environment, there will be difference between the prior map obtained from a LiDAR system and current sensor data. 
To overcome this problem, in this study, we present a semantic segmentation based method to recognize the snow cover from the camera images and project the results on the LiDAR point cloud to distinguish the differences. Our results shows improved localization accuracy in snow environment.", "venue": "2021 IEEE/SICE International Symposium on System Integration (SII)", "year": 2021.0, "author_names": ["Masahiro Obuchi", "Takanori Emaru", "Ankit A Ravankar"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 227225684, "title": "Domain Adaptive Knowledge Distillation for Driving Scene Semantic Segmentation", "abstract": "Practical autonomous driving systems face two crucial challenges: memory constraints and domain gap issues. In this paper, we present a novel approach to learn domain adaptive knowledge in models with limited memory, thus bestowing the model with the ability to deal with these issues in a comprehensive manner. We term this as \"Domain Adaptive Knowledge Distillation \" and address the same in the context of unsupervised domain adaptive semantic segmentation by proposing a multi level distillation strategy to effectively distil knowledge at different levels. Further, we introduce a novel cross entropy loss that leverages pseudo labels from the teacher. These pseudo teacher labels play a multifaceted role towards: (i) knowledge distillation from the teacher network to the student network (ii) serving as a proxy for the ground truth for target domain images, where the problem is completely unsupervised. We introduce four paradigms for distilling domain adaptive knowledge and carry out extensive experiments and ablation studies on real to real as well as synthetic to real scenarios. Our experiments demonstrate the profound success of our proposed method.", "venue": "2021 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW)", "year": 2021.0, "author_names": ["Divya Kothandaraman", "Athira M Nambiar", "Anurag Mittal"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 233332334, "title": "Multi Scale Voxel Class Balanced ASPP for LIDAR Pointcloud Semantic Segmentation", "abstract": "This paper explores efficient techniques to improve PolarNet model performance to address the real time semantic segmentation of LiDAR point clouds. The core framework consists of an encoder network, Atrous spatial pyramid pooling (ASPP)/Dense Atrous spatial pyramid pooling (DenseASPP) followed by a decoder network. Encoder extracts multi scale voxel information in a top down manner while decoder fuses multiple feature maps from various scales in a bottom up manner. In between encoder and decoder block, an ASPP/DenseASPP block is inserted to enlarge receptive fields in a very dense manner. In contrast to PolarNet model, we use weighted cross entropy in conjunction with Lovasz softmax loss to improve segmentation accuracy. Also this paper accelerates training mechanism of PolarNet model by incorporating learning rate schedulers in conjunction with Adam optimizer for faster convergence with fewer epochs without degrading accuracy. 
Extensive experiments conducted on challenging SemanticKITTI dataset shows that our high resolution grid model obtains competitive state of art result of 60.6 mIOU @21fps whereas our low resolution grid model obtains 54.01 mIOU @35fps thereby balancing accuracy/speed trade off.", "venue": "2021 IEEE Winter Conference on Applications of Computer Vision Workshops (WACVW)", "year": 2021.0, "author_names": ["K S Chidanand Kumar", "Samir Al-Stouhi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 233226828, "title": "Automatic Detection of Mounting Behavior in Cattle using Semantic Segmentation and Classification", "abstract": "In cattle farming sector, the accurate detection of estrus plays a vital role because incorrect timing for artificial insemination affects the cattle business. The noticeable sign of estrus is the standing heat, where the cattle standing to be mounted by other cows for a couple of seconds. In this paper, we proposed cattle region detection using deep learning semantic segmentation model and automatic detection of mounting behavior with machine learning classification methods. Based on the conducted experiment, the results show that a mean Intersection of Union (IoU) of 98% on the validation set. The pixel wise accuracy for two classes (cattle and background) was found to be both 98% respectively. For the classification, the proposed method compares the four supervised machine learning methods which can detect with the accuracy rate of Support Vector Machine, Naive Bayes, Logistic Regression and Linear Regression are 87% 96% 90% and 80% respectively. Among them, Naive Bayes algorithm perform the best. The novelty of this work noticeably implies that deep learning semantic segmentation could be effectively employed as a preprocessing step in segmenting the cattle and background prior to using various classification models.", "venue": "2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech)", "year": 2021.0, "author_names": ["Su Myat Noe", "Thi Thi Zin", "Pyke Tin", "Ikuo Kobayashi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 231915434, "title": "Auto generated Wires Dataset for Semantic Segmentation with Domain Independence", "abstract": "In this work, we present a procedure to automatically generate an high quality training dataset of cable like objects for semantic segmentation. The proposed method is explained in detail using the recognition of electric wires as a use case. These particular objects are commonly used in an extremely wide set of industrial applications, since they are of information and communication infrastructures, they are used in construction, industrial manufacturing and power distribution. The proposed approach uses an image of the target object placed in front of a monochromatic background. By employing the chroma key technique, we can easily obtain the training masks of the target object and replace the background to produce a domain independent dataset. How to reduce the reality gap is also investigated in this work by correctly choosing the backgrounds, augmenting the foreground images exploiting masks. The produced dataset is experimentally validated by training two algorithms and testing them on a real image set. 
Moreover, they are compared to a baseline algorithm specifically designed to recognise deformable linear objects.", "venue": "2021 International Conference on Computer, Control and Robotics (ICCCR)", "year": 2021.0, "author_names": ["Riccardo Zanella", "Alessio Caporali", "Kalyan Tadaka", "Daniele De Gregorio", "Gianluca Palli"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 233196330, "title": "Semantic segmentation based on DeeplabV3+ with multiple fusions of low level features", "abstract": "For the problem that DeeplabV3+ semantic segmentation model uses downsampling operation several times in the encoder process, a large amount of object boundary information is lost, resulting in inaccurate segmentation at the object boundary location, this paper proposes a new semantic segmentation model that fuses low level features several times based on DeeplabV3+ algorithm. Firstly, we use the encoding module of DeeplabV3+ algorithm to extract the high level semantic information of the object; then we use the method of this paper to extract the low level features several times to obtain the detailed information of the object boundary; finally, we fuse the high level semantic information and the detailed information of the object to obtain the optimized semantic segmentation result. The experimental results on the current open source dataset PASCAL VOC 2012 show that compared with the DeeplabV3+ algorithm, the algorithm model proposed in this paper has better semantic segmentation results by fusing the low level features multiple times, especially at the boundary position of the object, with a pixel accuracy of 94.1% and a mean intersection over union of 77.5% The overall performance of the algorithm model proposed in this paper has reached the current leading level.", "venue": "2021 IEEE 5th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC)", "year": 2021.0, "author_names": ["Jiang Libiao", "Zhou Wenchao", "Lin Changyu"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Withdrawing from school", "session_id": 2874626252214071, "user_id": 2220919421871823, "candidates": [{"corpus_id": 1367009, "title": "Withdrawing From School", "abstract": "Research on dropping out of school has focused on characteristics of the individual or institution that correlate with the dropout decision. Many of these characteristics are nonmanipulable, and all are measured at one point in time, late in the youngster's school career. This paper describes two models for understanding dropping out as a developmental process that may begin in the earliest grades. The frustration self esteem model has been used for years in the study of juvenile delinquency; it identifies school failure as the starting point in a cycle that may culminate in the student's rejecting, or being rejected by, the school. The participation identification model focuses on students' \"involvement in schooling,\" with both behavioral and emotional components. According to this formulation, the likelihood that a youngster will successfully complete 12 years of schooling is maximized if he or she maintains multiple, expanding forms of participation in school relevant activities. The failure of a youngster to participate in school and class activities, or to develop a sense of identification with school, may have significant deleterious consequences. 
The ability to manipulate modes of participation poses promising avenues for further research as well as for intervention efforts.", "venue": "", "year": 1989.0, "author_names": ["Jeremy D Finn"], "n_citations": 2439, "n_key_citations": 336, "score": 1}, {"corpus_id": 210827446, "title": "First Year Students' Reasons for Withdrawing From College", "abstract": "First Year Students' Reasons for Withdrawing From College by Margaret Ann Nelson MBA, Long Island University, 2010 MS, Long Island University, 2007 BS, Long Island University, 2007 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Education Higher Education and Adult Learning Walden University June 2019 Abstract Retention of first year students was a problem at a private 4 year university in the Southeastern United States. The purpose of this qualitative case study was to examine the reasons entering first year students who were part of the Promise Program withdrew from the university during their first year. Tinto's model of student attrition provided the conceptual framework for the study. Research questions addressed students' rationale for selecting the school, their perspectives on the main causes of first year attrition, their expectations of campus support services, and their recommendations for how to decrease student attrition. Data were collected from semistructured interviews with 7 students from the spring 2016 and fall 2016 semesters. The interviews were transcribed and analyzed using manual coding and coding software. Findings indicated that students' sense of belonging was the most influential factor in their decision to withdraw from college. Recommendations included a training program for administrators and staff on customer service techniques. This study can bring positive social change to the profession by seeking out systemic changes to promote entering freshmen's college completion. Conclusively, the implications of positive social change is most benefical to studentsRetention of first year students was a problem at a private 4 year university in the Southeastern United States. The purpose of this qualitative case study was to examine the reasons entering first year students who were part of the Promise Program withdrew from the university during their first year. Tinto's model of student attrition provided the conceptual framework for the study. Research questions addressed students' rationale for selecting the school, their perspectives on the main causes of first year attrition, their expectations of campus support services, and their recommendations for how to decrease student attrition. Data were collected from semistructured interviews with 7 students from the spring 2016 and fall 2016 semesters. The interviews were transcribed and analyzed using manual coding and coding software. Findings indicated that students' sense of belonging was the most influential factor in their decision to withdraw from college. Recommendations included a training program for administrators and staff on customer service techniques. This study can bring positive social change to the profession by seeking out systemic changes to promote entering freshmen's college completion. Conclusively, the implications of positive social change is most benefical to students when more students are able to earn a degree, and better their livelihood. 
The university would benefit by graduating more students and the success of their college graduates could be seen as their own success of addressing student's social and academic needs. Finally, the positive social change for externalities would benefit from the investment in human beings and human capital as a critical input for change and innovations to society. First Year Students' Reasons for Withdrawing From College by Margaret Ann Nelson MBA, Long Island University, 2010 MS, Long Island University, 2007 BS, Long Island University, 2007 Doctoral Study Submitted in Partial Fulfillment of the Requirements for the Degree of Doctor of Education Higher Education and Adult Learning", "venue": "", "year": 2019.0, "author_names": ["Mary Ann Nelson"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 198828146, "title": "Withdrawing the No detention Policy: Punishing Children for the System's Failure", "abstract": "The Right to Education Act, 2009 that came into existence after a decade long struggle by civil society organisations, mandates that no children shall be detained till they complete their elementary education, that is, Class 8. However, an amendment to the Act, The Second Amendment Bill, 2017, on the Right of Children to Free and Compulsory Education, 2009, amends this provision by stating that regular examinations should be held in Class 5 and Class 8. If the child fails in the examination, s/he will be given additional instructions to take a re examination within two months and if the child again fails, then the state government will have the discretion to detain the child in the same class. There are differing views on whether children should be detained for failing examinations in elementary school. Some argue that an automatic promotion reduces incentives for children to learn and for teachers to teach. Others point out that detention demotivates children and results in increased dropouts and shifts the focus away from the systemic factors that affect learning such as the availability of trained qualified teachers, adequate infrastructure, textbooks, safety and security in schools.", "venue": "Social Change", "year": 2019.0, "author_names": ["Ambarish Kumar Rai", "S Majumder"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 145404335, "title": "Improving Public Schools Through the Dissent of Parents: Opting Out of Tests, Demanding Alternative Curricula, Invoking Parent Trigger Laws, and Withdrawing Entirely", "abstract": "Some parents and caregivers, frustrated by low academic performance of their local school, emphasis on testing, or the content of the curriculum, have worked independently or formed parent groups to speak out and demand improvements. Parents and families enact solutions such as opting out of tests, developing alternative curricula, invoking parent trigger laws, and withdrawing their children from public schools. When engaged well, these outcries of family dissent can be used to improve public schools and to keep them truly public. In this article, I define good dissent and show how it keeps schools healthy. 
I examine the actions, publications, and web sites of major parent organizations and individual parents to argue that some parents are demonstrating good and admirable dissent that can help improve school quality, parent satisfaction with schools, and student experiences in them; others not only fail to employ good dissent, but may actually be hurting the viability of our public schools and the type of graduate they produce.", "venue": "", "year": 2015.0, "author_names": ["Sarah Marie Stitzlein"], "n_citations": 13, "n_key_citations": 3, "score": 0}, {"corpus_id": 233924766, "title": "Mobile Money and School Participation: Evidence from Africa", "abstract": "This paper shows that mobile money technology an electronic wallet service that allows users to deposit, transfer, and receive money using their mobile phones is positively correlated with increased school participation of children in school age. By using data from 4 African countries, we argue that, by reducing transaction costs, and by making it easier and less expensive to receive remittances, mobile money reduces the need for coping strategies that are detrimental to child development, such as withdrawing children from school and sending them to work. We find that mobile money increases the chances of children attending school. This finding is robust to different empirical models. In a nutshell, our results show that 1 million children could start attending school in low income countries if mobile money was available to all.", "venue": "", "year": 2021.0, "author_names": ["Valentina Rotondi", "Francesco C Billari"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 96475830, "title": "Stereotype Threat, Identification with Academics, and Withdrawal from School: Why the most successful students of colour might be most likely to withdraw", "abstract": "Claude Steele's stereotype threat hypothesis posits that when there are negative stereotypes about the intellectual capacity of certain (stigmatised) groups, members of that group suffer aversive consequences; group members who are most strongly identified with the stigmatised domain in question (e.g. intellectual or academic ability) are those most likely to suffer the effects of stereotype threat. In education, it is widely held that personal investment in schooling should lead to more positive outcomes. However, highly invested individuals will most keenly experience the negative effects of stigma. Thus those most at risk for withdrawing from school among students of colour (who suffer a stigma of intellectual inferiority) could be those most invested in schooling. This hypothesis was tested by measuring identification with academics among a group of incoming students at a racially diverse inner city high school in the Midwest USA. Regardless of race, the students who most strongly identified with academics (they valued and considered academics central to the self) had higher GPAs, lower levels of absenteeism, and fewer behavioural referrals. However, among students of colour the most strongly identified were more likely to withdraw, while identification with academics did not significantly influence the withdrawal of Caucasian students. 
These results highlight the importance of providing a supportive environment that diffuses stereotype threat for all students, even those who appear to be academically successful.", "venue": "", "year": 2006.0, "author_names": ["Jason W Osborne", "Christopher O Walker"], "n_citations": 175, "n_key_citations": 13, "score": 0}, {"corpus_id": 150195345, "title": "Social Interactive Behavioral Problems of Social Studies Students of Cabiao National High School", "abstract": "Student misbehavior is a problem affecting schools across the nation and around the world. The study focused on students' misbehavior in the context of personal, emotional, social, spiritual, economic and psychological influences and the degree of seriousness of aggressive, delinquent, withdrawing and non compliant behavior of Social Studies students of Cabiao National High School, Cabiao, Nueva Ecija. Descriptive research was employed using questionnaire, personal interviews and observation in gathering data. It employed both quantitative and qualitative processes. Samples were composed of students and teachers drawn from 6,730 total populations of students and 20 Social Studies teachers. The teachers assess the degree of seriousness of the different behavioral problems manifested by the student. The students manifest serious or intense aggressive, delinquent, withdrawing and non compliant behaviors. There is no significant difference between factors that contribute to the behavioral problems of Social Studies students in personal, emotional, social, spiritual, economic and psychological factors.", "venue": "", "year": 2018.0, "author_names": ["Bernardo Asperin Zabala", "Claire Ann Zabala Penol"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 165511141, "title": "WWII POW gets surprise diploma 75 years after leaving high school to join Army", "abstract": "Seventy five years after withdrawing from high school to serve the U.S. as an Army soldier during World War II, Vito Trause had one last (surprise) mission to complete.", "venue": "", "year": 2018.0, "author_names": ["Nicole Darrah"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 153963317, "title": "Withdrawing federal funding for public schooling would exacerbate two tiered system", "abstract": "Fairfax press has reported the federal government's green paper on reforming the federation has suggested four possible scenarios for school funding: 1.Give states and territories complete funding responsibility 2.The federal government to fund independent schools, while states and territories fully fund public schools 3.Reduce overall federal involvement in schools 4.The federal government to become the major funder of schools. Given there is nearly a A$30 billion shortfall in school funding from 2018 in this year's federal budget, it can be assumed that number 4 is the most unlikely scenario. Given the Coalition's commitment to small central government, it is most likely they would support divesting in school funding, pushing back onto the states and territories.", "venue": "", "year": 2015.0, "author_names": ["Stewart Riddle"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 71874833, "title": "Medical Residents' Perception and Emotional Stress on Withdrawing Life Sustaining Therapy", "abstract": "Background: In order to promote the dignity of terminal patients, and improve end of life care (EOL care) in Korea, consensus guidelines to the withdrawal of life sustaining therapies (LST) were published in October, 2009. 
The aim of this study was to assess the current perception of the guideline among internal medicine residents and to identify barriers to the application of the guidelines. Methods: The study was designed prospectively on the basis of data from e mail survey. We surveyed 98 medical residents working in 19 medical centers. Results: 75.5% of respondents agreed with withdrawing (WD) of LST and 33.3% (33/98) of respondents were unaware of the guideline. Although 58.1% of all respondents had taken an EOL care class in medical school, about 30% of residents did feel uncomfortable with communicating with patients and surrogates. The most important obstacle for decision of WD of LST was the resident's psychological stress. 39.8% of medical residents felt guilty or failure after a patient's death, and 41.8% became often or always depressed in a patient's dying. Conclusions: In order to protect and enhance the dignity and autonomy of terminal patients, the improvement of the medical training program in the hospitals and the more concern of educational leaders are urgent.", "venue": "", "year": 2012.0, "author_names": ["Jae Young Moon", "Hee Young Lee", "Chae Man Lim", "Younsuck Koh"], "n_citations": 5, "n_key_citations": 0, "score": 0}]} -{"query": "Deep continuous fusion for multi-sensor 3d object detection", "session_id": 5861573354150662, "user_id": 5391699474499438, "candidates": [{"corpus_id": 52211898, "title": "Deep Continuous Fusion for Multi sensor 3D Object Detection", "abstract": "In this paper, we propose a novel 3D object detector that can exploit both LIDAR as well as cameras to perform very accurate localization. Towards this goal, we design an end to end learnable architecture that exploits continuous convolutions to fuse image and LIDAR feature maps at different levels of resolution. Our proposed continuous fusion layer encode both discrete state image features as well as continuous geometric information. This enables us to design a novel, reliable and efficient end to end learnable 3D object detector based on multiple sensors. Our experimental evaluation on both KITTI as well as a large scale 3D object detection benchmark shows significant improvements over the state of the art.", "venue": "ECCV", "year": 2018.0, "author_names": ["Ming Liang", "Binh Yang", "Shenlong Wang", "Raquel Urtasun"], "n_citations": 346, "n_key_citations": 32, "score": 1}, {"corpus_id": 198146138, "title": "MCF3D: Multi Stage Complementary Fusion for Multi Sensor 3D Object Detection", "abstract": "We present MCF3D, a multi stage complementary fusion three dimensional (3D) object detection network for autonomous driving, robot navigation, and virtual reality. This is an end to end learnable architecture, which takes both LIDAR point clouds and RGB images as inputs and utilizes a 3D region proposal subnet and second stage detector(s) subnet to achieve high precision oriented 3D bounding box prediction. To fully exploit the strength of multimodal information, we design a series of fine and targeted fusion methods based on the attention mechanism and prior knowledge, including \"pre fusion,\" \"anchor fusion,\" and \"proposal fusion.\" Our proposed RGB Intensity form encodes the reflection intensity onto the input image to strengthen the representational power. Our designed proposal element attention module allows the network to be guided to focus more on efficient and critical information with negligible overheads. 
In addition, we propose a cascade enhanced detector for small classes, which is more selective against close false positives. The experiments on the challenging KITTI benchmark show that our MCF3D method produces state of the art results while running in near real time with a low memory footprint.", "venue": "IEEE Access", "year": 2019.0, "author_names": ["Jiarong Wang", "Ming Zhu", "Deyao Sun", "Bo Wang", "Wen Long Gao", "Hua Wei"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 208006295, "title": "PI RCNN: An Efficient Multi sensor 3D Object Detector with Point based Attentive Cont conv Fusion Module", "abstract": "LIDAR point clouds and RGB images are both extremely essential for 3D object detection. So many state of the art 3D detection algorithms dedicate in fusing these two types of data effectively. However, their fusion methods based on Birds Eye View (BEV) or voxel format are not accurate. In this paper, we propose a novel fusion approach named Point based Attentive Cont conv Fusion(PACF) module, which fuses multi sensor features directly on 3D points. Except for continuous convolution, we additionally add a Point Pooling and an Attentive Aggregation to make the fused features more expressive. Moreover, based on the PACF module, we propose a 3D multi sensor multi task network called Pointcloud Image RCNN(PI RCNN as brief) which handles the image segmentation and 3D object detection tasks. PI RCNN employs a segmentation sub network to extract full resolution semantic feature maps from images and then fuses the multi sensor features via powerful PACF module. Beneficial from the effectiveness of the PACF module and the expressive semantic features from the segmentation module, PI RCNN can improve much in 3D object detection. We demonstrate the effectiveness of the PACF module and PI RCNN on the KITTI 3D Detection benchmark, and our method can achieve state of the art on the metric of 3D AP.", "venue": "AAAI", "year": 2020.0, "author_names": ["Liang Xie", "Chao Xiang", "Zhengxu Yu", "Guodong Xu", "Zheng Yang", "Deng Cai", "Xiaofei He"], "n_citations": 14, "n_key_citations": 2, "score": 0}, {"corpus_id": 225042600, "title": "LiDAR Camera Based Deep Dense Fusion for Robust 3D Object Detection", "abstract": "For the camera LiDAR based three dimensional (3D) object detection, image features have rich texture descriptions and LiDAR features possess objects' 3D information. To fully fuse view specific feature maps, this paper aims to explore the two directional fusion of arbitrary size camera feature maps and LiDAR feature maps in the early feature extraction stage. Towards this target, a deep dense fusion 3D object detection framework is proposed for autonomous driving. This is a two stage end to end learnable architecture, which takes 2D images and raw LiDAR point clouds as inputs and fully fuses view specific features to achieve high precision oriented 3D detection. To fuse the arbitrary size features from different views, a multi view resizes layer (MVRL) is born. 
Massive experiments evaluated on the KITTI benchmark suite show that the proposed approach outperforms most state of the art multi sensor based methods on all three classes on moderate difficulty (3D/BEV) Car (75.60%/88.65% Pedestrian (64.36%/66.98% Cyclist (57.53%/57.30% Specifically, the DDF3D greatly improves the detection accuracy of hard difficulty in 2D detection with an 88.19% accuracy for the car class.", "venue": "ICIC", "year": 2020.0, "author_names": ["Li-hua Wen", "Kang-Hyun Jo"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 102492236, "title": "A 3D Object Detection Based on Multi Modality Sensors of USV", "abstract": "Unmanned Surface Vehicles (USVs) are commonly equipped with multi modality sensors. Fully utilized sensors could improve object detection of USVs. This could further contribute to better autonomous navigation. The purpose of this paper is to solve the problems of 3D object detection of USVs in complicated marine environment. We propose a 3D object detection Depth Neural Network based on multi modality data of USVs. This model includes a modified Proposal Generation Network and Deep Fusion Detection Network. The Proposal Generation Network improves feature extraction. Meanwhile, the Deep Fusion Detection Network enhances the fusion performance and can achieve more accurate results of object detection. The model was tested on both the KITTI 3D object detection dataset (A project of Karlsruhe Institute of Technology and Toyota Technological Institute at Chicago) and a self collected offshore dataset. The model shows excellent performance in a small memory condition. The results further prove that the method based on deep learning can give good accuracy in conditions of complicated surface in marine environment.", "venue": "Applied Sciences", "year": 2019.0, "author_names": ["Yingying Wu", "Huacheng Qin", "Tao Liu", "Hao Liu", "Zhiqiang Wei"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 215817676, "title": "Object Recognition Based Interpolation With 3D LIDAR and Vision for Autonomous Driving of an Intelligent Vehicle", "abstract": "An algorithm has been developed for fusing 3D LIDAR (Light Detection and Ranging) systems that receive objects detected in deep learning based image sensors and object data in the form of 3D point clouds. 3D LIDAR represents 3D point data in a planar rectangular coordinate system with a 360deg representation of the detected object surface, including the front face. However, only the direction and distance data of the object can be obtained, and point cloud data cannot be used to create a specific definition of the object. Therefore, only the movement of the point cloud data can be tracked using probability and classification algorithms based on image processing. To overcome this limitation, the study matches 3D LIDAR data with 2D image data through the fusion of hybrid level multi sensors. First, because 3D LIDAR data represents all objects in the sensor's detection range as dots, all unnecessary data, including ground data, is filtered out. The 3D Random Sample Consensus (RANSAC) algorithm enables the extraction of ground data perpendicular to the reference estimation 3D plane and data at both ends through ground estimation. Classified environmental data facilitates the labeling of all objects within the viewing angle of 3D LIDAR based on the presence or absence of movement. 
The path of motion of the platform can be established by detecting whether objects within the region of interest are movable or static. Because LIDAR is based on 8 and 16 channel rotation mechanisms, real time data cannot be used to define objects. Instead, point clouds can be used to detect obstacles in the image through deep learning in the preliminary processing phase of the classification algorithm. By matching the labeling information of defined objects with the classified object cloud data obtained using 3D LIDAR, the exact dynamic trajectory and position of the defined objects can be calculated. Consequently, to process the acquired object data efficiently, we devised an active region of interest technique to ensure a fast processing speed while maintaining a high detection rate.", "venue": "IEEE Access", "year": 2020.0, "author_names": ["Ihn-Sik Weon", "Soon-Geul Lee", "Jae-Kwan Ryu"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 218834419, "title": "Multimodal Deep Learning for Object Recognition Combining Camera and LIDAR Data", "abstract": "Object detection and recognition is a key component of autonomous robotic vehicles, as evidenced by the continuous efforts made by the robotic community on areas related to object detection and sensory perception systems. This paper presents a study on multisensor (camera and LIDAR) late fusion strategies for object recognition. In this work, LIDAR data is processed as 3D points and also by means of a 2D representation in the form of depth map (DM) which is obtained by projecting the LIDAR 3D point cloud into a 2D image plane followed by an upsampling strategy which generates a high resolution 2D range view. A CNN network (Inception V3) is used as classification method on the RGB images, and on the DMs (LIDAR modality) A 3D network (the PointNet) which directly performs classification on the 3D point clouds, is also considered in the experiments. One of the motivations of this work consists of incorporating the distance to the objects, as measured by the LIDAR, as a relevant cue to improve the classification performance. A new range based average weighting strategy is proposed, which considers the relationship between the deep models' performance and the distance of objects. A classification dataset, based on the KITTI database, is used to evaluate the deep models, and to support the experimental part. We report extensive results in terms of single modality i.e. using RGB and LIDAR models individually, and late fusion multimodality approaches.", "venue": "2020 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC)", "year": 2020.0, "author_names": ["Gledson Melotti", "Cristiano Premebida", "Nuno Goncalves"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 133036450, "title": "The research of autonomous obstacle avoidance of mobile robot based on multi sensor integration", "abstract": "The object of this study is the bionic quadruped mobile robot. The study has proposed a system design plan for mobile robot obstacle avoidance with the binocular stereo visual sensor and the self control 3D Lidar integrated with modified ant colony optimization path planning to realize the reconstruction of the environmental map. Because the working condition of a mobile robot is complex, the result of the 3D reconstruction with a single binocular sensor is undesirable when feature points are few and the light condition is poor. 
Therefore, this system integrates the stereo vision sensor blumblebee2 and the Lidar sensor together to detect the cloud information of 3D points of environmental obstacles. This paper proposes the sensor information fusion technology to rebuild the environment map. Firstly, according to the Lidar data and visual data on obstacle detection respectively, and then consider two methods respectively to detect the distribution of obstacles. Finally fusing the data to get the more complete, more accurate distribution of obstacles in the scene. Then the thesis introduces ant colony algorithm. It has analyzed advantages and disadvantages of the ant colony optimization and its formation cause deeply, and then improved the system with the help of the ant colony optimization to increase the rate of convergence and precision of the algorithm in robot path planning. Such improvements and integrations overcome the shortcomings of the ant colony optimization like involving into the local optimal solution easily, slow search speed and poor search results. This experiment deals with images and programs the motor drive under the compiling environment of Matlab and Visual Studio and establishes the visual 2.5D grid map. Finally it plans a global path for the mobile robot according to the ant colony algorithm. The feasibility and effectiveness of the system are confirmed by ROS and simulation platform of Linux.", "venue": "SPIE/COS Photonics Asia", "year": 2016.0, "author_names": ["Ming Zhao", "Baoling Han"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 221160983, "title": "Optical Flow Estimation and Denoising of Video Images Based on Deep Learning Models", "abstract": "In order to effectively extract image features highly related to visual perception quality, improve the image quality evaluation method, under the framework of deep learning, combining the optical flow method and the edge detection algorithm, a multi feature fusion motion based on improved optical flow is proposed target detection algorithm. First, a video fluid model is proposed. The fluid model decomposes the video object area changes into sub area zoom, rotation and translation movements. The rigid body area and the area hierarchy describe the spatial relationship of pixels, and the rigid body motion describes the time domain relationship of pixels. It provides a region based video processing. The associated spatiotemporal is association method. Secondly, a video fluid model is proposed. The video fluid model treats all pixels of the same surface imaged in the video as a fluid, using streamlines to represent the regional motion of the object, and streamlines to represent the pixel motion of the video object, using rotators and translation lines to simplify the streamlines when necessary. The streamline of the same fluid is smooth in the time domain, and the flow pattern is smooth in both the time domain and the space domain. Finally, the top down deep learning generation model conversion is carried out, and finally through continuous adjustment between different levels, the generation model can reconstruct the original sample with lower error, so that the essential characteristics of this sample are obtained, namely the highest abstract representation of the depth model. After processing the deep learning model, the sample features after dimensionality reduction can be obtained, and the recognition module is used on this basis. 
Experiments show that the optical flow estimation method based on deep learning and multi grid, optical flow field estimation method based on variational model and desiccation method proposed in this paper are effective, and it is suitable for moving image analysis, target tracking and 3D reconstruction Such research has certain theoretical significance and practical application value.", "venue": "IEEE Access", "year": 2020.0, "author_names": ["Ang Li", "Bao-yu Zheng", "L Li", "Chen Zhang"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 211118341, "title": "ADoCW: An Automated method for Detection of Concealed Weapon", "abstract": "In technologically advanced era, surveillance is a proven method for the monitoring of the individual's activity in the crowd. Security of infrastructure, as well as individual, is one of the major concerns because of the influential growth of radical elements or suspicious persons in the society. Continuous manual monitoring of the CCTV surveillance is difficult and monotonous task, so there is an urgent requirement to develop an automated surveillance systems. The security surveillance system has potential to detect any kind of concealed object (like firearms or any weapon including knife, scissors etc. which may pose a threat to the security. In this paper, we propose a novel framework for the detection and classification of concealed weapons through analysis of CCTV stream data. The classification framework is developed with the categorization of various concealed weapons through deep learning based object detection and classification techniques. For the detection of concealed weapon, multi sensor stream data capturing framework is designed using sensor fusion techniques and also embedded with the feature extraction and segmentation of images module. Faster R CNN (Region based Convolutional Neural Network) model is trained for classification of weapons over collected dataset. Finally, several directions of work and tasks are provided as future work for the various research communities.", "venue": "2019 Fifth International Conference on Image Information Processing (ICIIP)", "year": 2019.0, "author_names": ["Gaurav Raturi", "Priya Rani", "Sanjay Madan", "Sonia Dosanjh"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "Selection process in some cpanies during pandemic", "session_id": 1588923678170, "user_id": 3108470359867816, "candidates": [{"corpus_id": 220699494, "title": "New challenges and mitigation strategies for resident selection during the coronavirus disease pandemic", "abstract": "Dear Editor, The coronavirus disease (COVID 19) pandemic has presented challenges to resident education, and we foresee a unique challenge in the residency selection for emergency medicine (EM) programs. There are approximately 120 applicants to the Royal College of Physicians and Surgeons of Canada program and 120 to the College of Family Physicians of Canada EM program each year at our institution (personal communication from University of Ottawa program directors) Programs will be forced to deviate from conventional norms in the resident selection process. With some foresight, most of these challenges can be mitigated. Most teaching hospitals in Canada have either greatly limited or completely halted visiting elective students and residents. 
This poses a number of issues to the programs, as well as the candidates, which we feel can be classified under two main categories:", "venue": "CJEM", "year": 2020.0, "author_names": ["Hans Rosenberg", "Avik Nath", "Jennifer Leppard", "Shahbaz Syed"], "n_citations": 5, "n_key_citations": 0, "score": 1}, {"corpus_id": 222271294, "title": "Selection of the barriers of supply chain management in Indian manufacturing sectors due to COVID 19 impacts", "abstract": "The coronavirus (COVID 19) pandemic is having a clear impact on the supply chains of virtually all manufacturers. Whether frozen foods and grocery items or emergency item, or even the services the supply chain has been facing multiple obstacles. For manufacturing industries with complex supply chains, it is indeed critical to identify strategies to deal with such a crisis. With demand high and supply unavailable some products became more desirable causing price hikes and price extorting because the manufacturing sectors are facing some barriers during lockdown. This research has identified the five essential barriers of supply chain such as lack of man power, local laws enforcement, lack of transportation, scarcity of raw materials and deficiency in cash flow for Indian manufacturing sectors during lockdown. This paper proposed a methodology based on fuzzy analytical hierarchy process (Fuzzy AHP) with use of triangular fuzzy numbers for the pairwise comparison matrices. It has been seen that lack of man power is higher weight barrier than others. Moreover, the managerial implication about the results is also provided, which will be useful for manufacturing sectors to take suitable decisions to overcome these obstacles.", "venue": "", "year": 2020.0, "author_names": ["Tapas K Biswas", "Manik Chandra Das"], "n_citations": 10, "n_key_citations": 1, "score": 0}, {"corpus_id": 219980861, "title": "Transforming laparoendoscopic surgical protocols during the COVID 19 pandemic; big data analytics, resource allocation and operational considerations", "abstract": "Abstract The current dreadful pandemic of coronavirus disease (COVID 19) is playing havoc with humanity, socio communal systems and economic reservoirs worldwide. Certain countries have managed to curtail COVID 19 crisis to some extent, however, a great majority still remains helpless in containing this outbreak. Rapidly evolving disease patterns and complex epidemiology of COVID 19 necessitate a tailored approach by medical experts in dealing with this devastating outbreak. Similar to other medical disciplines, surgical associations and societies have developed a tailored approach of patients' selection and planning with improvised endolaparoscopic practice during the COVID 19 pandemic. Non essential and non urgent surgical procedures are deferred till this outbreak is abated. Benefits of delaying elective and non urgent surgery outweighs the risk of performing surgical procedures on patients with asymptomatic or active COVID 19 disease. Laparoendoscopic procedures increase the risk of aerosol exposure, disease transmission and contamination. Limiting the number of operating room personnel, use of disposable instruments, small trocar incisions, negative pressure environment, and setting energy devices at low modes can help reduce disease transmission during laparoendocsopic procedures. 
This write up provides a brief account of the impact of the COVID 19, big data analytics of response of medical personnel in curtailing and understanding the disease process and the consensus guidelines for carrying out laparoscopic and endoscopic procedures.", "venue": "International Journal of Surgery", "year": 2020.0, "author_names": ["Salman Yousuf Guraya"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 224847237, "title": "Travel related control measures to contain the COVID 19 pandemic: a rapid review.", "abstract": "BACKGROUND In late 2019, first cases of coronavirus disease 2019, or COVID 19, caused by the novel coronavirus SARS CoV 2, were reported in Wuhan, China. Subsequently COVID 19 spread rapidly around the world. To contain the ensuing pandemic, numerous countries have implemented control measures related to international travel, including border closures, partial travel restrictions, entry or exit screening, and quarantine of travellers. OBJECTIVES To assess the effectiveness of travel related control measures during the COVID 19 pandemic on infectious disease and screening related outcomes. SEARCH METHODS We searched MEDLINE, Embase and COVID 19 specific databases, including the WHO Global Database on COVID 19 Research, the Cochrane COVID 19 Study Register, and the CDC COVID 19 Research Database on 26 June 2020. We also conducted backward citation searches with existing reviews. SELECTION CRITERIA We considered experimental, quasi experimental, observational and modelling studies assessing the effects of travel related control measures affecting human travel across national borders during the COVID 19 pandemic. We also included studies concerned with severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS) as indirect evidence. Primary outcomes were cases avoided, cases detected and a shift in epidemic development due to the measures. Secondary outcomes were other infectious disease transmission outcomes, healthcare utilisation, resource requirements and adverse effects if identified in studies assessing at least one primary outcome. DATA COLLECTION AND ANALYSIS One review author screened titles and abstracts; all excluded abstracts were screened in duplicate. Two review authors independently screened full texts. One review author extracted data, assessed risk of bias and appraised study quality. At least one additional review author checked for correctness of all data reported in the 'Risk of bias' assessment, quality appraisal and data synthesis. For assessing the risk of bias and quality of included studies, we used the Quality Assessment of Diagnostic Accuracy Studies (QUADAS 2) tool for observational studies concerned with screening, ROBINS I for observational ecological studies and a bespoke tool for modelling studies. We synthesised findings narratively. One review author assessed certainty of evidence with GRADE, and the review author team discussed ratings. MAIN RESULTS We included 40 records reporting on 36 unique studies. We found 17 modelling studies, 7 observational screening studies and one observational ecological study on COVID 19, four modelling and six observational studies on SARS, and one modelling study on SARS and MERS, covering a variety of settings and epidemic stages. Most studies compared travel related control measures against a counterfactual scenario in which the intervention measure was not implemented. 
However, some modelling studies described additional comparator scenarios, such as different levels of travel restrictions, or a combination of measures. There were concerns with the quality of many modelling studies and the risk of bias of observational studies. Many modelling studies used potentially inappropriate assumptions about the structure and input parameters of models, and failed to adequately assess uncertainty. Concerns with observational screening studies commonly related to the reference test and the flow of the screening process. Studies on COVID 19 Travel restrictions reducing cross border travel Eleven studies employed models to simulate a reduction in travel volume; one observational ecological study assessed travel restrictions in response to the COVID 19 pandemic. Very low certainty evidence from modelling studies suggests that when implemented at the beginning of the outbreak, cross border travel restrictions may lead to a reduction in the number of new cases of between 26% to 90% (4 studies) the number of deaths (1 study) the time to outbreak of between 2 and 26 days (2 studies) the risk of outbreak of between 1% to 37% (2 studies) and the effective reproduction number (1 modelling and 1 observational ecological study) Low certainty evidence from modelling studies suggests a reduction in the number of imported or exported cases of between 70% to 81% (5 studies) and in the growth acceleration of epidemic progression (1 study) Screening at borders with or without quarantine Evidence from three modelling studies of entry and exit symptom screening without quarantine suggests delays in the time to outbreak of between 1 to 183 days (very low certainty evidence) and a detection rate of infected travellers of between 10% to 53% (low certainty evidence) Six observational studies of entry and exit screening were conducted in specific settings such as evacuation flights and cruise ship outbreaks. Screening approaches varied but followed a similar structure, involving symptom screening of all individuals at departure or upon arrival, followed by quarantine, and different procedures for observation and PCR testing over a period of at least 14 days. The proportion of cases detected ranged from 0% to 91% (depending on the screening approach) and the positive predictive value ranged from 0% to 100% (very low certainty evidence) The outcomes, however, should be interpreted in relation to both the screening approach used and the prevalence of infection among the travellers screened; for example, symptom based screening alone generally performed worse than a combination of symptom based and PCR screening with subsequent observation during quarantine. Quarantine of travellers Evidence from one modelling study simulating a 14 day quarantine suggests a reduction in the number of cases seeded by imported cases; larger reductions were seen with increasing levels of quarantine compliance ranging from 277 to 19 cases with rates of compliance modelled between 70% to 100% (very low certainty evidence) AUTHORS' CONCLUSIONS With much of the evidence deriving from modelling studies, notably for travel restrictions reducing cross border travel and quarantine of travellers, there is a lack of 'real life' evidence for many of these measures. The certainty of the evidence for most travel related control measures is very low and the true effects may be substantially different from those reported here. 
Nevertheless, some travel related control measures during the COVID 19 pandemic may have a positive impact on infectious disease outcomes. Broadly, travel restrictions may limit the spread of disease across national borders. Entry and exit symptom screening measures on their own are not likely to be effective in detecting a meaningful proportion of cases to prevent seeding new cases within the protected region; combined with subsequent quarantine, observation and PCR testing, the effectiveness is likely to improve. There was insufficient evidence to draw firm conclusions about the effectiveness of travel related quarantine on its own. Some of the included studies suggest that effects are likely to depend on factors such as the stage of the epidemic, the interconnectedness of countries, local measures undertaken to contain community transmission, and the extent of implementation and adherence.", "venue": "The Cochrane database of systematic reviews", "year": 2020.0, "author_names": ["Jacob Burns", "Ani Movsisyan", "Jan M Stratil", "Michaela Coenen", "Karl M F Emmert-Fees", "Karin Geffert", "Sabine Hoffmann", "Olaf Horstick", "Michael Laxy", "Lisa Maria Pfadenhauer", "Peter von Philipsborn", "Kerstin Sell", "Stephan Voss", "Eva A Rehfuess"], "n_citations": 33, "n_key_citations": 0, "score": 0}, {"corpus_id": 219322508, "title": "The attitudes, perceptions and experiences of medical school applicants following the closure of schools and cancellation of public examinations in 2020 due to the COVID 19 pandemic", "abstract": "Objective To describe medical applicants' experiences of education and their views on changes to medical school admissions, including the awarding of calculated grades, following the 2020 closure of schools and universities, and the cancellation of public examinations in the United Kingdom due to the COVID 19/coronavirus pandemic. To understand how applicants from diverse social backgrounds might differ in these regards. Design Cross sectional questionnaire study forming part of the longitudinal United Kingdom Medical Applicant Cohort Study (UKMACS) Setting United Kingdom medical school admissions. Participants 2887 participants (68% female; 64% with at least one degree educated parent; 63% with at least one parent in the highest socioeconomic group) completed an online questionnaire between 8th and 22nd April 2020. To be invited to complete the questionnaire, participants had to have registered to take the University Clinical Admissions Test (UCAT) in 2019 and to have agreed to be invited to take part in the study, or they needed to have completed one or more previous UKMACS questionnaires. They also need to have been seriously considering applying to study medicine in the UK for entry in 2020 between May and October 2019, and be resident in the UK or Islands/Crown Dependencies. Main outcome measures Views on calculated grades, views on potential changes to medical school admissions and teaching in 2020 and 2021, reported experiences of education following the closure of educational institutions in March 2020. Results Respondents had concerns about the calculated grades that will replace A level examinations, especially female applicants and applicants from Black Asian and Minority Ethnic (BAME) backgrounds who felt teachers would find it difficult to grade and rank students accurately, as well as those from non selective state schools and those living in deprived areas who had some concerns about the grade standardisation process. 
Calculated grades were not considered fair enough by a majority to use in the acceptance or rejection of medical offer holders, but several measures including interview and aptitude test scores were considered fair enough to use in combination. Respondents from non selective state (public) schools reported less use of and less access to educational resources compared to their counterparts at private/selective schools. In particular they reported less online teaching in real time, and reported spending less time studying during the lockdown. Conclusions The coronavirus pandemic will have significant and long term impacts on the selection, education and performance of our future medical workforce. It is important that the views and experiences of medical applicants from diverse backgrounds are taken into consideration in decisions affecting their futures and the future of the profession.", "venue": "medRxiv", "year": 2020.0, "author_names": ["Katherine Woolf", "David Harrison", "I Chris McManus"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 221505112, "title": "Trends and targets of various types of stem cell derived transfusable RBC substitution therapy: Obstacles that need to be converted to opportunity", "abstract": "A shortage of blood during the pandemic outbreak of COVID 19 is a typical example in which the maintenance of a safe and adequate blood supply becomes difficult and highly demanding. So far, human RBCs have been produced in vitro using diverse sources: hematopoietic stem cells (SCs) embryonic SCs and induced pluripotent SCs. The existing, even safest core of conventional cellular bioproducts destined for transfusion have some shortcoming in respects to: donor dependency variability in terms of hematological /immunological and process/ storage period issues. SCs derived transfusable RBC bioproducts, as one blood group type for all, were highly complex to work out. Moreover, the strategies for their successful production are often dependent upon the right selection of starting source materials and the composition and the stability of the right expansion media and the strict compliance to GMP regulatory processes. In this mini review we highlight some model studies, which showed that the efficiency and the functionality of RBCs that could be produced by the various types of SCs, in relation to the in vitro culture procedures are such that they may, potentially, be used at an industrial level. However, all cultured products do not have an unlimited life due to the critical metabolic pathways or the metabolites produced. New bioreactors are needed to remove these shortcomings and the development of a new mouse model is required. Modern clinical trials based on the employment of regenerative medicine approaches in combination with novel large scale bioengineering tools, could overcome the current obstacles in artificial RBC substitution, possibly allowing an efficient RBC industrial production.", "venue": "Transfusion and Apheresis Science", "year": 2020.0, "author_names": ["Francesco Lanza", "Jerard M J Seghatchian"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 222157239, "title": "Finding the \"right\" Canadian Neurology Residency Program During the COVID 19 Era", "abstract": "The coronavirus disease 2019 (COVID 19) pandemic has disrupted life worldwide, and neurology residency programs and clerkship training are no exceptions. 
Many institutions have been able to rapidly respond to the ever changing care needs and delivery models required for effective and safe patient care while continuing to provide neurologic training in a safe and effective manner. Pandemic related clerkship disruptions mean medical students face new challenges in navigating the residency selection process. In this article, we describe barriers and opportunities created by the COVID 19 pandemic for applicants in ranking potential Canadian neurology residency programs. Our purpose is to provide actionable advice for medical students entering upcoming application cycle(s) for adult or pediatric neurology residency programs amidst the unforeseen changes due to COVID 19 (summarized in Table 1) This article may also be of interest to Canadian neurology residency programs who wish to be proactive in preparing for the upcoming application process. We use our collective background in medical education and residency training, at Western University Schulich School of Medicine and Dentistry, to provide recommendations on how applicants may obtain information that will help them to choose the \"right\" neurology residency program. The examples we offer are not exhaustive but highlight some of the ways that programs and learners might adapt to current challenges. In any application cycle, applicants are encouraged to evaluate a number of specific factors when selecting or ranking a neurology residency program. Some of these considerations are program specific and remain \"fixed\" as they are not influenced by the COVID 19 pandemic. This broadly includes but is not limited to the availability of subspecialty and generalist expertise, population catchment area, physical location, and facultyto trainee ratio. In our experience, the breadth of neurological clinical experiences is enhanced by diverse subspecialty representation among faculty balanced with a generalist approach to the most common neurologic presentations, high patient volumes, and low faculty trainee ratios. Yet, a good neurology training program need not feature all these characteristics. A broad training experience in clinical neurology still remains fundamental to acquiring the knowledge and clinical skills necessary for independent practice as a neurologist. Other factors are unique to the applicant and influenced by personal preferences such as proximity to a partner and/or family, size of the city, and personal finances and the city's cost of living, to name a few. The article will not expand on these further but will focus on how the applicant can navigate the application cycle during the COVID 19 pandemic.", "venue": "Canadian Journal of Neurological Sciences Journal Canadien des Sciences Neurologiques", "year": 2020.0, "author_names": ["Ario Mirian", "Mary E Jenkins", "Christopher J Watling", "Shannon L Venance", "Anita Florendo-Cumbermack"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 585683, "title": "Predicting the Antigenic Structure of the Pandemic (H1N1) 2009 Influenza Virus Hemagglutinin", "abstract": "The pandemic influenza virus (2009 H1N1) was recently introduced into the human population. The hemagglutinin (HA) gene of 2009 H1N1 is derived from \"classical swine H1N1\" virus, which likely shares a common ancestor with the human H1N1 virus that caused the pandemic in 1918, whose descendant viruses are still circulating in the human population with highly altered antigenicity of HA. 
However, information on the structural basis to compare the HA antigenicity among 2009 H1N1, the 1918 pandemic, and seasonal human H1N1 viruses has been lacking. By homology modeling of the HA structure, here we show that HAs of 2009 H1N1 and the 1918 pandemic virus share a significant number of amino acid residues in known antigenic sites, suggesting the existence of common epitopes for neutralizing antibodies cross reactive to both HAs. It was noted that the early human H1N1 viruses isolated in the 1930s 1940s still harbored some of the original epitopes that are also found in 2009 H1N1. Interestingly, while 2009 H1N1 HA lacks the multiple N glycosylations that have been found to be associated with an antigenic change of the human H1N1 virus during the early epidemic of this virus, 2009 H1N1 HA still retains unique three codon motifs, some of which became N glycosylation sites via a single nucleotide mutation in the human H1N1 virus. We thus hypothesize that the 2009 H1N1 HA antigenic sites involving the conserved amino acids will soon be targeted by antibody mediated selection pressure in humans. Indeed, amino acid substitutions predicted here are occurring in the recent 2009 H1N1 variants. The present study suggests that antibodies elicited by natural infection with the 1918 pandemic or its early descendant viruses play a role in specific immunity against 2009 H1N1, and provides an insight into future likely antigenic changes in the evolutionary process of 2009 H1N1 in the human population.", "venue": "PloS one", "year": 2010.0, "author_names": ["Manabu Igarashi", "Kimihito Ito", "Reiko Yoshida", "Daisuke Tomabechi", "Hiroshi Kida", "Ayato Takada"], "n_citations": 157, "n_key_citations": 16, "score": 0}, {"corpus_id": 221788870, "title": "Triaging Interventional Pain Procedures During COVID 19 or Related Elective Surgery Restrictions: Evidence Informed Guidance from the American Society of Interventional Pain Physicians (ASIPP)", "abstract": "BACKGROUND The COVID 19 pandemic has worsened the pain and suffering of chronic pain patients due to stoppage of \"elective\" interventional pain management and office visits across the United States. The reopening of America and restarting of interventional techniques and elective surgical procedures has started. Unfortunately, with resurgence in some states, restrictions are once again being imposed. In addition, even during the Phase II and III of reopening, chronic pain patients and interventional pain physicians have faced difficulties because of the priority selection of elective surgical procedures.Chronic pain patients require high intensity care, specifically during a pandemic such as COVID 19. Consequently, it has become necessary to provide guidance for triaging interventional pain procedures, or related elective surgery restrictions during a pandemic. OBJECTIVES The aim of these guidelines is to provide education and guidance for physicians, healthcare administrators, the public and patients during the COVID 19 pandemic. Our goal is to restore the opportunity to receive appropriate care for our patients who may benefit from interventional techniques. METHODS The American Society of Interventional Pain Physicians (ASIPP) has created the COVID 19 Task Force in order to provide guidance for triaging interventional pain procedures or related elective surgery restrictions to provide appropriate access to interventional pain management (IPM) procedures in par with other elective surgical procedures. 
In developing the guidance, trustworthy standards and appropriate disclosures of conflicts of interest were applied with a section of a panel of experts from various regions, specialties, types of practices (private practice, community hospital and academic institutes) and groups. The literature pertaining to all aspects of COVID 19, specifically related to epidemiology, risk factors, complications, morbidity and mortality, and literature related to risk mitigation and stratification was reviewed. The evidence informed with the incorporation of the best available research and practice knowledge was utilized, instead of a simplified evidence based approach. Consequently, these guidelines are considered evidence informed with the incorporation of the best available research and practice knowledge. RESULTS The Task Force defined the medical urgency of a case and developed an IPM acuity scale for elective IPM procedures with 3 tiers. These included emergent, urgent, and elective procedures. Examples of emergent and urgent procedures included new onset or exacerbation of complex regional pain syndrome (CRPS) acute trauma or acute exacerbation of degenerative or neurological disease resulting in impaired mobility and inability to perform activities of daily living. Examples include painful rib fractures affecting oxygenation and post dural puncture headaches limiting the ability to sit upright, stand and walk. In addition, urgent procedures include procedures to treat any severe or debilitating disease that prevents the patient from carrying out activities of daily living. Elective procedures were considered as any condition that is stable and can be safely managed with alternatives. LIMITATIONS COVID 19 continues to be an ongoing pandemic. When these recommendations were developed, different stages of reopening based on geographical regulations were in process. The pandemic continues to be dynamic creating every changing evidence based guidance. Consequently, we provided evidence informed guidance. CONCLUSION The COVID 19 pandemic has created unprecedented challenges in IPM creating needless suffering for pain patients. Many IPM procedures cannot be indefinitely postponed without adverse consequences. Chronic pain exacerbations are associated with marked functional declines and risks with alternative treatment modalities. They must be treated with the concern that they deserve. Clinicians must assess patients, local healthcare resources, and weigh the risks and benefits of a procedure against the risks of suffering from disabling pain and exposure to the COVID 19 virus.", "venue": "Pain physician", "year": 2020.0, "author_names": ["Christopher G Gharibo", "Amit Sharma", "Amol Soin", "Shalini Shah", "Sudhir Diwan", "Ricardo M Buenaventura", "Devi E Nampiaparampil", "Steve M Aydin", "Sanjay Bakshi", "Salahadin Abdi", "Sachin Sunny Jha", "Harold J Cordner", "Alan David Kaye", "Alaa Abd-Elsayed", "Kenneth D Candido", "Nebojsa Nick Knezevic", "Sairam L Atluri", "Bradley W Wargo", "Mahendra R Sanapati", "Sukdeb Datta", "Joshua A Hirsch", "Laxmaiah Manchikanti", "Kartic Rajput"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 222177711, "title": "Beef and Pork Marketing Margins and Price Spreads during COVID 19", "abstract": "Abstract COVID 19 related disruptions led to a historic rise in the spread between livestock and wholesale meat prices. Concerns about concentration and allegations of anticompetitive behavior have led to several inquiries and civil suits by the U.S. 
Department of Agriculture and the U.S. Department of Justice, with increases in price differentials serving as a focal point. This article notes the difference between price spreads and marketing margins, outlines corresponding economic theory, and describes the empirical evidence on wholesale meat and livestock price dynamics in the wake of COVID 19 disruptions. At one point during the pandemic, beef and pork packers were both operating at about 60% of the previous year's processing volume. We explore how such a massive supply shock would be expected to affect marketing margins even in the absence of anti competitive behavior. Moreover, we document how margin measurements are critically sensitive to the selection of data and information utilized. Finally, we conclude with some discussion around policy proposals that would pit industry concentration against industry coordination and economies of scale.", "venue": "Applied economic perspectives and policy", "year": 2020.0, "author_names": ["Jayson L Lusk", "Glynn T Tonsor", "Lee L Schulz"], "n_citations": 17, "n_key_citations": 3, "score": 0}]} -{"query": "Expert systems in solid dosage development", "session_id": 3263349474576630, "user_id": 840333864080690, "candidates": [{"corpus_id": 113137515, "title": "Expert systems in solid dosage development", "abstract": "This article introduces and reviews the use of expert systems in solid dosage development (tablets and film coating) The applications, experience and benefits to the pharmaceutical industry are discussed. Expert systems, where introduced and implemented, have generated significant benefits in terms of knowledge protection, cost reduction, training, consistency and improved communication", "venue": "", "year": 1993.0, "author_names": ["Raymond C Rowe"], "n_citations": 9, "n_key_citations": 1, "score": 1}, {"corpus_id": 235763127, "title": "Application of SeDeM Expert System in the development of novel directly compressible co processed excipients via co processing", "abstract": "Computer aided formulation design is gaining fantastic attention in chemical engineering of high functionality pharmaceutical materials for dosage form manufacture. To accelerate development of novel formulations in a quality by design perspective, SeDeM Expert System preformulation algorithm was developed as a tool for the design of solid drug delivery systems and for prediction of direct compression manufacturability of solid formulations. This research aims to integrate SeDeM Expert System into particle engineering design space of co processing of solid excipients to develop novel composites with optimum direct compression propensity, using corn starch and microcrystalline cellulose powders as model primary excipients. The data and information generated from the expert system have elucidated the bulk level characteristics of the primary excipients, enabled computation of the optimum co processing ratio of the ingredients, and validated the impact of co processing on material functionality. The experimental flowability (7.78+ 0.17) compressibility functions (5.16+ 0.14) parameter profile (0.92) and parametric profile index (6.72+ 0.27) of the engineered composites, were within the acceptable thresholds. 
With a reliability constant of 0.961, the net direct compression propensity of the composites expressed as Good Compression Index (6.46+ 0.26) was superior to that of the primary excipients, but comparable to reference co processed materials, StarLac(r) (6.44+ 0.14) and MicroceLac(r)100 (6.58+ 0.03) Application of SeDeM Expert System in particle engineering via co processing has provided an accelerated upstream proactive mechanism for designing directly compressible co processed excipients in a quality by design fashion. A four stage systematic methodology of co processing of solid excipients was postulated. Stage I entails the characterization of CMAs of both defective and corrective excipients, and elucidation of their physicomechanical limitations using SeDeM diagrams. Stage II involves computation of loading capacity of the corrective excipient using dilution potential equation. Stage III entails the selection of co processing technique based on desired Critical Material Attributes as revealed by the information obtained from Stage I. Stage IV evaluates the impact of co processing by monitoring the critical behavior of the engineered composites with a decision on either to accept or reject the product.", "venue": "Future Journal of Pharmaceutical Sciences", "year": 2021.0, "author_names": ["Ilyasu Salim", "Adeniji Kehinde Olowosulu", "Abdulrahman Abdulsamad", "Mahmud Sani Gwarzo", "Garba M Khalid", "Naimatu Tijjani Ahmad", "Florence Egbomonjiade Eichie", "Fatima Shuaibu Kurfi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 19762214, "title": "Gastric emptying of non disintegrating solid drug delivery systems in fasted state: relevance to drug dissolution", "abstract": "Importance of the field: Knowledge of gastric emptying (GE) of solid drug delivery systems (DDS) is meaningful for the development of new DDS as it enables the design of in vitro dissolution experiments with conditions close to those in vivo in order to predict drug plasma concentration profiles with high reliability. Areas covered in this review: Gastric emptying of non disintegrating pellets, tablets and mini tablets in the fasted state is described on the basis of various studies performed in the last 30 years, which have evaluated the emptying process mostly by gamma scintigraphy. Different influences on GE and mathematical models describing GE kinetics of single and multiunit dosage forms are represented. A discussion on the implementation of these data in the development of drug dissolution testing procedures is given. What the reader will gain: Readers will gain an insight into the kinetics and mechanisms of GE processes. Some suggestions on the use of the obtained knowledge in biopharmaceutical testing of DDS are also given. Take home message: Gastric emptying of non disintegrating solid DDS is a very important process, which might influence drug dissolution, bioavailability and the plasma concentration profile. It is reasonable to consider this process in biopharmaceutical testing of these DDS.", "venue": "Expert opinion on drug delivery", "year": 2010.0, "author_names": ["Igor Locatelli", "Natasa Nagelj Kovacic", "Ales Mrhar", "Marija Bogataj"], "n_citations": 19, "n_key_citations": 0, "score": 0}, {"corpus_id": 16513667, "title": "Comment Catching Up with Expert Systems", "abstract": "century, a new era that will be far more scientific, technologic, and sophisticated than anyone would have imagined just a quarter of a century ago. 
However, the continued success in all areas of pharmaceutical science will depend entirely on how fast pharmaceutical scientists will adapt to rapidly changing technology. Almost 10 years ago, a survey by Shangraw and Demarest revealed a very interesting fact about solid dosage formulation design and development: Tradition was still a very important reason for preferring to use a particular excipient (1) It is not difficult to predict that, in this century, trial and error formulation development and traditional excipient selection will be a part of history. Pharmaceutical formulators will enjoy the availability of the harmonized and fingerprinted (in terms of functionality testing) excipients, and formulations will be developed using databases (preformulation and compaction data banks, etc. (2) The awareness of and the use of artificial intelligence based expert systems (rule based systems, fuzzy logic, genetic algorithm, artificial neural networks [ANNs] simulations, etc. in the areas of preformulation, formulation and process development, regulatory affairs, new drug delivery system development, project management, and all other areas of pharmaceutical science will increase dramatically. To shorten the adaptation period of pharmaceutical scientists to rapidly changing technological advances, I recently formed two new focus groups, namely, the Expert Systems Focus Group and the Excipients Focus Group. They have been approved by the American Association of Pharmaceutical Scientists (AAPS) They will act in conjunction with the Pharmaceutical Technology Section of AAPS and are open to all members of AAPS and other pharmaceutical associations. I would like to give a brief overview of expert systems (ESs) and then address some challenges facing ES developments in terms of their verification and validation (V&V) processes, in part because of FDA's interest in the V&V of all types of software. In a future article, each of the following issues will be discussed in depth. ESs, also known as knowledge based systems, basically are computer programs that either recommend or make decisions based on knowledge gathered from experts in the field. Functional areas of ESs include, but are not limited to, control, design, diagnosis, instruction, interpretation, monitoring, plan", "venue": "", "year": 2001.0, "author_names": ["Metin Celik"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 231979885, "title": "Rational selection of bio enabling oral drug formulations a PEARRL commentary.", "abstract": "New drug candidates often require bio enabling formation technologies such as lipid based formulations, solid dispersions, or nanosized drug formulations. Development of such more sophisticated delivery systems generally requires higher resource investment compared to a conventional oral dosage form, which might slow down clinical development. To achieve the biopharmaceutical objectives while enabling rapid cost effective development, it is imperative to identify a suitable formulation technique for a given drug candidate as early as possible. Hence many companies have developed internal decision trees based mostly on prior organizational experience, though they also contain some arbitrary elements. As part of the EU funded PEARRL project, a number of new decision trees are here proposed that reflect both the current scientific state of the art and a consensus among the industrial project partners. 
This commentary presents and discusses these, while also going beyond this classical expert approach with a pilot study using emerging machine learning, where the computer suggests formulation strategy based on the physicochemical and biopharmaceutical properties of a molecule. Current limitations are discussed and an outlook is provided for likely future developments in this emerging field of pharmaceutics.", "venue": "Journal of pharmaceutical sciences", "year": 2021.0, "author_names": ["Martin Kuentz", "Rene Holm", "Christian Kronseder", "Christoph Saal", "Brendan T Griffin"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 31566798, "title": "Advances in gastro retentive drug delivery systems", "abstract": "Introduction: In recent years, various technological improvements have been achieved and new concepts have been developed, in the area of controlled release solid oral dosage forms, especially for products where an extended time of release is associated with an extended gastric retention time. These Gastro Retentive Systems have been quite investigated because they can improve the in vivo performance of many drugs. Areas covered: This paper summarizes current approaches in the research and development of gastro retentive dosage forms from recent literature. Apart from the numerous mechanisms of action involved, a short review of different key parameters is proposed, taking into account the stomach physiology. Most of the current technologies published, patented or marketed are presented. Promising drugs to develop in the near future are mentioned, and the importance of such systems in fixed Dose Combinations is also discussed. The importance of food effect is mentioned, and the impact of the multiple unit systems versus monolithic approach is discussed, especially regarding the dose intake. Expert opinion: In conclusion, numerous mechanisms like floating, sinking, effervescence, swelling, bioadhesion, magnetic, etc. have been proposed over the years. While most of the proposed systems show promising dissolution profiles and in vitro retention, only few of them have also shown success in vivo. Currently, the polymeric swelling monolithic systems are the most prominent marketed forms. The possibility to combine different mechanisms in order to ensure true gastric retention even in the fasted state should be further investigated.", "venue": "Expert opinion on drug delivery", "year": 2011.0, "author_names": ["Pascal Prinderre", "Christophe Sauzet", "Claus Fuxen"], "n_citations": 56, "n_key_citations": 4, "score": 0}, {"corpus_id": 34286502, "title": "Unified methodology of neural analysis in decision support systems built for pharmaceutical technology", "abstract": "The objective of this study was to create universal methodology of artificial neural networks (ANNs) application in construction of decision support systems designed for various dosage forms. Two different dosage forms (solid dispersions and microemulsions) were modeled with use of the same methodology, software and hardware environments. Completely different models prepared confirmed their generalization ability both for solid dosage forms (solid dispersions) and liquid dosage forms (microemulsions) ME_expert and SD_expert systems basing on the neural expert committees were created. In the pilot study their application allowed for appropriate choice of qualitative and quantitative composition of particular pharmaceutical formulation. 
It was also proposed that ME_expert and SD_expert might provide in silico formulation procedures. Unified methodology of neural modeling in pharmaceutical technology was confirmed to be effective in providing valuable tools for pharmaceutical product development.", "venue": "Expert Syst. Appl.", "year": 2007.0, "author_names": ["Aleksander Mendyk", "Renata Jachowicz"], "n_citations": 46, "n_key_citations": 2, "score": 0}, {"corpus_id": 27432128, "title": "Pneumatic dry granulation: potential to improve roller compaction technology in drug manufacture", "abstract": "Introduction: Solid dosage form manufacture still remains the most common in the production of pharmaceutical products. Established granulation processes can benefit from novel technical improvements, which can in turn enhance the behavior and properties of the process intermediates, that is, granules. These improvements in the manufacturing process can ultimately shorten development times, provide processing solutions for challenging materials and improve quality of drug delivery systems. Areas covered: The aim of this review is to give the reader an overview of the latest trends in research with regard to roller compaction technology. Pneumatic dry granulation is also discussed as a new development with the potential to improve and extend the use of dry granulation processes, which can result in a substantial contribution to drug delivery system development and drug product manufacture. Expert opinion: Dry granulation techniques, and more specifically roller compaction, can provide many advantages over the more established wet granulation techniques. There are still problems with roller compaction such as high amounts of fines and poor flow of granulate. Technical innovations that improve existing processes will have a considerable impact on development times and contribute to improved material processability and behavior of the end product. Pneumatic dry granulation has the potential to provide such alternatives.", "venue": "Expert opinion on drug delivery", "year": 2011.0, "author_names": ["Niklas Sandler", "Robert Frank Lammens"], "n_citations": 31, "n_key_citations": 1, "score": 0}, {"corpus_id": 201868793, "title": "Benefits of Fractal Approaches in Solid Dosage Form Development", "abstract": "Pharmaceutical formulations are complex systems consisting of active pharmaceutical ingredient(s) and a number of excipients selected to provide the intended performance of the product. The understanding of materials' properties and technological processes is a requirement for building quality into pharmaceutical products. Such understanding is gained mostly from empirical correlations of material and process factors with quality attributes of the final product. However, it seems also important to gain knowledge based on mechanistic considerations. Promising is here to study morphological and/or topological characteristics of particles and their aggregates. These geometric aspects must be taken into account to better understand how product attributes emerge from raw materials, which includes, for example, mechanical tablet properties, disintegration or dissolution behavior. Regulatory agencies worldwide are promoting the use of physical models in pharmaceutics to design quality into a final product. This review deals with pharmaceutical applications of theoretical models, focusing on percolation theory, fractal, and multifractal geometry. 
The use of these so called fractal approaches improves the understanding of different aspects in the development of solid dosage forms, for example by identifying critical drug and excipient concentrations, as well as to study effects of heterogeneity on dosage form performance. The aim is to link micro and macrostructure to the emerging quality attributes of the pharmaceutical solid dosage forms as a strategy to enhance mechanistic understanding and to advance pharmaceutical development and manufacturing processes.", "venue": "Pharmaceutical Research", "year": 2019.0, "author_names": ["Renata Abreu-Villela", "Martin Kuentz", "Isidoro Caraballo"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 231204245, "title": "Analytical Method Development and Validation for Estimation of Ranitidine in Solid Dosage Form by UV Spectrophotometric Method", "abstract": "Ranitidine is a histamine 2 receptor blocker and it is effective against peptic ulcer, gastroesophageal reflux disease and heart burn. The main objective of this study was to develop and validate an easy, affordable and cost effective method for the determination of ranitidine in tablet dosage form. The development and validation study was performed under the guidance of ICH and USP. Results showed that the proposed validated method has good accuracy with RSD of 0.60. Repeatability and intermediate precision suggested good precision whereas the value of correlation coefficient 0.9999 confirmed about the linearity of the method. The system suitability data and similarity factors were also found within the permissible range. The specificity study revealed that there was no placebo and diluent effect on the absorbance. Further, stability study of analytical solutions as well as estimation of drug content from market products were also performed.", "venue": "", "year": 2020.0, "author_names": ["Subrata Paul", "Labani Barai", "MD Faruk Husen", "Sabarni Sarker", "Tarun Kumar Pal", "Puja Bal", "Md A M Sarker", "Syeda Saima Alam", "Sheta Biswas"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Membrane introduction Mass Spectrometry: Trends and applications", "session_id": 591591026302984, "user_id": 2914313971836111, "candidates": [{"corpus_id": 31820080, "title": "Membrane introduction mass spectrometry: trends and applications.", "abstract": "Recent advances in membrane introduction mass spectrometry (MIMS) are reviewed. On line monitoring is treated by focusing on critical variables, including the nature and dimensions of the membrane, and the analyte vapor pressure, diffusivity, and solubility in the membrane barrier. Sample introduction by MIMS is applied in (i) on line monitoring of chemical and biological reactors, (ii) analysis of volatile organic compounds in environmental matrices, including air, water and soil, and (iii) in more fundamental studies, such as measurements of thermochemical properties, reaction mechanisms, and kinetics. New semipermeable membranes are discussed, including those consisting of thin polymers, low vapor pressure liquids, and zeolites. These membranes have been used to monitor polar compounds, selectively differentiate compounds through affinity binding, and provide isomer differentiation based on molecular size. Measurements at high spatial resolution, for example, using silicone capped hypodermic needle inlets, are also covered, as is electrically driven sampling through microporous membranes. 
Other variations on the basic MIMS experiment include analyte preconcentration through cryotrapping (CT MIMS) or trapping in the membrane (trap and release) as well as differential thermal release methods and reverse phase (i.e. organic solvent) MIMS. Method limitations center on semivolatile compounds and complex mixture analysis, and novel solutions are discussed. Semivolatile compounds have been monitored with thermally assisted desorption, ultrathin membranes and derivatization techniques. Taking advantage of the differences in time of membrane permeation, mixtures of structurally similar compounds have been differentiated by using sample modulation techniques and by temperature programmed desorption from a membrane interface. Selective ionization techniques that increase instrument sensitivity towards polar compounds are also described, and comparisons are made with other direct sampling (nonchromatographic) methods that are useful in mixture analysis.", "venue": "Mass spectrometry reviews", "year": 2000.0, "author_names": ["R C Johnson", "Robert Graham Cooks", "T M Allen", "M E Cisper", "P H Hemberger"], "n_citations": 199, "n_key_citations": 3, "score": 1}, {"corpus_id": 96237659, "title": "Membrane introduction mass spectrometry (MIMS)", "abstract": "Membrane introduction mass spectrometry (MIMS) for chemical analysis involves directly sampling analytes in gaseous, liquid and solid samples through a semi permeable membrane coupled to a mass spectrometer, yielding selective and sensitive quantitation. Because MIMS is an on line technique, in which samples can be continuously flowed over a membrane interface, it can yield analytical results in real time without the need for sample clean up and chromatographic separation. This review highlights trends and developments in MIMS over the past decade and describes recent studies that pertain to its use for on site, in situ and in vivo chemical analysis. We report on advancements in instrumentation, including membrane materials, interface configurations and ionization techniques that have extended the range of analytes amenable to MIMS. We summarize the progress made in the miniaturization of mass spectrometers that have resulted in field portable systems and review recent applications of continuous mobile monitoring and on site environmental monitoring to yield both temporally and spatially resolved quantitative and semi quantitative data. Finally, we describe recent work involving the use of MIMS for in vivo chemical analysis.", "venue": "", "year": 2011.0, "author_names": ["N G Davey", "Erik T Krogh", "Chris G Gill"], "n_citations": 66, "n_key_citations": 1, "score": 0}, {"corpus_id": 94838499, "title": "Recent trends in atomic spectrometry with microwave induced plasmas", "abstract": "Abstract The state of the art and trends of development in atomic spectrometry with microwave induced plasmas (MIPs) since the 1998s are presented and discussed. This includes developments in devices for producing microwave plasma discharges, with reference also to miniaturized systems as well as to progress in sample introduction for microwave induced plasmas, such as pneumatic and ultrasonic nebulization using membrane desolvation, to the further development of gaseous analyte species generation systems and to both spark and laser ablation (LA) The features of microwave induced plasma mass spectrometry (MIP MS) as an alternative to inductively coupled plasma (ICP) MS are discussed. 
Recent work on the use of microwave induced plasma atomic spectrometry for trace element determinations and monitoring, their use as tandem sources and for particle sizing are discussed. Recent applications of the coupling of gas chromatography and MIP atomic spectrometry for the determination of organometallic compounds of heavy metals such as Pb, Hg, Se and Sn are reviewed and the possibilities of trapping for sensitivity enhancement, as required for many applications especially in environmental work, are showed at the hand of citations from the recent literature.", "venue": "", "year": 2004.0, "author_names": ["Jose A C Broekaert", "Volker Siemens"], "n_citations": 36, "n_key_citations": 0, "score": 0}, {"corpus_id": 22067281, "title": "Sample preparation for the analysis of volatile organic compounds in air and water matrices.", "abstract": "This review summarizes literature data from the past 5 years on new developments and/or applications of sample preparation methods for analysis of volatile organic compounds (VOC) mainly in air and water matrices. Novel trends in the optimization and application of well established airborne VOC enrichment techniques are discussed, like the implementation of advanced cooling systems in cryogenic trapping and miniaturization in adsorptive enrichment techniques. Next, focus is put on current tendencies in integrated sampling extraction sample introduction methods such as solid phase microextraction (SPME) and novel in needle trapping devices. Particular attention is paid to emerging membrane extraction techniques such as membrane inlet mass spectrometry (MIMS) and membrane extraction with a sorbent interface (MESI) For VOC enrichment out of water, recent evolutions in direct aqueous injection (DAI) and liquid liquid extraction (LLE) are highlighted, with main focus on miniaturized solvent extraction methods such as single drop microextraction (SDME) and liquid phase microextraction (LPME) Next, solvent free sorptive enrichment receives major attention, with particular interest for innovative techniques such as stir bar sorptive extraction (SBSE) and solid phase dynamic extraction (SPDE) Finally, recent trends in membrane extraction are reviewed. Applications in both immersion and headspace mode are discussed.", "venue": "Journal of chromatography. A", "year": 2007.0, "author_names": ["Kristof Demeestere", "Jo Dewulf", "Bavo De Witte", "Herman R Van Langenhove"], "n_citations": 302, "n_key_citations": 9, "score": 0}, {"corpus_id": 211079366, "title": "Direct analysis of naphthenic acids in constructed wetland samples by condensed phase membrane introduction mass spectrometry.", "abstract": "The application of direct mass spectrometry techniques to the analysis of complex samples has a number of advantages including reduced sample handling, higher sample throughput, in situ process monitoring, and the potential for adaptation to on site analysis. We report the application of a semi permeable capillary hollow fibre membrane probe (immersed directly into an aqueous sample) coupled to a triple quadrupole mass spectrometer by a continuously flowing methanol acceptor phase for the rapid analysis of naphthenic acids with unit mass resolution. 
The intensity of the naphthenic acid associated peaks in the mass spectrum are normalized to an internal standard in the acceptor phase for quantitation and the relative abundance of the peaks in the mass spectrum are employed to monitor compositional changes in the naphthenic acid mixture using principle component analysis. We demonstrate the direct analysis of a synthetic oil sands process affected water for classical naphthenic acids (CnH2n+zO2) as they are attenuated through constructed wetlands containing sedge (Carex aquatilis) cattail (Typha latifolia) or bulrush (Schoenoplectus acutus) Quantitative results for on line membrane sampling compare favourably to those obtained by solid phase extraction high resolution mass spectrometry. Additionally, chemometric analysis of the mass spectra indicates a clear discrimination between naphthenic acid influenced and natural background waters. Furthermore, the compositional changes within complex naphthenic acid mixtures track closely with the degree of attenuation. Overall, the technique is successful in following changes in both the concentration and composition of naphthenic acids from synthetic oil sands process affected waters, with the potential for high throughput screening and environmental forensics.", "venue": "The Science of the total environment", "year": 2020.0, "author_names": ["Kyle D Duncan", "L C Richards", "Joseph Monaghan", "Monique C Simair", "Chukwuemeka Ajaero", "Kerry M Peru", "Vanessa Friesen", "Dena W McMartin", "John V Headley", "Chris G Gill", "Erik T Krogh"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 44824153, "title": "Discrimination of constructed air samples using multivariate analysis of full scan membrane introduction mass spectrometry (MIMS) data.", "abstract": "RATIONALE Volatile and semi volatile organic compounds (S/VOCs) are important atmospheric pollutants affecting both human and environmental health. They are directly measured as an unresolved mixture using membrane introduction mass spectrometry (MIMS) We apply chemometric techniques to discriminate, classify, and apportion air samples from a variety of sources. METHODS Full scan mass spectra of lab constructed air samples were obtained using a polydimethylsiloxane membrane interface and an electron ionization ion trap mass spectrometer. Normalized full scan spectra were analyzed using principal component analysis (PCA) cluster analysis, and k nearest neighbours (kNN) for sample discrimination and classification. Multivariate curve resolution (MCR) was used to extract pure component contributions. Similar techniques were applied to VOC mixtures sampled from different woodsmoke emissions and from the headspace above aqueous hydrocarbon solutions. RESULTS PCA successfully discriminated 32 constructed VOC mixtures from nearly 300 air samples, with cluster analysis showing similar results. Further, kNN classification (k 1) correctly classified all but one test set sample, and MCR successfully identified the pure compounds used to construct the VOC mixtures. Real world samples resulting from the combustion of different wood species and those associated with water contaminated with different commercial hydrocarbon products were similarly discriminated by PCA. CONCLUSIONS Chemometric techniques have been evaluated using full scan MIMS spectra with a series of VOC mixtures of known composition containing known compounds, and successfully applied to samples with known sources, but unknown molecular composition. 
These techniques have application to source identification and apportionment in real world environmental samples impacted by atmospheric pollutants.", "venue": "Rapid communications in mass spectrometry RCM", "year": 2018.0, "author_names": ["L C Richards", "N G Davey", "T M Fyles", "Chris G Gill", "Erik T Krogh"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 28194717, "title": "Membrane Introduction Mass Spectrometry Combined with an Orthogonal Partial Least Squares Calibration Model for Mixture Analysis.", "abstract": "The emerging membrane introduction mass spectrometry technique has been successfully used to detect benzene, toluene, ethyl benzene and xylene (BTEX) while overlapped spectra have unfortunately hindered its further application to the analysis of mixtures. Multivariate calibration, an efficient method to analyze mixtures, has been widely applied. In this paper, we compared univariate and multivariate analyses for quantification of the individual components of mixture samples. The results showed that the univariate analysis creates poor models with regression coefficients of 0.912, 0.867, 0.440 and 0.351 for BTEX, respectively. For multivariate analysis, a comparison to the partial least squares (PLS) model shows that the orthogonal partial least squares (OPLS) regression exhibits an optimal performance with regression coefficients of 0.995, 0.999, 0.980 and 0.976, favorable calibration parameters (RMSEC and RMSECV) and a favorable validation parameter (RMSEP) Furthermore, the OPLS exhibits a good recovery of 73.86 122.20% and relative standard deviation (RSD) of the repeatability of 1.14 4.87% Thus, MIMS coupled with the OPLS regression provides an optimal approach for a quantitative BTEX mixture analysis in monitoring and predicting water pollution.", "venue": "Analytical sciences the international journal of the Japan Society for Analytical Chemistry", "year": 2017.0, "author_names": ["Mengting Li", "Lu Zhang", "Xiaolong Yao", "Xingyu Jiang"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 85558895, "title": "Comparison of Membrane Inlet and Capillary Introduction Miniature Mass Spectrometry for Liquid Analysis", "abstract": "Membrane inlet mass spectrometry (MIMS) is commonly used for detecting the components in liquid samples. When a liquid sample flows through a membrane, certain analytes will permeate into the vacuum chamber of a mass spectrometer from the solution. The properties of the membrane directly determine the substances that can be detected by MIMS. A capillary introduction (CI) method we previously proposed can also be used to analyze gas and volatile organic compounds (VOCs) dissolved in liquids. When CI analysis is carried out, the sample is drawn into the mass spectrometer with no species discrimination. The performance of these two injection methods was compared in this study, and similar response time and limit of detection (LOD) can be acquired. Specifically, MIMS can provide better detection sensitivity for most inorganic gases and volatile organic compounds. In contrast, capillary introduction shows wider compatibility on analyte types and quantitative range, and it requires less sample consumption. 
As the two injection methods have comparable characteristics and can be coupled with a miniature mass spectrometer, factors such as cost, pollution, device size, and sample consumption should be comprehensively considered when choosing a satisfactory injection method in practical applications.", "venue": "Polymers", "year": 2019.0, "author_names": ["Wenyan Shi", "Xinqiong Lu", "Jinbo Zhang", "Jianhong Zhao", "Lili Yang", "Quan Yu", "Xiaohao Wang"], "n_citations": 7, "n_key_citations": 0, "score": 0}, {"corpus_id": 30843256, "title": "Delicate polydimethylsiloxane hollow fibre membrane interfaces for condensed phase membrane introduction mass spectrometry (CP MIMS)", "abstract": "RATIONALE On line analytical techniques such as condensed phase membrane introduction mass spectrometry (CP MIMS) permit direct and rapid analyte measurements in complex samples. Direct, rapid analytical methods are desirable because they eliminate potential contamination and/or dilution from sample workup steps, facilitate rapid sample screening and allow 'real time' monitoring applications. METHODS PDMS hollow fibre membrane (HFM) flow cell interfaces (215 um, 35 um, and 0.5 um thick composite) were coupled with an electrospray ionization (ESI) triple quadrupole mass spectrometer. A simultaneous push/pull methanol acceptor phase delivery system and membrane mounting via epoxy potting ensured that the delicate membranes were not ruptured during construction or sample measurements. Both flow cell and direct insertion 'J Probe' interfaces using the 0.5 um thick composite PDMS HFM were utilized for direct naphthenic acid measurements. RESULTS Delicate HFM CP MIMS interfaces were used for the rapid screening and continuous, on line monitoring of carboxylic acids and hydroxylated compounds directly in complex sample matrices under ambient conditions at pptr ppb detection limits. Push/pull acceptor phase (methanol) delivery maintained ambient hydrostatic pressures within the HFMs, improving ESI stability and analytical sensitivity, especially with stopped acceptor flow operation. Signal response times less than 2 min were achieved for thin, composite PDMS HFMs at 30degC. The continuous monitoring of naphthenic acid degradation was demonstrated. CONCLUSIONS Delicate PDMS HFM CP MIMS interfaces were developed and used for the direct, on line detection of low volatility, polar analytes in complex aqueous samples. Composite PDMS HFM interfaces yielded the best overall analytical performance improvements, and were used to demonstrate the direct measurement of naphthenic acids in complex aqueous samples.", "venue": "Rapid communications in mass spectrometry RCM", "year": 2014.0, "author_names": ["Megan D Willis", "Kyle D Duncan", "Erik T Krogh", "Chris G Gill"], "n_citations": 17, "n_key_citations": 1, "score": 0}, {"corpus_id": 338523, "title": "Membrane introduction mass spectrometry (MIMS) a versatile tool for direct, real time chemical measurements.", "abstract": "Membrane introduction mass spectrometry (MIMS) is a direct, continuous, on line measurement technique. It utilizes a membrane to semi selectively transfer analyte mixtures from a sample to a mass spectrometer, rejecting the bulk of the sample matrix, which can be a gas, liquid or solid/slurry. Analyte selectivity and sensitivity are affected by optimizations at the membrane, ionization and the mass spectrometer levels. 
MIMS can be roughly classified by the acceptor phase that entrains analyte(s) to the mass spectrometer after membrane transport, either a gaseous acceptor phase (GP MIMS) or condensed acceptor phase (CP MIMS) The aim of this article is to provide an introduction to MIMS as a technique and to explore current variants, recent developments and modern applications, emphasizing examples from our group, the Applied Environmental Research Laboratories as well as selected work from others in this emerging area. Also provided is a synopsis of current and future directions for this versatile analytical technique.", "venue": "Journal of mass spectrometry JMS", "year": 2014.0, "author_names": ["Erik T Krogh", "Chris G Gill"], "n_citations": 21, "n_key_citations": 0, "score": 0}]} -{"query": "Risk Management: Comparative Study between Islamic Banks and Conventional Banks", "session_id": 2456631703440925, "user_id": 5114560687490425, "candidates": [{"corpus_id": 214198772, "title": "Risk Management: Comparative Study Between Islamic Banks and Conventional Banks", "abstract": "In the future the role of Islamic Banking/Sharia should be developed as an alternative source of corporate financing in addition to conventional bank financing. The role of this institution is increasing because based on survey Islamic Development Bank for certain types of risks attached to Islamic Bank is relatively easier to manage it compared with conventional banks. Easier risk management results in lower financing risks, making it easy to compete because it is profitable for banks, corporations and the economy. The survey results show that in Islamic Bank: Capital is quite good, Capital and Liquidity risk is low. Credit, market and operating risk moderate. More concerned about credit and liquidity risk. The most commonly used risk management techniques are Credit rating.", "venue": "", "year": 2020.0, "author_names": ["Zainul Kisman"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 213393929, "title": "Risk and risk management practices", "abstract": "This study aims to compare types and levels of risk and risk management practices (RMPs) including the recognition, identification, assessment, analysis, monitoring and control of risk in both Islamic and conventional banks.,A questionnaire survey was conducted among the Islamic and conventional banks in Qatar, together with an analysis of archival data extracted from the Thomson Reuters Eikon database for the period 2009 2018. Data were analysed using descriptive statistics, ANOVA and regression analysis.,Islamic banks encounter unique types and levels of risk that are not encountered by conventional banks. In Islamic banks, risks such as those of operation and Sharia non compliance are perceived to be higher, while in conventional banks other risks such as those of credit and insolvency are higher; other risks, for example, liquidity risk, are faced by both. RMPs are determined by understanding risk and risk management, risk identification, risk monitoring and control and credit risk analysis, but not by risk assessment and analysis. However, the RMPs of the two types of bank are not significantly different, except in the analysis of credit risk.,The study contributes to the debate in the literature by developing a better understanding of the dynamism of risk management in Qatari banks, which can be extended to similar contexts in the region. However, the relatively small sample size in only one country limits the possibility of generalizing the findings. 
The survey methodology is based on the perception of bankers rather than their actual actions and does not provide in depth analysis for each type of risk, especially credit risk. However, using archival data, in addition to those from the survey, minimises the bias that would result from depending on one source of data.,The study provides valuable insights into the different types and levels of risk, as well as the RMPs in Islamic and conventional banks, which can help in guiding the future development and regulation of risk management in the banking sector of Qatar and its region.,The study helps to explain the mixed results of previous studies that compare types and levels of risk and RMPs in Islamic and conventional banks. Using different types of data and analysis, it provides evidence from one of the fastest growing economies in the world. It also addresses the concerns over RMPs in banks since the global financial crisis.", "venue": "", "year": 2020.0, "author_names": ["Adel Elgharbawy"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 213004773, "title": "CREDIT RISK MANAGEMENT A COMPARATIVE STUDY BETWEEN ISLAMIC AND CONVENTIONAL BANKS IN TURKEY.", "abstract": "This study aims to identify variables which determine credit risk in Islamic and Conventional banks. Panel data fixed effect model employed to analyze which belongs to three Islamic Banks in Turkey for the period 2008 Q1 to 2017 Q4. While for conventional banks, previous studies that has been conducted in Turkey used to compare Islamic to Conventional banks (CB) Non performing Loans (NPL) ratio was used as a proxy for credit risk. Result from fixed effect model showed that NPL in Islamic Banks is positively affected by Loan Loss Provision and Proportion of Loans to Deposits, and it is negatively affected by Assets Size. While literature showed that conventional bank's credit risk is positively affected by Net Interest Margin, Loan Loss Provision, and Capital Adequacy Ratio and it is negatively affected by Proportion of Loan to Deposits, Proportion of Loans to Assets and Size. There were clear differences between both Islamic and Conventional banks related to all variables of study except Loan Loss Provision and Proportion of Loan to Assets ratios.", "venue": "", "year": 2019.0, "author_names": ["Sakir Gormus", "Mohammed Alkhawaja"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 169949241, "title": "LIQUIDITY RISK MANAGEMENT AND ITS IMPACT ON PERFORMANCE OF THE BANKS: A COMPARATIVE STUDY BETWEEN ISLAMIC AND CONVENTIONAL BANKS OF PAKISTAN, MALAYSIA AND INDONESIA", "abstract": "Liquidity risk Management is fundamental to sound banking practice. No doubt today all banking institutions face countless risks such as liquidity risk which can cause failure of a banking system. Therefore, a proper risk management technique is necessary for the existence and the growth of banks. Therefore, the purpose of this study is to examine the effectiveness of risk management practice that is liquidity risk their impact on performance or Profitability of Islamic and conventional banks. Liquidity risk is measured by loan to deposit ratio, cash to total asset ratio. Performance measure proxies were ROE and ROA for both Islamic and Conventional banks. Data are panel from 2011 2015 which is taken from the financial reports of Islamic and conventional banks. Regression analysis has been used to extract the results. 
The result of this study concluded that how this liquidity risk will affect the bank performance in conventional and Islamic banks.", "venue": "", "year": 2018.0, "author_names": ["Usama Yaqoob", "Umair Khalid"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 55092911, "title": "Liquidity Risk Management: A Comparative Study Between Islamic And Conventional Banks", "abstract": "This paper examines the factors that affect the liquidity risk for Islamic and conventional banks in the Golf countries, using the panel data for 11 IBs and 33 CBs between 2006 and 2013. Our results show that return on equity,A A Net Interest Margin, Capital Adequacy Ratio and inflation rate have a positive impact on liquidity risk for Islamic banks, while returns on assets, Non Performing Loan, size and GDP growth have a negative impact.A On the other hand, in conventional banks, size, Return on Equity, Net Interest Margin, Capital Adequacy Ratio, GDP growth and inflation rate have a positive impact, whereas the Return on Assets, Non Performing Loan have a negative impact on liquidity risk.A This study tries to see how Islamic and conventional banks manage their liquidity in response to changes on the basis of several factors.", "venue": "", "year": 2015.0, "author_names": ["Ameni Ghenimi", "Mohamed Omri"], "n_citations": 19, "n_key_citations": 1, "score": 0}, {"corpus_id": 155668843, "title": "Risk Management Practiced Tools in the MENA Region: A Comparative Study between Islamic and Conventional Banks", "abstract": "ABSTRACT The purpose of the study is to investigate the current risk management practices of Islamic and conventional banks in the MENA region. The study is based on a survey of 47 banks, including 24 conventional and 23 Islamic banks. The collected data were analysed using descriptive statistics and t tests. The findings indicate that banks in MENA region have effective risk strategies and effective risk management frameworks in place. Furthermore, the findings reveal that credit risk is considered the most important for both conventional and Islamic banks followed by liquidity risk. Finally, both conventional and Islamic banks continue to rely on traditional credit risk mitigation tools. These findings have significant contributions to the literature by comprehensively clarifying and critically analysing the current state of risk management among the Islamic banks and conventional banks located in the MENA region. JEL Classifications: G20, G21, G28, Keywords: risk management; MENA; Islamic banking; Islamic finance; VaR I. INTRODUCTION Risk management in the financial sector is very important since the ultimate goal of the institution is to maximize revenues and shareholders value. Moreover, the recent financial crisis was marked by market volatility, lack of liquidity in many financial markets and increased systemic risk (Delloitte, 2011) This crisis has forced banks to take a critical look at how they manage risk and has exhibited some significant weaknesses in risk management in financial industry (KPMG, 2009) This difficulty has highlighted the importance of risk management; as a result many institutions have reviewed their risk management models. An active role was undertaken in re examining their approaches of risk management, the establishment of risk management policy and approval of risk appetite. Solid practices risk management in the banking sector is important for financial stability and economic development. 
A robust framework of risk management can help banks to reduce their exposure to risks, and enhancing their ability to compete in the market (Mirakhor and Iqbal, 2007) Reducing the exposure will reduce systemic risk. Thus, it is necessary for banks to put in place a comprehensive risk management to identify, measure, monitor, manage, and report the various categories of risk. Risk management is a key element of the strategic management of the organization. This is the process whereby organizations methodically address the risks their activities with the goal of achieving sustained benefit. In this regard, CBSB (2011) forces authorities to ensure that banks have in place a comprehensive risk management. The process of risk management is a structured and consistent approach to identify and understand the potential risk factors and evaluation of consequences and uncertainties associated with these risk factors identified. Based on this information, the best plan of action is evaluated and selected to address identified risks and achieve the desired objectives. This study extends the work of Tafri et al. (2011) but differ in some aspects. Firstly, the questionnaire is similar but not identical; in fact it investigates the liquidity risk, which has not been examined by Tafri et al. (2011) Secondly, this study explores the risk management tools practiced in Islamic and conventional banks in a different environment (MENA Middle Eastern and North Africa region) The main purpose is to examine the current practices in risk management methodologies of Islamic and conventional banks in the MENA region. It discusses and analyses the tools and methods used in managing credit risk, market risk, liquidity risk and operational risk among Islamic and conventional banks in the aim to identify the convergence in the practices of risk management and risk mitigation between Islamic and conventional banks. The remainder of the paper is divided into four sections: Section II provides a brief review of the literature, Section III describes the methodology along with research hypotheses, Section IV discusses the main empirical results and Section V presents the main conclusions, limitations of the research.", "venue": "", "year": 2015.0, "author_names": ["Rim Ben Selma Mokni", "Abdelghani Echchabi", "Mohamed Taher Rajhi"], "n_citations": 8, "n_key_citations": 2, "score": 0}, {"corpus_id": 157399794, "title": "A comparative study of Islamic and conventional banks' risk management practices: empirical evidence from Pakistan", "abstract": "Abstract While conventional bank risk management practices are well documented in the literature, there is limited research devoted at comparing the risk management practices of Islamic and conventional banks and how the recent financial crisis affected the approach taken in each banking model to manage the risks. In this paper, we use self administered questionnaire to collect data from 150 bank senior managers and risk specialists from Pakistani conventional and Islamic banks to identify the main contributing factors to their risk management practices after the 2007 2008 financial crisis. 
The study results reveal that risk identification, risk assessment and analysis, credit risk analysis and risk governance are the most efficient and influential variables in explaining the risk management practices of Islamic banks, while understanding risk management, credit risk analysis and risk governance are the most significant and contributing variables in the risk management practices of conventional banks. Differences are also observed between Islamic and conventional banks in their liquidity risk analysis and risk governance. The results presented in this study are likely to benefit bank managers, investors, regulators and policymakers as they will serve them as guide when developing, reformulating and overseeing the bank(s) existing risk management practices.", "venue": "", "year": 2018.0, "author_names": ["Asma Abdul Rehman", "Abdelhafid Benamraoui", "Aasim Munir Dad"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 211444480, "title": "Exchange Rate Risk Management in Participation (Islamic) and Conventional Banks in Turkey: A Comparative Study", "abstract": "", "venue": "International Business Management", "year": 2019.0, "author_names": ["Fatma Mansour", "Hatice Dogukanli"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 169421493, "title": "The State of Liquidity Risk Management of Islamic Banks in Bangladesh: A Comparative Study with Conventional Banks", "abstract": "This paper aims to analyze the current state of liquidity and liquidity risk management of Islamic banks, the historical trend of the liquidity position, and provides a comparison with the liquidity position of conventional banks in Bangladesh. The paper utilizes liquidity ratio, deployment ratio, profit sharing investment account (PSIA) to total deposits ratio, liquidity gap over a specific time period, net stable funding ratio (NSFR) and liquidity coverage ratio (LCR) to discuss the state of liquidity and the trend of liquidity of Islamic banks. Five Islamic banks and five private commercial conventional banks, which do not have any Islamic banking branches, or windows, are chosen as samples. The data is collected from the annual reports published by selected commercial banks. Simple descriptive statistics such as mean and standard deviations are used to analyze the data. This study finds that the liquidity ratio and deployment ratios for Islamic banks are in a downward trend, although by a small percentage. Islamic banks have a negative short term liquidity gap, although by a small percentage and the variations of liquidity gap are much higher, and the gap is in a declining trend towards being positive. Conventional banks have a positive short term liquidity gap. Profit sharing investment accounts are experiencing an increasing trend and occupy the major portion of deposits. Liquidity ratio and deployment ratio remain higher for Islamic banks than conventional banks. For the past two years, both types of banks have maintained an adequate ratio as required in Basel III.", "venue": "", "year": 2018.0, "author_names": ["Md Abdul Jalil", "Allie Biswas"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 168301281, "title": "Liquidity Risk Management: A Comparative Study between Conventional and Islamic Banks in Bangladesh", "abstract": "Liquidity risk may arise from diverse operations of financial intermediaries, facilitators and supporters as they are fully liable to make available liquidity when required by the third party. 
Incase of Islamic Banks additional efforts are required for scaling liquidity management due to their unique characteristics and conformity with Shariah principles. The objective of this study is to look into the liquidity risk associated with the solvency of the financial institutions, with a purpose to evaluate liquidity risk management (LRM) through a comparative analysis between conventional and Islamic banks of Bangladesh. This paper investigates the significance of Size of the Firm, Net Working Capital, Return on Equity, Capital Adequacy and Return on Assets (ROA) on Liquidity Risk Management in conventional and Islamic banks in Bangladesh. The study has taken six mid size banks three conventional and three Islamic banks as samples. It is based on secondary data which are collected from the selected banks' annual reports, covering a period of 2007 2011. Independent variables that have positive but insignificant relation are; size of the bank and net working capital to liquidity risk in Islamic banks and in case of conventional banks size of bank is negatively related with the liquidity risk. Only return on assets is positively affecting the liquidity risk at 10% level in case of conventional banks, but in Islamic banks the relationship is insignificant. The other variables are found to be insignificant in affecting the liquidity risk for both the conventional and Islamic banks in Bangladesh Journal of Business and Technology (Dhaka) Vol.10(2) 2015; 18 35", "venue": "", "year": 2016.0, "author_names": ["Lutfor S Rahman", "Sm Hasanul Banna"], "n_citations": 32, "n_key_citations": 7, "score": 0}]} -{"query": "An empirical study on evaluation metrics of generative adversarial networks.", "session_id": 3974817014672852, "user_id": 6056927808130936, "candidates": [{"corpus_id": 49326005, "title": "An empirical study on evaluation metrics of generative adversarial networks", "abstract": "Evaluating generative adversarial networks (GANs) is inherently challenging. In this paper, we revisit several representative sample based evaluation metrics for GANs, and address the problem of how to evaluate the evaluation metrics. We start with a few necessary conditions for metrics to produce meaningful scores, such as distinguishing real from generated samples, identifying mode dropping and mode collapsing, and detecting overfitting. With a series of carefully designed experiments, we comprehensively investigate existing sample based metrics and identify their strengths and limitations in practical settings. Based on these results, we observe that kernel Maximum Mean Discrepancy (MMD) and the 1 Nearest Neighbor (1 NN) two sample test seem to satisfy most of the desirable properties, provided that the distances between samples are computed in a suitable feature space. Our experiments also unveil interesting properties about the behavior of several popular GAN models, such as whether they are memorizing training samples, and how far they are from learning the target distribution.", "venue": "ArXiv", "year": 2018.0, "author_names": ["Qiantong Xu", "Gao Huang", "Yang Yuan", "Chuan Guo", "Yu Sun", "Felix Wu", "Kilian Q Weinberger"], "n_citations": 125, "n_key_citations": 15, "score": 1}, {"corpus_id": 9957731, "title": "BEGAN: Boundary Equilibrium Generative Adversarial Networks", "abstract": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto encoder based Generative Adversarial Networks. 
This method balances the generator and discriminator during training. Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "venue": "ArXiv", "year": 2017.0, "author_names": ["David Berthelot", "Tom Schumm", "Luke Metz"], "n_citations": 894, "n_key_citations": 148, "score": 0}, {"corpus_id": 16153365, "title": "Generative Adversarial Imitation Learning", "abstract": "Consider learning a policy from example expert behavior, without interaction with the expert or access to reinforcement signal. One approach is to recover the expert's cost function with inverse reinforcement learning, then extract a policy from that cost function with reinforcement learning. This approach is indirect and can be slow. We propose a new general framework for directly extracting a policy from data, as if it were obtained by reinforcement learning following inverse reinforcement learning. We show that a certain instantiation of our framework draws an analogy between imitation learning and generative adversarial networks, from which we derive a model free imitation learning algorithm that obtains significant performance gains over existing model free methods in imitating complex behaviors in large, high dimensional environments.", "venue": "NIPS", "year": 2016.0, "author_names": ["Jonathan Ho", "Stefano Ermon"], "n_citations": 1274, "n_key_citations": 290, "score": 0}, {"corpus_id": 8239952, "title": "Learning to Discover Cross Domain Relations with Generative Adversarial Networks", "abstract": "While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross domain relations when given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN) Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity.", "venue": "ICML", "year": 2017.0, "author_names": ["Taeksoo Kim", "Moonsu Cha", "Hyunsoo Kim", "Jung Kwon Lee", "Jiwon Kim"], "n_citations": 1238, "n_key_citations": 144, "score": 0}, {"corpus_id": 18828233, "title": "Towards Principled Methods for Training Generative Adversarial Networks", "abstract": "The goal of this paper is not to introduce a single algorithm or method, but to make theoretical steps towards fully understanding the training dynamics of generative adversarial networks. In order to substantiate our theoretical analysis, we perform targeted experiments to verify our assumptions, illustrate our claims, and quantify the phenomena. This paper is divided into three sections. The first section introduces the problem at hand. The second section is dedicated to studying and proving rigorously the problems including instability and saturation that arize when training generative adversarial networks. 
The third section examines a practical and theoretically grounded direction towards solving these problems, while introducing new tools to study them.", "venue": "ICLR", "year": 2017.0, "author_names": ["Martin Arjovsky", "Leon Bottou"], "n_citations": 1261, "n_key_citations": 129, "score": 0}, {"corpus_id": 206771128, "title": "Least Squares Generative Adversarial Networks", "abstract": "Unsupervised learning with generative adversarial networks (GANs) has proven hugely successful. Regular GANs hypothesize the discriminator as a classifier with the sigmoid cross entropy loss function. However, we found that this loss function may lead to the vanishing gradients problem during the learning process. To overcome such a problem, we propose in this paper the Least Squares Generative Adversarial Networks (LSGANs) which adopt the least squares loss function for the discriminator. We show that minimizing the objective function of LSGAN yields minimizing the Pearson X2 divergence. There are two benefits of LSGANs over regular GANs. First, LSGANs are able to generate higher quality images than regular GANs. Second, LSGANs perform more stable during the learning process. We evaluate LSGANs on LSUN and CIFAR 10 datasets and the experimental results show that the images generated by LSGANs are of better quality than the ones generated by regular GANs. We also conduct two comparison experiments between LSGANs and regular GANs to illustrate the stability of LSGANs.", "venue": "2017 IEEE International Conference on Computer Vision (ICCV)", "year": 2017.0, "author_names": ["Xudong Mao", "Qing Li", "Haoran Xie", "Raymond Y K Lau", "Zhen Wang", "Stephen Paul Smolley"], "n_citations": 2359, "n_key_citations": 288, "score": 0}, {"corpus_id": 3366315, "title": "Spectral Normalization for Generative Adversarial Networks", "abstract": "One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL 10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.", "venue": "ICLR", "year": 2018.0, "author_names": ["Takeru Miyato", "Toshiki Kataoka", "Masanori Koyama", "Yuichi Yoshida"], "n_citations": 2297, "n_key_citations": 402, "score": 0}, {"corpus_id": 11758569, "title": "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks", "abstract": "In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs) that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. 
Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks demonstrating their applicability as general image representations.", "venue": "ICLR", "year": 2016.0, "author_names": ["Alec Radford", "Luke Metz", "Soumith Chintala"], "n_citations": 8777, "n_key_citations": 1286, "score": 0}, {"corpus_id": 1033682, "title": "Generative Adversarial Nets", "abstract": "We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.", "venue": "NIPS", "year": 2014.0, "author_names": ["Ian J Goodfellow", "Jean Pouget-Abadie", "Mehdi Mirza", "Bing Xu", "David Warde-Farley", "Sherjil Ozair", "Aaron C Courville", "Yoshua Bengio"], "n_citations": 25214, "n_key_citations": 4494, "score": 0}, {"corpus_id": 12803511, "title": "Conditional Generative Adversarial Nets", "abstract": "Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.", "venue": "ArXiv", "year": 2014.0, "author_names": ["Mehdi Mirza", "Simon Osindero"], "n_citations": 5128, "n_key_citations": 742, "score": 0}]} -{"query": "Eloctrolyte based review paper of batteries", "session_id": 1599290339038399, "user_id": 6635925831940318, "candidates": [{"corpus_id": 133790035, "title": "Review Paper on RF based Energy Harvesting System", "abstract": "As the law of conservation of Energy states that energy can neither be created nor be destroyed, it can only be converted or transformed from one form to another, moreover there are various sources of energy like solar, wind, geothermal etc. The purpose of this paper is to put light on radio frequency based energy harvesting systems. The said RF energy is currently transmitted from various sources/transmitters which include mobile base stations, mobile, telephone, TV/radio broadcast stations and handheld radios. 
The propensity to gather or harvest RF energy from the committed sources empowers or authorizes wireless charging for low power appliances or devices which furthermore results in better product design dependableness and utilization. The battery operating systems can be charged gradually and slowly to abolish battery replacements or to extend battery durability disposable batteries can be used. The battery free systems i.e. RF energy based devices can be designed to work upon availability i.e. when the sufficient charge is accumulated. In all the cases mentioned above the devices or the appliances can be operated without the usage of cables battery panels connectors which can make these devices more mobile and portable while operation and charging as well. All this and more can be achieved by RF based energy harvesting and main cause or reason to harvest RF based energy is that it is consequentially FREE energy. The sources of RF energy are increasing day by day like mobile based transmitters from which more and more energy can be harvested. This paper more importantly focuses on parameters to design the system, methods, different frequency ranges that can be utilized and the respective circuitry for converting Low voltage output to High voltage for various applications using RF based energy harvesting.", "venue": "Communications on Applied Electronics", "year": 2019.0, "author_names": ["Parth Thakar", "Ameya Kadam"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 211193140, "title": "Emerging challenges in the thermal management of cellulose nanofibril based supercapacitors, lithium ion batteries and solar cells: A review.", "abstract": "In recent years, extensive efforts have been devoted to electronic miniaturization and integration. Accordingly, heating up of electronics has become a critical problem that needs to be urgently solved by efficient and reliable thermal management. Electronic device substrates made of cellulose nanofibrils (CNFs) exhibit outstanding flexibility, mechanical properties, and optical properties. Combining CNFs with high thermal conductivly fillers is an effective thermal management technique. This paper focuses on the thermal management of electronic devices and highlights the potential of CNF based materials for efficient thermal management of energy storage electronic such as supercapacitors, lithium ion batteries and solar cells. A high thermal conductivity composite material for electronic devices can be obtained by combining CNFs as the framework material with carbon nanotubes, graphene, and inorganic nitrides. Moreover, The research progress in the application of CNFs based materials for supercapacitors, lithium ion batteries and solar cells is highlighted, and the emerging challenges of different CNFs based energy storage devices are discussed.", "venue": "Carbohydrate polymers", "year": 2020.0, "author_names": ["Yuehua Zhang", "Ningke Hao", "Xue-jiao Lin", "Shuangxi Nie"], "n_citations": 41, "n_key_citations": 0, "score": 1}, {"corpus_id": 216264761, "title": "Review Conducting Polymer Based Binders for Lithium Ion Batteries and Beyond", "abstract": "In the search for active Lithium ion battery materials with ever increasing energy density, the limits of conventional auxiliary materials, such as binders and conducting additives are being tested. Binders adhere to active substances and current collectors, yielding an interconnected electrode structure that ensures mechanical integrity during the (de )lithiation process. 
Even though the battery binder only accounts for a fraction of battery weight and cost, it is a bottleneck technology in the deployment of high energy density active materials that experience significant volume variation and side reactions. This review paper discusses research on alternative binders derived from conducting polymers (CPs) The use of CPs in binders enables mechanically flexible electronic contacts with the active material with the goal of accommodating larger volume changes within the electrode. Following a summary of the reasoning behind the use of CP based binders, their rational design is reviewed, including novel composite syntheses and chemical modifications. A new class of multifunctional CP based binders exhibits promising properties such as high electronic conductivity, the ability for aqueous processing, and efficient binding that tackle the limiting features of traditional binders. The practical application of these binders in Li ion batteries and beyond is summarized, yielding an outline of current achievements, and a discussion of remaining knowledge gaps and possible future development of such binders.", "venue": "", "year": 2020.0, "author_names": ["Van At Nguyen", "Christian Kuss"], "n_citations": 24, "n_key_citations": 0, "score": 1}, {"corpus_id": 230606021, "title": "Review Paper on Assessment of Groundwater Quality in Open Landfill", "abstract": "Groundwater is one the important source of fresh water available on earth. It is the one which helps in meeting the water needs for various activities. This groundwater cannot be used for general purposes without assessing the quality of water. Physical, chemical and biological characteristics of water should be within the permissible limits. But water quality in most of the areas around the open dump yards are not within the permissible limits due to leachate percolation. This study is to get a detailed idea on the quality of groundwater when waste are dumped in open areas without any Engineered methods. It is found that most of the area around the open landfill contains contaminated groundwater due to open dumping of waste. When the waste contains heavy metals like Zinc (Zn) and Lead (Pb) it is evident that the waste contains batteries, Lead based paints, Fluorescent lamps. This heavy metals, when present beyond the permissible limits of Bureau of Indian Standards (BIS) causes serious health issues when it is consumed unceasingly. If leachate contains Biological Oxygen Demand (BOD) and Chemical Oxygen Demand (COD) presence of organic matter in the groundwater is confirmed. Analysis of water quality using statistical analysis, indicates that most of the characteristics which are highly correlated are highly responsible for contamination of groundwater. Based on the parameters which are beyond the permissible limits, the types of waste deposited in that landfill can be identified. Hence suitable appropriate preventive measures", "venue": "", "year": 2020.0, "author_names": ["M Madhumitha", "C RavathiM"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 220059854, "title": "A critical updated review of the hydrometallurgical routes for recycling zinc and manganese from spent zinc based batteries.", "abstract": "This review paper aims to present and analyse data from the most recent literature (between 2007 and 2019) published on the topic of manganese (Mn) and zinc (Zn) recovery from zinc based spent batteries through hydrometallurgical methods. 
In a first attempt, a detailed comparative assessment of the metals leaching performance (as well as the experimental variables that influence its performance) reported in the various studies with strong acid or bases, potentially supplemented by complexing or reducing agents, as well as the reactions involved, are reviewed and discussed. All data point out that the use of a reductant is needed to fully solubilize Mn from spent batteries during the leaching process. Comparison of the data seem to indicate that most reductants have similar performance and, therefore, the choice of a reductant should be focused on low cost or even waste materials. In a second attempt, the separative processes mostly described in the literature to recover Mn and Zn from leachates are reviewed emphasizing the strengths and weaknesses of each technique. Solvent extraction is the most widely tested process for this aim. A thorough comparison of existing data indicates that, in general, neutral extractants have higher potential for selective separation of Zn and Mn. Furthermore, although chemical precipitation is a simple process, low pure final metal hydroxide products are expected to be achieved when alkaline precipitation is implemented comparatively to the Mn oxidative precipitation where Mn can be recovered selectively as a solid of manganese (IV) oxide.", "venue": "Waste management", "year": 2020.0, "author_names": ["S Maryam Sadeghi", "Joao M Jesus", "Helena M V M Soares"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 224988372, "title": "Thermal management technology of power lithium ion batteries based on the phase transition of materials: A review", "abstract": "Abstract With the rapid development of electric vehicles and hybrid electric vehicles industry, heat generation problem of vehicles power source has been becoming a challenge which influences the temperature distribution and lifespan of batteries. An efficient battery thermal management system for controlling the temperature of batteries in a reasonable range and improving battery module's temperature uniformity to optimize the performance of power lithium ion (Li ion) batteries is necessary. In recent years, phase change material (PCM) is widely used as the working medium of battery thermal management system, which is an effective method to control the working temperature of batteries. In this context, this paper reviews two types of battery thermal management systems (BTMS) based on phase transition principle, including the thermal management system based on solid liquid phase transition principle and the thermal management system based on liquid gas phase transition principle. In addition, for the prediction of battery heat generation, several kinds of existing thermophysical models are reviewed in detail. These thermophysical models can accurately predict the distribution of heating area and the rising trend of temperature, which can provide thought support for the development and construction of thermal management system or model, as well as provide theoretical basis for the established thermal management system. Furthermore, the simulation time and calculation error of various models in computer are also summarized and discussed in this paper. On the other hand, the advantages, disadvantages and cost effectiveness of each battery cooling technology are evaluated and discussed objectively in the latter section of the paper. 
In view of the shortcomings of some technologies, this paper discusses and puts forward appropriate optimization measures to provide a reasonable solution for the further research of battery thermal management system in the future.", "venue": "", "year": 2020.0, "author_names": ["Kun Jiang", "Gaoliang Liao", "E Jiaqiang", "Feng Zhang", "Jingwei Chen", "Erwei Leng"], "n_citations": 13, "n_key_citations": 1, "score": 0}, {"corpus_id": 220313462, "title": "Review Paper on Recent Active Voltage Balancing Methods for Supercapacitor Energy Storage System", "abstract": "Performance of electrical energy storage system and life expectancy of components of storage system plays vital role in performance of electric vehicle. Based on desirable electrical characteristics compared to Li ion batteries, Supercapacitor Energy storage system (SCESS) is nowadays involved as basic element in energy storage system of electric vehicle. Among various factors that decide life expectancy of supercapacitors, major attention is gained by voltage imbalance during charging of array of supercapacitors. Various techniques of active voltage balancing are already introduced and implemented in practical applications. To overcome disadvantages and improve efficiency of these techniques, modifications are done in traditional balancing circuits and also new techniques are introduced by researchers in recent years. This paper reviews about recent active voltage balancing techniques for array of supercapacitor energy storage system for various applications.", "venue": "2019 5th International Conference On Computing, Communication, Control And Automation (ICCUBEA)", "year": 2019.0, "author_names": ["Anuradha A Ghotekar", "B E Kushare"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 20384280, "title": "Paper based batteries: a review.", "abstract": "There is an extensively growing interest in using paper or paper like substrates for batteries and other energy storage devices. Due to their intrinsic characteristics, paper (or paper like) batteries show outstanding performance while retaining low cost, multifunctionality, versatility, flexibility and disposability. In this overview, we review recent achievements in paper (or paper like) batteries as well as their applications. Various types of paper power devices are discussed including electrochemical batteries, biofuel cells, lithium ion batteries, supercapacitors, and nanogenerators. Further scientific and technological challenges in this field are also discussed.", "venue": "Biosensors bioelectronics", "year": 2014.0, "author_names": ["Thu H Nguyen", "Arwa Fraiwan", "Seokheun Choi"], "n_citations": 156, "n_key_citations": 5, "score": 0}, {"corpus_id": 208735191, "title": "A review of electrospun nanofiber based separators for rechargeable lithium ion batteries", "abstract": "Abstract In this paper, state of the art electrospun nanofiber based separators for lithium ion battery (LIB) are reviewed. Recent years, extensive efforts have been made to improve battery performances for future energy applications such as electric vehicles and energy storage systems. Separator, as a crucial component in lithium ion battery, also gained rapid developments to achieve advanced properties. Electrospun nanofiber based membrane is a promising candidate for separator in LIB to enhance lithium ions transportation efficiency due to its ideal features like interconnected porous structures, high porosities and large surface to volume ratio. 
In this review, we classified electrospun separator into five major types namely the monolayer separator, multilayer separator, modified separator, composite separator and gel polymer electrolyte. Each electrospun separator type was comprehensively discussed and summarized to cover the research achievements within the recent years. The outlook and future directions in this research field are also provided.", "venue": "", "year": 2019.0, "author_names": ["Yifu Li", "Qinghai Li", "Zhongchao Tan"], "n_citations": 52, "n_key_citations": 0, "score": 0}, {"corpus_id": 126088349, "title": "Failure modes and mechanisms for rechargeable Lithium based batteries: a state of the art review", "abstract": "The Li ion battery (LiB) is regarded as one of the most popular energy storage devices for a wide variety of applications. Since their commercial inception in the 1990s, LiBs have dominated the consumer market of portable electronic devices, especially for laptops, cell phones, and many medical devices. As the transition of Li ion batteries from being used in portable electronic devices to longer lifetime and more safety critical applications, such as electric cars, electrically powered underwater vehicles, and aircrafts, the price of failure has become much more important in terms of both liability and cost (Hendricks et al. in J Power Sources 297:113 120, 2015) This paper reviews the current development and potential problems of Li ion batteries, particularly focusing on the failure mechanism and its possible solutions of Li ion batteries. It has been a general consensus that Li ion batteries will continue to dominate the battery market in the foreseen future as a convenient electric power source. Finally, this paper provides authors' perspectives on future directions and challenges on experimental and computational modeling aspects of Li based battery researches, in particular, the failure analysis of Li based batteries.", "venue": "", "year": 2019.0, "author_names": ["Dandan Lyu", "Bo Ren", "Shaofan Li"], "n_citations": 27, "n_key_citations": 0, "score": 0}]} -{"query": "the beautyful ones are not yet born", "session_id": 7199559540563421, "user_id": 2518863036702004, "candidates": [{"corpus_id": 191449196, "title": "The Beautyful Ones Are Not Yet Born", "abstract": "The central story in this book tells of an upright man resisting the temptations of easy bribes and easy satisfactions and winning for his honesty nothing but scorn.", "venue": "", "year": 1968.0, "author_names": ["Ayi Kwei Armah"], "n_citations": 256, "n_key_citations": 6, "score": 1}, {"corpus_id": 176738557, "title": "The beautyful ones are not yet born", "abstract": "", "venue": "", "year": 1991.0, "author_names": ["Bob Hurst", "Branford Marsalis", "Branford Marsalis Trio", "Jeff Watts", "Wynton Marsalis", "Courtney Pine"], "n_citations": 90, "n_key_citations": 4, "score": 0}, {"corpus_id": 219086205, "title": "British Popular Culture in Armah's The Beautyful Ones Are Not Yet Born", "abstract": "This article considers the first novel of the Ghanaian author Ayi Kwei Armah, The Beautyful Ones Are Not Yet Born (1968, Oxford: Heinemann) in terms of the novel's references to British popular cul.", "venue": "Scrutiny2", "year": 2019.0, "author_names": ["David S Robinson"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 229200241, "title": "Armah, Ayi Kwei: The Beautyful Ones Are Not Yet Born", "abstract": "", "venue": "", "year": 2020.0, "author_names": ["Thomas Bruckner"], "n_citations": 0, "n_key_citations": 0, 
"score": 0}, {"corpus_id": 163674702, "title": "Phantasy and repression in the beautyful ones are not yet born", "abstract": "A recurrent metaphor in criticism of The Beautyful Ones Are Not Yet Born is that of the penetrating eye whose moral vision illuminates the oneness of truth behind the disparate facts of experience; yet this Platonic metaphor has ignored a central problematic in the novel: the riven psyche of its main character. Ayo Mamadu finds the metaphor at work in Armah's text: \"the black artist, by the tenets of Two Thousand Seasons, penetrates to a deeper, connected whole, which usually has the permanence of cyclical form\" (510) The writer lays bare the real by \"penetrating into the essence of the observed object such that from the slice, properties of the whole structure are revealed. By carefully sifting and straining the particular experience, the artist hopes and means to reach the underlying truth which is also universal\" (510 11) Armah's presentation of decay \"sacrifices little by way of penetration for its artistry,\" says Neil Lazarus (140) \"The authorial voice in The Beautyful Ones unifies this plethora of heterogeneous details by filtering them through the web of its moral intelligence. The commentary thus provided structures the political vision of the work as a whole\" (147) According to Dan Izevbaye, the novel challenges \"the beholder to test his ability to penetrate the object to the beauty beyond. The eye of the beholder thus becomes a moral organ and an index to his moral integrity\" (232) Here, the penetrating eye and the integrity of the \"I\" are one. The moral vision of the penetrating organ is given a specular dominance. Dominant specularity is also assumed in Jean Solomon's statement that \"[the] substance of the book is the man's direct look at the quality of existence around him and his struggle to come to terms with it without doing violence to his moral nature\" (26; emphasis added) The man's vision \"sensitively penetrates and dissects. The opening pages of the book take us to the center of this vision\" with images of trash and filth, thus revealing \"the underlying truth of Ghana\" (26) Lazarus, admittedly, analyzes the novel in less Platonic terms: its modality is such that its meaning emerges in a reciprocal, dialectical manner, and this reciprocity \"is most clearly demonstrated in the complex relationship between the affirmative vision that is implicit in \"the man\" 's search for authentic values and the blasted landscape within which the novel's action is staged\" (137 38) This implies that the reader views the landscape as objectively \"there\" through the continuing moral presence of a fictional subject whom we come to trust. Although trustworthy for some, his vision for others is an overreaction. Achebe declared The Beautyful Ones \"a sick book,\" describing its author as an \"alienated native\" (25) Similar, if less dismissive responses have been articulated by Ben Obumselu, Shatto Arthur Gakwandi, and Leonard Kibera.", "venue": "", "year": 1995.0, "author_names": ["Stewart Crehan"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 161910731, "title": "Symbol and meaning in the beautyful ones are not yet born", "abstract": "(1973) Symbol and meaning in the beautyful ones are not yet born. World Literature Written in English: Vol. 12, No. 1, pp. 
4 26.", "venue": "", "year": 1973.0, "author_names": ["Kolawole Ogungbesan"], "n_citations": 7, "n_key_citations": 0, "score": 0}, {"corpus_id": 38924018, "title": "Speech and Thought Representation in the Beautyful Ones are Not Yet Born", "abstract": "Speech/Thought representation in literature is dubious, for the relationship between the speech/thought represented and the representing clause, the quoted speaker and the narrator/reporter, and the speaking situation of the two speeches have complex and ambiguous features. Especially, when the character's speech or consciousness is registered in Free Indirect mode, tracing the source of the speeches/consciousnesses becomes problematic as the character's and the narrator's speeches/consciousnesses are blended ambiguously. Hence, this thesis intends to probe and depicts the aforementioned features of various modes of Speech and Thought representation and the effects the modes have in The Beautyful Ones Are Not Yet Born. To this end, relevant extracts that represent the modes used are selected and analyzed thoroughly. Consequently, the study depicts that Direct and Free Direct, Quoted Indirect, and Indirect modes of Speech are applied in the novel under study. The first two modes are used to show interging events and behaviors that move the plot of the novel forward and to portray character's behavior. Quoted Indirect Speech is also used widely to bring in the past experience of characters to their immediate story with their own words rather than the narrator's words. Thus, their reported experiences are emphasized for they are represented in the narrator's direct report. However, the narrator's Indirect report is applied to represent past information used as a background for the story narrated. Nevertheless, Free Indirect Speech is rarely used in the novel. Moreover, Modes of Free Direct, Indirect and Free Indirect Thought are used in the novel. Despite mode of Direct Though is rarely applied, Free Direct mode is employed to let readers probe in the character's mind and learns about their consciousness with less narratorial intervention. Furthermore; Free Indirect Thought is applied to portray the consciousness of the character, to ridicule on characters and to create sympathy. Besides, it is used to control the reader's response to characters. 
More to the point, the Indirect Speech is applied to provide information has a background effect.", "venue": "", "year": 2012.0, "author_names": ["Abebawu Eshetu"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 178388761, "title": "The beautyful ones are not yet born a novel", "abstract": "", "venue": "", "year": 1975.0, "author_names": ["Ayi Kwei Armah"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 161891314, "title": "The mythic undercurrent in The Beautyful Ones Are Not Yet Born", "abstract": "", "venue": "", "year": 1988.0, "author_names": ["John Coates"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 164753545, "title": "Reviews: A Beautyful Novel: The Beautyful Ones are Not Yet Born", "abstract": "", "venue": "", "year": 1970.0, "author_names": ["Ralph Noble"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "The supervised hierarchical dirichlet process", "session_id": 2557113048010431, "user_id": 2245888302938728, "candidates": [{"corpus_id": 1073194, "title": "The Supervised Hierarchical Dirichlet Process", "abstract": "We propose the supervised hierarchical Dirichlet process (sHDP) a nonparametric generative model for the joint distribution of a group of observations and a response variable directly associated with that whole group. We compare the sHDP with another leading method for regression on grouped data, the supervised latent Dirichlet allocation (sLDA) model. We evaluate our method on two real world classification problems and two real world regression problems. Bayesian nonparametric regression models based on the Dirichlet process, such as the Dirichlet process generalised linear models (DP GLM) have previously been explored; these models allow flexibility in modelling nonlinear relationships. However, until now, hierarchical Dirichlet process (HDP) mixtures have not seen significant use in supervised problems with grouped data since a straightforward application of the HDP on the grouped data results in learnt clusters that are not predictive of the responses. The sHDP solves this problem by allowing for clusters to be learnt jointly from the group structure and from the label assigned to each group.", "venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "year": 2015.0, "author_names": ["Andrew M Dai", "Amos J Storkey"], "n_citations": 33, "n_key_citations": 3, "score": 1}, {"corpus_id": 19147241, "title": "Supervised Topic Modeling Using Hierarchical Dirichlet Process Based Inverse Regression: Experiments on E Commerce Applications", "abstract": "The proliferation of e commerce calls for mining consumer preferences and opinions from user generated text. To this end, topic models have been widely adopted to discover the underlying semantic themes (i.e. topics) Supervised topic models have emerged to leverage discovered topics for predicting the response of interest (e.g. product quality and sales) However, supervised topic modeling remains a challenging problem because of the need to prespecify the number of topics, the lack of predictive information in topics, and limited scalability. In this paper, we propose a novel supervised topic model, Hierarchical Dirichlet Process based Inverse Regression (HDP IR) HDP IR characterizes the corpus with a flexible number of topics, which prove to retain as much predictive information as the original corpus. 
Moreover, we develop an efficient inference algorithm capable of examining large scale corpora (millions of documents or more) Three experiments were conducted to evaluate the predictive performance over major e commerce benchmark testbeds of online reviews. Overall, HDP IR outperformed existing state of the art supervised topic models. Particularly, retaining sufficient predictive information improved predictive R squared by over 17.6 percent; having topic structure flexibility contributed to predictive R squared by at least 4.1 percent. HDP IR provides an important step for future study on user generated texts from a topic perspective.", "venue": "IEEE Transactions on Knowledge and Data Engineering", "year": 2018.0, "author_names": ["Weifeng Li", "Junming Yin", "Hsinchsun Chen"], "n_citations": 7, "n_key_citations": 1, "score": 0}, {"corpus_id": 14747499, "title": "Supervised Hierarchical Dirichlet Processes with Variational Inference", "abstract": "We present an extension to the Hierarchical Dirichlet Process (HDP) which allows for the inclusion of supervision. Our model marries the non parametric benefits of HDP with those of Supervised Latent Dirichlet Allocation (SLDA) to enable learning the topic space directly from data while simultaneously including the labels within the model. The proposed model is learned using variational inference which allows for the efficient use of a large training dataset. We also present the online version of variational inference, which makes the method scalable to very large datasets. We show results comparing our model to a traditional supervised parametric topic model, SLDA, and show that it outperforms SLDA on a number of benchmark datasets.", "venue": "2013 IEEE International Conference on Computer Vision Workshops", "year": 2013.0, "author_names": ["Cheng Zhang", "Carl Henrik Ek", "Xavi Gratal", "Florian T Pokorny", "Hedvig Kjellstrom"], "n_citations": 17, "n_key_citations": 3, "score": 0}, {"corpus_id": 1476569, "title": "Statistical Anomaly Detection in Human Dynamics Monitoring Using a Hierarchical Dirichlet Process Hidden Markov Model", "abstract": "Understanding of human dynamics has drawn attention to various areas. The wide spread of positioning technologies, such as GPS facilitates location information to be obtained with high spatio temporal resolution as well as at low costs. By collecting individual location information in real time, monitoring of human dynamics has recently become possible and is expected to the area of dynamic traffic control. In this monitoring, detecting anomalous states of human dynamics become important. This research aims to define an anomaly detection problem of the human dynamics monitoring with time series gridded population data and develop an anomaly detection method for this problem. According to the result of a review we have conducted, we discussed the characteristics of the anomaly detection in human dynamics monitoring and categorized our problem to a semi supervised anomaly detection problem that detects contextual anomalies behind time series data. We developed an anomaly detection method based on a sticky hierarchical Dirichlet process hidden Markov model, which is able to estimate the number of latent states according to the input data. Results of the experiment with synthetic data showed that our proposed method has good fundamental performance with respect to the detection rate. 
Through the experiments with real gridded population data, anomalies were detected when and where an actual social event had occurred.", "venue": "IEEE Transactions on Intelligent Transportation Systems", "year": 2017.0, "author_names": ["Takashi Fuse", "Kei Kamiya"], "n_citations": 33, "n_key_citations": 0, "score": 0}, {"corpus_id": 6996650, "title": "Hierarchical Dirichlet scaling process", "abstract": "We present the hierarchical Dirichlet scaling process (HDSP) a Bayesian nonparametric mixed membership model. The HDSP generalizes the hierarchical Dirichlet process to model the correlation structure between metadata in the corpus and mixture components. We construct the HDSP based on the normalized gamma representation of the Dirichlet process, and this construction allows incorporating a scaling function that controls the membership probabilities of the mixture components. We develop two scaling methods to demonstrate that different modeling assumptions can be expressed in the HDSP. We also derive the corresponding approximate posterior inference algorithms using variational Bayes. Through experiments on datasets of newswire, medical journal articles, conference proceedings, and product reviews, we show that the HDSP results in a better predictive performance than labeled LDA, partially labeled LDA, and author topic model and a better negative review classification performance than the supervised topic model and SVM.", "venue": "Machine Learning", "year": 2016.0, "author_names": ["Dongwoo Kim", "Alice H Oh"], "n_citations": 11, "n_key_citations": 2, "score": 0}, {"corpus_id": 16848294, "title": "Learning Latent Activities from Social Signals with Hierarchical Dirichlet Processes", "abstract": "Understanding human activities is an important research topic, most noticeably in assisted living and healthcare monitoring environments. Beyond simple forms of activity (e.g. an RFID event of entering a building) learning latent activities that are more semantically interpretable, such as sitting at a desk, meeting with people, or gathering with friends, remains a challenging problem. Supervised learning has been the typical modeling choice in the past. However, this requires labeled training data, is unable to predict never seen before activity, and fails to adapt to the continuing growth of data over time. In this chapter, we explore the use of a Bayesian nonparametric method, in particular the hierarchical Dirichlet process, to infer latent activities from sensor data acquired in a pervasive setting. Our framework is unsupervised, requires no labeled data, and is able to discover new activities as data grows. We present experiments on extracting movement and interaction activities from sociometric badge signals and show how to use them for detecting of subcommunities. Using the popular Reality Mining dataset, we further demonstrate the extraction of colocation activities and use them to automatically infer the structure of social subgroups.", "venue": "", "year": 2014.0, "author_names": ["Dinh Q Phung", "Thuong Nguyen", "Sunil Gupta", "Svetha Venkatesh"], "n_citations": 14, "n_key_citations": 2, "score": 0}, {"corpus_id": 12500367, "title": "Activity recognition using a supervised non parametric hierarchical HMM", "abstract": "The problem of classifying human activities occurring in depth image sequences is addressed. The 3D joint positions of a human skeleton and the local depth image pattern around these joint positions define the features. 
A two level hierarchical Hidden Markov Model (H HMM) with independent Markov chains for the joint positions and depth image pattern, is used to model the features. The states corresponding to the H HMM bottom level characterize the granular poses while the top level characterizes the coarser actions associated with the activities. Further, the H HMM is based on a Hierarchical Dirichlet Process (HDP) and is fully non parametric with the number of pose and action states inferred automatically from data. This is a significant advantage over classical HMM and its extensions. In order to perform classification, the relationships between the actions and the activity labels are captured using multinomial logistic regression. The proposed inference procedure ensures alignment of actions from activities with similar labels. Our construction enables information sharing, allows incorporation of unlabelled examples and provides a flexible factorized representation to include multiple data channels. Experiments with multiple real world datasets show the efficacy of our classification approach. Hierarchical composition of poses enables information sharing and model simplification.The non parametric nature estimates Markov states automatically from data.Inference procedure suitable for sequence classification.", "venue": "Neurocomputing", "year": 2016.0, "author_names": ["Natraj Raman", "Stephen J Maybank"], "n_citations": 37, "n_key_citations": 2, "score": 0}, {"corpus_id": 10720120, "title": "SSHLDA: A Semi Supervised Hierarchical Topic Model", "abstract": "Supervised hierarchical topic modeling and unsupervised hierarchical topic modeling are usually used to obtain hierarchical topics, such as hLLDA and hLDA. Supervised hierarchical topic modeling makes heavy use of the information from observed hierarchical labels, but cannot explore new topics; while unsupervised hierarchical topic modeling is able to detect automatically new topics in the data space, but does not make use of any information from hierarchical labels. In this paper, we propose a semi supervised hierarchical topic model which aims to explore new topics automatically in the data space while incorporating the information from observed hierarchical labels into the modeling process, called Semi Supervised Hierarchical Latent Dirichlet Allocation (SSHLDA) We also prove that hLDA and hLLDA are special cases of SSHLDA. We conduct experiments on Yahoo! Answers and ODP datasets, and assess the performance in terms of perplexity and clustering. The experimental results show that predictive ability of SSHLDA is better than that of baselines, and SSHLDA can also achieve significant improvement over baselines for clustering on the FScore measure.", "venue": "EMNLP", "year": 2012.0, "author_names": ["Xianling Mao", "Zhaoyan Ming", "Tat-Seng Chua", "Si Li", "Hongfei Yan", "Xiaoming Li"], "n_citations": 50, "n_key_citations": 2, "score": 0}, {"corpus_id": 77375170, "title": "ML HDP: A Hierarchical Bayesian Nonparametric Model for Recognizing Human Actions in Video", "abstract": "Action recognition from videos is an important area of computer vision research due to its various applications, ranging from visual surveillance to human computer interaction. To address action recognition problems, this paper presents a framework that jointly models multiple complex actions and motion units at different hierarchical levels. 
We achieve this by proposing a generative topic model, namely, multi label hierarchical Dirichlet process (ML HDP) The ML HDP model formulates the co occurrence relationship of actions and motion units, and enables highly accurate recognition. In particular, our topic model possesses the three level representation in action understanding, where low level local features are connected to high level actions via mid level atomic actions. This allows the recognition model to work discriminatively. In our ML HDP, atomic actions are treated as latent topics and automatically discovered from data. In addition, we incorporate the notion of class labels into our model in a semi supervised fashion to effectively learn and infer multi labeled videos. Using discovered topics and inferred labels, which are jointly assigned to local features, we present the straightforward methods to perform three recognition tasks including action classification, joint classification and segmentation of continuous actions, and spatiotemporal action localization. In experiments, we explore the use of three different features and demonstrate the effectiveness of our proposed approach for these tasks on four public datasets: KTH, MSR II, Hollywood2, and UCF101.", "venue": "IEEE Transactions on Circuits and Systems for Video Technology", "year": 2019.0, "author_names": ["Nguyen Anh Tu", "Thien Huynh-The", "Kifayat-Ullah Khan", "Young-Koo Lee"], "n_citations": 24, "n_key_citations": 0, "score": 0}, {"corpus_id": 14210608, "title": "Weakly Supervised Joint Sentiment Topic Detection from Text", "abstract": "Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework called joint sentiment topic (JST) model based on latent Dirichlet allocation (LDA) which detects sentiment and topic simultaneously from text. A reparameterized version of the JST model called Reverse JST, obtained by reversing the sequence of sentiment and topic generation in the modeling process, is also studied. Although JST is equivalent to Reverse JST without a hierarchical prior, extensive experiments show that when sentiment priors are added, JST performs consistently better than Reverse JST. Besides, unlike supervised approaches to sentiment classification which often fail to produce satisfactory performance when shifting to other domains, the weakly supervised nature of JST makes it highly portable to other domains. This is verified by the experimental results on data sets from five different domains where the JST model even outperforms existing semi supervised approaches in some of the data sets despite using no labeled documents. Moreover, the topics and topic sentiment detected by JST are indeed coherent and informative. 
We hypothesize that the JST model can readily meet the demand of large scale sentiment analysis from the web in an open ended fashion.", "venue": "IEEE Transactions on Knowledge and Data Engineering", "year": 2012.0, "author_names": ["Chenghua Lin", "Yulan He", "Richard M Everson", "Stefan M Ruger"], "n_citations": 297, "n_key_citations": 28, "score": 0}]} -{"query": "speech emotion recognition", "session_id": 7639158819148111, "user_id": 3320034197897867, "candidates": [{"corpus_id": 219065494, "title": "Introducing the Urdu Sindhi Speech Emotion Corpus: A Novel Dataset of Speech Recordings for Emotion Recognition for Two Low Resource Languages", "abstract": "Speech emotion recognition is one of the most active areas of research in the field of affective computing and social signal processing. However, most research is directed towards a select group of languages such as English, German, and French. This is mainly due to a lack of available datasets in other languages. Such languages are called low resource languages given that there is a scarcity of publicly available datasets. In the recent past, there has been a concerted effort within the research community to create and introduce datasets for emotion recognition for low resource languages. To this end, we introduce in this paper the Urdu Sindhi Speech Emotion Corpus, a novel dataset consisting of 1,435 speech recordings for two widely spoken languages of South Asia, that is Urdu and Sindhi. Furthermore, we also trained machine learning models to establish a baseline for classification performance, with accuracy being measured in terms of unweighted average recall (UAR) We report that the best performing model for Urdu language achieves a UAR 65.00% on the validation partition and a UAR 56.96% on the test partition. Meanwhile, the model for Sindhi language achieved UARs of 66.50% and 55.29% on the validation and test partitions, respectively. This classification performance is considerably better than the chance level UAR of 16.67% The dataset can be accessed via https:/zenodo.org/record/3685274.", "venue": "", "year": 2020.0, "author_names": ["Zafi Sherhan Syed", "Sajjad Ali", "Muhammad Shehram", "Abbas Shah"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 210883887, "title": "Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers", "abstract": "Abstract Speech is the most natural way of expressing ourselves as humans. It is only natural then to extend this communication medium to computer applications. We define speech emotion recognition (SER) systems as a collection of methodologies that process and classify speech signals to detect the embedded emotions. SER is not a new field, it has been around for over two decades, and has regained attention thanks to the recent advancements. These novel studies make use of the advances in all fields of computing and technology, making it necessary to have an update on the current methodologies and techniques that make SER possible. 
We have identified and discussed distinct areas of SER, provided a detailed survey of current literature of each, and also listed the current challenges.", "venue": "Speech Commun.", "year": 2020.0, "author_names": ["Mehmet Berkehan Akcay", "Kaya Oguz"], "n_citations": 95, "n_key_citations": 5, "score": 1}, {"corpus_id": 210043094, "title": "A CNN Assisted Enhanced Audio Signal Processing for Speech Emotion Recognition", "abstract": "Speech is the most significant mode of communication among human beings and a potential method for human computer interaction (HCI) by using a microphone sensor. Quantifiable emotion recognition using these sensors from speech signals is an emerging area of research in HCI, which applies to multiple applications such as human reboot interaction, virtual reality, behavior assessment, healthcare, and emergency call centers to determine the speaker's emotional state from an individual's speech. In this paper, we present major contributions for; (i) increasing the accuracy of speech emotion recognition (SER) compared to state of the art and (ii) reducing the computational complexity of the presented SER model. We propose an artificial intelligence assisted deep stride convolutional neural network (DSCNN) architecture using the plain nets strategy to learn salient and discriminative features from spectrogram of speech signals that are enhanced in prior steps to perform better. Local hidden patterns are learned in convolutional layers with special strides to down sample the feature maps rather than pooling layer and global discriminative features are learned in fully connected layers. A SoftMax classifier is used for the classification of emotions in speech. The proposed technique is evaluated on Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Ryerson Audio Visual Database of Emotional Speech and Song (RAVDESS) datasets to improve accuracy by 7.85% and 4.5% respectively, with the model size reduced by 34.5 MB. It proves the effectiveness and significance of the proposed SER technique and reveals its applicability in real world applications.", "venue": "Sensors", "year": 2020.0, "author_names": ["", "Soonil Kwon"], "n_citations": 56, "n_key_citations": 5, "score": 0}, {"corpus_id": 214018462, "title": "Speech emotion recognition with deep convolutional neural networks", "abstract": "Abstract The speech emotion recognition (or, classification) is one of the most challenging topics in data science. In this work, we introduce a new architecture, which extracts mel frequency cepstral coefficients, chromagram, mel scale spectrogram, Tonnetz representation, and spectral contrast features from sound files and uses them as inputs for the one dimensional Convolutional Neural Network for the identification of emotions using samples from the Ryerson Audio Visual Database of Emotional Speech and Song (RAVDESS) Berlin (EMO DB) and Interactive Emotional Dyadic Motion Capture (IEMOCAP) datasets. We utilize an incremental method for modifying our initial model in order to improve classification accuracy. All of the proposed models work directly with raw sound data without the need for conversion to visual representations, unlike some previous approaches. Based on experimental results, our best performing model outperforms existing frameworks for RAVDESS and IEMOCAP, thus setting the new state of the art. For the EMO DB dataset, it outperforms all previous works except one but compares favorably with that one in terms of generality, simplicity, and applicability. 
Specifically, the proposed framework obtains 71.61% for RAVDESS with 8 classes, 86.1% for EMO DB with 535 samples in 7 classes, 95.71% for EMO DB with 520 samples in 7 classes, and 64.3% for IEMOCAP with 4 classes in speaker independent audio classification tasks.", "venue": "Biomed. Signal Process. Control.", "year": 2020.0, "author_names": ["Dias Issa", "M Fatih Demirci", "Adnan Yazici"], "n_citations": 44, "n_key_citations": 3, "score": 0}, {"corpus_id": 150269335, "title": "Feature Selection Based Transfer Subspace Learning for Speech Emotion Recognition", "abstract": "Cross corpus speech emotion recognition has recently received considerable attention due to the widespread existence of various emotional speech. It takes one corpus as the training data aiming to recognize emotions of another corpus, and generally involves two basic problems, i.e. feature matching and feature selection. Many previous works study these two problems independently, or just focus on solving the first problem. In this paper, we propose a novel algorithm, called feature selection based transfer subspace learning (FSTSL) to address these two problems. To deal with the first problem, a latent common subspace is learnt by reducing the difference of different corpora and preserving the important properties. Meanwhile, we adopt the l2,1 norm on the projection matrix to deal with the second problem. Besides, to guarantee the subspace to be robust and discriminative, the geometric information of data is exploited simultaneously in the proposed FSTSL framework. Empirical experiments on cross corpus speech emotion recognition tasks demonstrate that our proposed method can achieve encouraging results in comparison with state of the art algorithms.", "venue": "IEEE Transactions on Affective Computing", "year": 2020.0, "author_names": ["Peng Song", "Wenming Zheng"], "n_citations": 26, "n_key_citations": 3, "score": 0}, {"corpus_id": 218597564, "title": "Clustering Based Speech Emotion Recognition by Incorporating Learned Features and Deep BiLSTM", "abstract": "Emotional state recognition of a speaker is a difficult task for machine learning algorithms which plays an important role in the field of speech emotion recognition (SER) SER plays a significant role in many real time applications such as human behavior assessment, human robot interaction, virtual reality, and emergency centers to analyze the emotional state of speakers. Previous research in this field is mostly focused on handcrafted features and traditional convolutional neural network (CNN) models used to extract high level features from speech spectrograms to increase the recognition accuracy and overall model cost complexity. In contrast, we introduce a novel framework for SER using a key sequence segment selection based on redial based function network (RBFN) similarity measurement in clusters. The selected sequence is converted into a spectrogram by applying the STFT algorithm and passed into the CNN model to extract the discriminative and salient features from the speech spectrogram. Furthermore, we normalize the CNN features to ensure precise recognition performance and feed them to the deep bi directional long short term memory (BiLSTM) to learn the temporal information for recognizing the final state of emotion. 
In the proposed technique, we process the key segments instead of the whole utterance to reduce the computational complexity of the overall model and normalize the CNN features before their actual processing, so that it can easily recognize the Spatio temporal information. The proposed system is evaluated over different standard dataset including IEMOCAP, EMO DB, and RAVDESS to improve the recognition accuracy and reduce the processing time of the model, respectively. The robustness and effectiveness of the suggested SER model is proved from the experimentations when compared to state of the art SER methods with an achieve up to 72.25% 85.57% and 77.02% accuracy over IEMOCAP, EMO DB, and RAVDESS dataset, respectively.", "venue": "IEEE Access", "year": 2020.0, "author_names": ["Mustaqeem", "Muhammad Sajjad", "Soonil Kwon"], "n_citations": 38, "n_key_citations": 3, "score": 0}, {"corpus_id": 204801099, "title": "Speech Emotion Recognition with Dual Sequence LSTM Architecture", "abstract": "Speech Emotion Recognition (SER) has emerged as a critical component of the next generation of human machine interfacing technologies. In this work, we propose a new dual level model that predicts emotions based on both MFCC features and mel spectrograms produced from raw audio signals. Each utterance is preprocessed into MFCC features and two mel spectrograms at different time frequency resolutions. A standard LSTM processes the MFCC features, while a novel LSTM architecture, denoted as Dual Sequence LSTM (DS LSTM) processes the two mel spectrograms simultaneously. The outputs are later averaged to produce a final classification of the utterance. Our proposed model achieves, on average, a weighted accuracy of 72.7% and an unweighted accuracy of 73.3% a 6% improvement over current state of the art unimodal models and is comparable with multimodal models that leverage textual information as well as audio signals.", "venue": "ICASSP 2020 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)", "year": 2020.0, "author_names": ["Jianyou Wang", "Michael Xue", "Ryan Culhane", "Enmao Diao", "Jie Ding", "Vahid Tarokh"], "n_citations": 14, "n_key_citations": 3, "score": 0}, {"corpus_id": 221748924, "title": "Deep Net: A Lightweight CNN Based Speech Emotion Recognition System Using Deep Frequency Features", "abstract": "Artificial intelligence (AI) and machine learning (ML) are employed to make systems smarter. Today, the speech emotion recognition (SER) system evaluates the emotional state of the speaker by investigating his/her speech signal. Emotion recognition is a challenging task for a machine. In addition, making it smarter so that the emotions are efficiently recognized by AI is equally challenging. The speech signal is quite hard to examine using signal processing methods because it consists of different frequencies and features that vary according to emotions, such as anger, fear, sadness, happiness, boredom, disgust, and surprise. Even though different algorithms are being developed for the SER, the success rates are very low according to the languages, the emotions, and the databases. In this paper, we propose a new lightweight effective SER model that has a low computational complexity and a high recognition accuracy. The suggested method uses the convolutional neural network (CNN) approach to learn the deep frequency features by using a plain rectangular filter with a modified pooling strategy that have more discriminative power for the SER. 
The proposed CNN model was trained on the extracted frequency features from the speech data and was then tested to predict the emotions. The proposed SER model was evaluated over two benchmarks, which included the interactive emotional dyadic motion capture (IEMOCAP) and the berlin emotional speech database (EMO DB) speech datasets, and it obtained 77.01% and 92.02% recognition results. The experimental results demonstrated that the proposed CNN based SER system can achieve a better recognition performance than the state of the art SER systems.", "venue": "Sensors", "year": 2020.0, "author_names": ["Tursunov Anvarjon", "", "Soonil Kwon"], "n_citations": 19, "n_key_citations": 1, "score": 0}, {"corpus_id": 213636084, "title": "Feature extraction algorithms to improve the speech emotion recognition rate", "abstract": "In this digitally growing era speech emotion recognition plays significant role in several applications such as Human Computer Interface (HCI) lie detection, automotive system to assist steering, intelligent tutoring system, audio mining, security, Telecommunication, Interaction between a human and machine at home, hospitals, shops etc. Speech is a unique human characteristic used as a tool to communicate and express one's perspective to others. Speech emotion recognition is extracting the emotions of the speaker from his or her speech signal. Feature extraction, Feature selection and classifier are three main stages of the emotion recognition. The main aim of this work is to improve the speech emotion recognition rate of a system using the different feature extraction algorithms. The work emphasizes on the preprocessing of the received audio samples where the noise from speech samples is removed using filters. In next step, the Mel Frequency Cepstral Coefficients (MFCC) Discrete Wavelet Transform (DWT) pitch, energy and Zero crossing rate (ZCR) algorithms are used for extracting the features. In feature selection stage Global feature algorithm is used to remove redundant information from features and to identify the emotions from extracted features machine learning classification algorithms are used. These feature extraction algorithms are validated for universal emotions comprising Anger, Happiness, Sad and Neutral.", "venue": "Int. J. Speech Technol.", "year": 2020.0, "author_names": ["Anusha Koduru", "Hima Bindu Valiveti", "Budati Anil Kumar"], "n_citations": 22, "n_key_citations": 1, "score": 0}, {"corpus_id": 221139446, "title": "Jointly Fine Tuning \"BERT like\" Self Supervised Models to Improve Multimodal Speech Emotion Recognition", "abstract": "Multimodal emotion recognition from speech is an important area in affective computing. Fusing multiple data modalities and learning representations with limited amounts of labeled data is a challenging task. In this paper, we explore the use of modality specific \"BERT like\" pretrained Self Supervised Learning (SSL) architectures to represent both speech and text modalities for the task of multimodal speech emotion recognition. By conducting experiments on three publicly available datasets (IEMOCAP, CMU MOSEI, and CMU MOSI) we show that jointly fine tuning \"BERT like\" SSL architectures achieve state of the art (SOTA) results. 
We also evaluate two methods of fusing speech and text modalities and show that a simple fusion mechanism can outperform more complex ones when using SSL models that have similar architectural properties to BERT.", "venue": "INTERSPEECH", "year": 2020.0, "author_names": ["Shamane Siriwardhana", "Andrew Reis", "Rivindu Weerasekera", "Suranga Nanayakkara"], "n_citations": 20, "n_key_citations": 1, "score": 0}]} -{"query": "The Japanese chart of charts", "session_id": 183344504224826, "user_id": 5471915004616982, "candidates": [{"corpus_id": 166541431, "title": "The Japanese chart of charts", "abstract": "", "venue": "", "year": 1986.0, "author_names": ["Qing Shui Zheng Ji", "G Nicholson"], "n_citations": 3, "n_key_citations": 0, "score": 1}, {"corpus_id": 22837458, "title": "Clinical features of axillary osmidrosis: A retrospective chart review of 723 Japanese patients", "abstract": "Axillary osmidrosis often disturbs a person's social life, particularly in Asian countries. However, the clinical aspects of this condition have not been well documented in the English language published work. This study aimed to provide information on the features of axillary osmidrosis, with a particular focus on sex differences. A retrospective review was made of the charts for 723 Japanese patients (492 female, 231 male) The mean age at initial presentation (29.1 years) was nearly the same for males and females. Almost all patients (96.1% had wet earwax, which was extremely high compared to its frequency in the general Japanese population. An association with hyperhidrosis was seen in 61.8% of these patients. Subjective odor levels in female patients were significantly lower than those in males (P 0.001) A positive family history was more frequent for females than for males (P 0.001) and prior treatment history was also more frequent for females than for males (P 0.015) Most patients (86.6% had received some treatments in our clinic. There were significantly fewer females who underwent surgical treatments compared to males (P 0.026) as females preferred less invasive techniques (P 0.001) Several features, including male/female ratios, and associations of wet earwax and hyperhidrosis, corresponded to previously reported data on axillary osmidrosis. Female patients were more concerned with axillary odor than males, and females had a tendency for polysurgery.", "venue": "The Journal of dermatology", "year": 2013.0, "author_names": ["Daichi Morioka", "Fumio Ohkubo", "Yoshiyasu Amikura"], "n_citations": 11, "n_key_citations": 2, "score": 0}, {"corpus_id": 33784822, "title": "Risk assessment chart for predicting fatty liver in Japanese subjects.", "abstract": "OBJECTIVE The diagnosis of fatty liver is done mainly by ultrasonography, which it is not included in the usual health checkup examinations. The aim of this study was to develop an index to predict the existence of fatty liver using tests that are part of specific health examinations. METHODS A total of 7,305 Japanese (4,042 men; 3,263 women) who underwent annual health checks were enrolled. Body mass index (BMI) Waist circumference (WC) blood pressure, and levels of triglyceride (TG) high density lipoprotein cholesterol, fasting plasma glucose (FPG) alanine aminotransferase (ALT) and gamma glutamyl transpeptidase were used to predict fatty liver, and a stepwise procedure was used to select an optimal subset of dummy regressors. 
The probabilities for predicting fatty liver were calculated from the logistic regression equation using the constant and coefficients for each variable. RESULTS Risk assessment charts for predicting the probability of fatty liver were developed. These probabilities were displayed in a color coded manner by combining BMI, TG, FPG, ALT, and WC. CONCLUSION Our fatty liver predicting index consisted of the components of metabolic syndrome (MetS) and ALT, thus indicating a close relationship of fatty liver and MetS. The use of this index enables quantitative assessments of the severity of MetS.", "venue": "The Tokai journal of experimental and clinical medicine", "year": 2012.0, "author_names": ["Fumiyo Inabe", "Eiko Takahashi", "Kengo Moriyama", "Masako Negami", "Hiroki Otsuka"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 53219683, "title": "Establishment of a longitudinal growth chart corresponding to pubertal timing", "abstract": "Abstract. A standard growth chart is indispensable for evaluating an individual's growth. In Japan, the cross sectional growth chart from fiscal year 2000 is most commonly used in the clinical setting. However, when using the current standard growth chart to assess growth during puberty, two problems are encountered. First, the individual pubertal height trajectory does not fit the cross sectional growth chart because the pubertal height curve of individuals rises more sharply than that indicated by the cross sectional growth chart. Second, variations in the timing of an individuals' growth spurt render it difficult or impossible to assess individual growth patterns using a single chart. To address these two issues, new growth charts were established using height measurements of 6744 boys and 6929 girls born between April 1975 and March 1976 in the Akita Prefecture. Individuals whose age at peak height velocity (agePHV) was 2 standard deviation greater or lesser than the mean were excluded, and the remaining participants were divided into three groups according to the first and third quartiles of agePHV. Finally, we established three longitudinal growth charts each for boys and girls based on a healthy Japanese population.", "venue": "Clinical pediatric endocrinology case reports and clinical investigations official journal of the Japanese Society for Pediatric Endocrinology", "year": 2018.0, "author_names": ["Keisuke Yoshii", "Toshiaki Tanaka"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 43248491, "title": "Risk assessment chart for death from cardiovascular disease based on a 19 year follow up study of a Japanese representative population.", "abstract": "BACKGROUND Based on the NIPPON DATA80, risk charts for the probability of death from coronary heart disease (CHD) stroke, and all cardiovascular disease (CVD) were constructed by sex and 10 year age groups. METHODS AND RESULTS The 9,638 participants were followed up for 19 years from 1980, excluding 28 individuals without the necessary baseline data and 257 participants with past history of stroke or CHD. Final analysis was performed on 9,353 participants (4,098 men, mean age 50.3 years; 5,255 women, mean age 50.8) using a Cox proportional hazards model. Death probabilities over a 10 year period from CHD, stroke, and all CVD were calculated and displayed as color coding on each chart by combining 10 year age, systolic blood pressure, smoking, and serum total cholesterol and glucose levels. 
Six different colors corresponding to probabilities of death were displayed on each chart. CONCLUSIONS The original charts based on the findings from NIPPON DATA80 are suitable for assessing CHD, stroke, and all CVD death risk in the general Japanese population. These charts should be used as a health education tool for lifestyle modification targeting individuals with CVD risk factors.", "venue": "Circulation journal official journal of the Japanese Circulation Society", "year": 2006.0, "author_names": "", "n_citations": 126, "n_key_citations": 1, "score": 0}, {"corpus_id": 28311380, "title": "More fundamental and practical indices based on the data analysis of NIPPON DATA 80 might be needed for clinical settings.", "abstract": "To the Editor: Five years have passed since the risk assessment chart developed from NIPPON DATA801 was released. There is no doubt that it is an innovative tool, corresponding to the Framingham Score,2 that can be applied to Japanese people. However, we occasionally face some problems when we apply it in the clinical setting. The risk assessment chart is a model that assesses risk by estimating the probability of death from cardiovascular disease based on various risk factors. The chart does not involve complex calculations and includes 6 color coded ranks, thus providing a quick reference in the clinical setting. The range in which values can be extrapolated is indicated, thereby obviating the risk of extrapolating extreme values that is associated with formulas. At this point this chart can be an innovative tool. However, in clinical studies, model formulas are more useful than charts in some cases. For example, the various probabilities of death are conventionally calculated as continuous values, which in a 6 rank system would be converted to discrete classes and become less precise. When calculating the number needed to treat from the absolute mortality rate, the certainty of values would be reduced by the width of the ranks. In addition, in today's highly computerized environment, formulas may be easier to use than charts when analyzing many patients at once. Because the study did not specify the b value for each risk factor or the total mortality rate in each population based on the Cox proportional hazards model, the reader is unable to calculate and reproduce the probability of death in each population. If these values were shown, the degree of the effect of each risk factor could be assessed. Moreover, as the goodness of fit of this model was not described, its level of explanation can not be inferred. Based on this, we believe it would be useful to release more fundamental and practical indices about this chart, as with the Framingham Score.2", "venue": "Circulation journal official journal of the Japanese Circulation Society", "year": 2011.0, "author_names": ["Yuichiro Yamada", "Shoji Haruta"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 25176449, "title": "[Drug use evaluation of antidyslipidemic agents at a community hospital in Japan]", "abstract": "OBJECTIVES In recent years, the therapeutic implications of dyslipidemias have been clarified in large scale epidemiologic surveys, and the validity of pharmacotherapy has been established. We investigated the practical realities of pharmacotherapy for dyslipidemias at a community hospital in Japan. METHODS Medical chart surveys were performed retrospectively on 451 dyslipidemic outpatients who visited a community hospital in Japan in July 1997. 
We collected clinical data from medical charts regarding selected drugs for dyslipidemias, serum lipid levels before drug treatment and after one year of treatment, and risk factors for coronary heart disease (CHD) RESULTS Regardless of dyslipidemia phenotype, approximately 80% of patients were administered statins. The possibility was raised that physicians recorded risk factors in medical charts incompletely, particularly with regard to family CHD history, smoking, and obesity. Based on Japanese and us guidelines for dyslipidemias, low density lipoprotein (LDL) cholesterol levels fully satisfied the requirements for initiating pharmacotherapy in the present study. However, the higher the risk of CHD, the lower the percentage of subjects who met the treatment goals defined by both guidelines. Only 23% of patients at high risk for CHD controlled LDL cholesterol sufficiently based on Japanese guidelines. CONCLUSION To optimize pharmacotherapy for dyslipidemias, medical staff should assess risk factors for CHD more completely and attempt to achieve full control of serum lipids, particularly in patients at high risk for CHD.", "venue": "Yakugaku zasshi Journal of the Pharmaceutical Society of Japan", "year": 2002.0, "author_names": ["Kazuko Ujita", "Keiko Ohno", "Masayuki Hashiguchi", "Hirotoshi Echizen", "Tadaaki Rikihisa", "Hiroyasu Ogata"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 14033345, "title": "Cochineal dye induced immediate allergy: Review of Japanese cases and proposed new diagnostic chart.", "abstract": "BACKGROUND Cochineal dye is used worldwide as a red coloring in foods, drinks, cosmetics, quasi drugs, and drugs. The main component of the red color is carminic acid (CA) Carmine is an aluminum or calcium chelated product of CA. CA and carmine usually contain contaminating proteins, including a 38 kDa protein thought to be the primary allergen. Severe allergic reactions manifest as anaphylaxis. The aim of this study was to review all Japanese reported cases and propose useful diagnostic chart. METHODS All reported Japanese cases of cochineal dye induced immediate allergy were reviewed, and newly registered cases were examined by skin prick test (SPT) with cochineal extract (CE) and measurement of CE and carmine specific serum IgE test. Two dimensional (2D) western blotting using patient serum was conducted to identify the antigen. RESULTS Twenty two Japanese cases have been reported. SPT and the level of specific IgE test indicated that six cases should be newly registered as cochineal dye allergy. All cases were adult females, and all cases except three involved anaphylaxis; 13 cases involved past history of local symptoms associated with cosmetics use. Japanese strawberry juice and fish meat sausage, and European processed foods (especially macarons made in France) and drinks were recent major sources of allergen. 2D western blotting showed that patient IgE reacted to the 38 kDa protein and other proteins. Serum from healthy controls also weakly reacted with these proteins. 
CONCLUSIONS SPT with CE and determination of the level of CE and carmine specific IgE test are useful methods for the diagnosis of cochineal dye allergy.", "venue": "Allergology international official journal of the Japanese Society of Allergology", "year": 2018.0, "author_names": ["Naoko Takeo", "Masashi Nakamura", "Satoshi Nakayama", "Osamu Okamoto", "Naoki Sugimoto", "Shinichi Sugiura", "Nayu Sato", "Susumu Harada", "Masao Yamaguchi", "Naoya Mitsui", "Yumiko Kubota", "Kayoko Suzuki", "Makoto Terada", "Akiyo Nagai", "Junko Sowa-Osako", "Yutaka Hatano", "Hiroshi Akiyama", "Akiko Yagami", "Sakuhei Fujiwara", "Kayoko Matsunaga"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 17099856, "title": "Growth standard charts for Japanese children with mean and standard deviation (SD) values based on the year 2000 national survey", "abstract": "Growth charts are essential and universally used for evaluating growth and development of children in both clinical settings and in public health examinations (1, 2) We previously reported the growth standards for Japanese children with percentile values based on the year 2000 national survey data (3) which were established by the lambda mu sigma (LMS) method (4) These standards have been widely used mainly in public health examinations. In clinical practices, Japanese physicians preferably assess growth with standard deviation (SD) scores, because many physicians feel that percentiles are not suitable for monitoring children with extreme growth retardation. Considering this, we created practical growth charts with mean and SD values, based on the criteria of the national medical aid program for specific pediatric chronic diseases by using the eye fitting method (5) Although these charts have been widely used in clinical settings, they do not reflect the correct distributions of height and weight for Japanese children, especially the weight chart. Weight is not usually distributed normatively, but the practical weight chart was made with the assumption of a normal distribution. To this end, we saw the need for growth standards that can be used appropriately both for clinical and public health purposes. Therefore, we reanalyzed the previously reported growth standard charts with percentile values (3) and constructed the growth standards with mean and SD values for Japanese children, which would be applicable not only for clinical practices but also for public health examinations.", "venue": "Clinical pediatric endocrinology case reports and clinical investigations official journal of the Japanese Society for Pediatric Endocrinology", "year": 2016.0, "author_names": ["Tsuyoshi Isojima", "Noriko Kato", "Yoshiya Ito", "Susumu Kanzaki", "Mitsunori Murata"], "n_citations": 51, "n_key_citations": 1, "score": 0}, {"corpus_id": 195844631, "title": "Clinical Characteristics of Pars Tensa Cholesteatoma: A Comparative Study of Area Based Classification Systems Proposed by the Japanese Otological Society and the European Academy of Otology Neuro Otology.", "abstract": "OBJECTIVES To assess the clinical characteristics of extent patterns in pars tensa cholesteatoma. MATERIALS AND METHODS This was a retrospective chart review. Forty four patients with pars tensa cholesteatoma who underwent primary surgery at a tertiary academic medical center were included. 
The main outcomes measured were sex, age, clinical background, and stage classification of pars tensa cholesteatoma (including the extent of cholesteatoma and involvement of the sinus tympani) according to two staging classifications: criteria advocated by the Japanese Otological Society (JOS) and those advocated by the European Academy of Otology and Neuro Otology (EAONO)/JOS joint consensus statements. RESULTS The mean patient age ± standard deviation was 38.4 ± 19.6 years. The patients comprised 19 men and 25 women. According to the JOS classification, 18 ears (40.9% were classified as stage I, 22 (50.0% as stage II, and 4 (9.1% as stage III. According to the EAONO/JOS joint consensus statements, 14 ears (31.8% were classified as stage I, 26 (59.1% as stage II, and 4 (9.1% as stage III. Fourteen ears (31.8% demonstrated involvement of the sinus tympani. Four ears (9.1% that were originally categorized as stage I cholesteatoma by the JOS criteria showed sinus tympani invasion and were subsequently categorized as stage II according to the EAONO/JOS criteria. CONCLUSION We determined the clinical characteristics of pars tensa cholesteatoma based on the novel and well defined classification criteria. Further studies including long term outcomes are necessary to demonstrate the clinical relevance of the discrepancy between the two criteria with respect to involvement of the sinus tympani.", "venue": "The journal of international advanced otology", "year": 2019.0, "author_names": ["Masaomi Motegi", "Yutaka Yamamoto", "Takeshi Tada", "Masahiro Takahashi", "Sayaka Sampei", "Hiromi Sano", "Tsunetaro Morino", "Manabu Komori", "Masahiro Miura", "Kazuhisa Yamamoto", "Yuichiro Yaguchi", "Yuika Sakurai", "Hiromi Kojima"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "Stress level OR Depression AND diabetes quality of life", "session_id": 1232544594590449, "user_id": 2773079202734895, "candidates": [{"corpus_id": 219071979, "title": "Analysis of Factors Affecting the Depression and Quality of Life in the Family Taking Care of Dementia Patients", "abstract": "Background/Objectives: This study was intended to identify the factor having influence on the stress, depression and the quality of life in the family who care the patient with dementia.Method/Statistical Analysis: For this study, raw data were requested to Korea Centers for Disease Control and Prevention and among them, the data for 150,802 citizens of 45 years or older in D Metropolitan City were analyzed. 
For the analysis, the statistical software R program was used and the significance level was 0.05.Findings: Out of the factors having influence on the family who are cohabitating with the patient with dementia, the factors having influence on the quality of life were age, benefit of basic living security, lifelong drinking, days of medium level of physical activity, diabetes, arthritis, subjective oral health level, subjective health level, experience of depression, etc..Improvements/Applications: This study was intended to provide the basic data to develop the programs for the health improvement and the health education of the family who care the patient with dementia", "venue": "", "year": 2020.0, "author_names": ["Kyung-hee Kang", "Kwon-Seob So", "Hye-jeong Hwang"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 11886390, "title": "[The quality of life, symptoms of depression and coping with stress among individuals with type 2 diabetes preliminary study]", "abstract": "AIM To compare quality of life, symptoms of depression and strategies of coping with stress among individuals with type 2 diabetes and healthy individuals in their middle age, and also to verify the correlation of the aforementioned variables. METHODS 87 persons took part in the study: 42 persons with type 2 diabetes and 45 healthy persons. There were used 4 questionnaires with recognized psychometric properties. RESULTS The results showed significant differences in the level of global quality of life, satisfaction with health and physical domain, symptoms of depression, and also in terms of reactive coping with stress, which focuses on emotions and avoidance. CONCLUSIONS Individuals with diabetes have lower global perceived quality of life and satisfaction with health and physical domain. In this group, the intensity of depressive symptoms is higher. Both groups use a task oriented style with the same frequency in times of stress. Persons with diabetes use an emotion oriented style more often than healthy persons, whereas the latter use an avoidance oriented style. Both groups use various proactive coping strategies with the same frequency.", "venue": "Psychiatria polska", "year": 2014.0, "author_names": ["Dorota Kalka"], "n_citations": 11, "n_key_citations": 1, "score": 1}, {"corpus_id": 54068640, "title": "Effect of relaxation therapy on depression, anxiety, stress and quality of life among diabetic patients", "abstract": "Objective: The goal of this study was to evaluate the effect of relaxation therapy on depression, anxiety, stress, quality of life, and blood glucose levels among patients diagnosed with type II diabetes mellitus (T2DM) Methods: A quasi experimental research design was used. Sample: Convenience sample of 70 patients was recruited and assigned to one of two groups, an intervention group (Group A) and a control group (Group B) A table of random numbers was generated and used to make group assignments. Setting: The study was conducted at Medical Outpatient Clinics in Menoufia University Hospital, Menoufia governorate, Egypt. Instruments: Data collection included a structured interview questionnaire that included socio demographic characteristics and clinical data, the Depression, Anxiety and Stress Scale (DASS) and the World Health Organization Quality of Life (WHOQOL BRIEF) Results: The findings indicate that anxiety level, stress, depression, and quality of life were improved in the intervention group with a statistically significant degree compared to the control group. 
Conclusions: Relaxation therapy improved depression, anxiety, stress, quality of life, and blood glucose levels among patients diagnosed with T2DM. Recommendation: Relaxation therapy, patient education programs and treatment protocols should be integrated into the medical outpatient clinic to assist patients diagnosed with T2DM to cope with their stress, anxiety, depression, and enhance blood glucose control.", "venue": "", "year": 2017.0, "author_names": ["Sabah M Ebrahem", "Samah Elgarhy Masry"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 152069292, "title": "Depression as Related to Quality of Life Andsocial Support among Patients with Diabetes", "abstract": "The aim of the present study was to explore the relationship among depression, social support, and quality of life in 59 diabetic patients (Males= 25, Females= 34) visiting different hospitals in Sialkot and Gujranwala. The sample was selected by purposive sampling technique. Three standardized tools; Multidimensional Scale of Perceived Social Support (Zimet, Dahlem, Zimet& Farley, 1988) Quality of Life Scale (WHO, 1991) and depression items from Depression, Anxiety and Stress Scale (Lovibond Lovibond, 1995) were used in the present study to collect the data. The results indicated that there is a significant negative relationship among depression, social support, and quality of life in diabetic patients. Moreover, the relationship of sociodemographic variables of diabetic patients was also explored with the level of depression, quality of life and social support. Implications of the findings were discussed.", "venue": "", "year": 2015.0, "author_names": ["Sameera Shafiq", "Amira Iftekhar"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 56224941, "title": "EXAMINING THE EFFECTIVENESS OF MINDFULNESS BASED STRESS REDUCTION PROGRAM AND CONSCIOUS YOGA ON QUALITY OF LIFE IN PATIENTS WITH DIABETES TYPE 2", "abstract": "Objective: Diabetes is a chronic disease that causes severe side effects in patients. According to the previous studies, the incidence of depression and anxiety is higher among patients with diabetes type 2. The present study was conducted with the aim of examining the effectiveness of mindfulness based stress reduction program and conscious yoga on depression, anxiety and stress in patients with diabetes type 2. Materials and Methods: The study was quasi experimental with pre test, post test, control group and a 2 month follow up. 24 patients among patients with diabetes who referred to Imam Hossein hospital were selected in an available way and were randomly assigned into experimental (n1=12) and control groups (n2=12) The level of quality of life was measured using Quality of Life Questionnaire (SF 36) in pre test. Then, participants of the experimental group received group mindfulness based stress reduction program and conscious yoga for 8 sessions. After completing the interventions, patients' quality of life level was measured again and data were analyzed using multivariate repeated measurement model. Results Findings showed there is a significant difference between experimental and control groups in terms of the quality of life level and mindfulness based stress reduction program significantly increases the quality of life in the participants of the experimental group. 
Conclusion: The result of this study suggests that mindfulnessbased stress reduction program can be an appropriate therapeutic method for improving quality of life in patients with diabetes type 2.", "venue": "", "year": 2014.0, "author_names": ["Soheila Rahmani", "Alireza Zahirrodin", "Mahshid Moradi", "Shahrzad Hoveida", "Somayeh Nejati"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 209313133, "title": "Health related Quality of Life and Its Predictors in Korean Patients with Myocardial Infarction in the Acute Phase", "abstract": "This study aims to investigate health related quality of life (HRQoL) of Korean patients in the acute phase of myocardial infarction (MI) and correlates of this important patient outcome. A total of 150 patients with recent MI were recruited. The Korean version of the MacNew Quality of Life after Myocardial Infarction Questionnaire was used to assess their HRQoL. Demographic, behavioural and disease related factors were also assessed and the Depression, Anxiety and Stress Scale (DASS 21) was used for psychological well being. Participants who had a higher education level and better financial status had better HRQoL. Diabetes, history of stroke, other heart disease and a higher score of the DASS 21 were adversely associated with HRQoL. The findings of this study help identify risk factors that are related to lower HRQoL after MI. Early psychological and financial support may help reduce the impact of MI on patients' overall health and quality of life.", "venue": "Clinical nursing research", "year": 2019.0, "author_names": ["Kyoungrim Kang", "Leila Gholizadeh", "Hae-Ra Han"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 198276241, "title": "Lower urinary tract symptoms and quality of life in community dwelling individuals aged 45 years and over. A population based study", "abstract": "The objective of this study was to identify the factors associated with the impairment of quality of life (QoL) in community dwelling individuals with LUTS. A randomized sample of the population registered in the Family Health Program Niteroi aged 45 years or over was selected. Information about demographic, socioeconomic and lifestyle factors, co morbidities and nocturia was collected. The NANDA I taxonomy was used to identify the other LUTS, and QoL evaluation was performed in accordance with the SF 36 Short Form questionnaire (SF36 SF) For the SF36 SF domains (outcome) associated with LUTS, multiple logistic models were tested including the urinary symptoms and the sociodemographic and associated clinical variables. Stress urinary incontinence was associated with white skin, female gender, obesity, smoking, alcohol intake, depression and low scores in all evaluated domains of QoL. Nocturia was associated with advanced age, low schooling level, higher BMI, hypertension, diabetes, health insurance and the lowest scores in all evaluated domains of Qol, except for the Role Emotional. According to multivariate analysis, stress incontinence and depression are associated with the highest risks of low scores in General Health, Physical Functioning and Vitality domains, while nocturia and obesity showed association with the highest risks of low scores in Physical Functioning, Bodily Pain and Vitality domains.", "venue": "Acta Scientiarum. 
Health Sciences", "year": 2019.0, "author_names": ["Carlos Augusto Faria", "Dayse Mary S Correia", "K S Panisset", "Maria Luiza Garcia Rosa"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 209407901, "title": "Health related quality of life and readmission of patients with cardiovascular disease in South Korea", "abstract": "Aims: The purpose of this study was to investigate the health related quality of life (HRQOL) of patients with cardiovascular disease and its relationship to hospital readmission. Methods: The cross sectional study used data from 1037 adults aged 19 years diagnosed with myocardial infarction or angina pectoris. Raw data were obtained from the fourth to sixth Korea National Health and Nutrition Examination Survey (2007 2014) Results: Readmission was found to be associated with age, living status, education level, unemployment, individual income level, stroke, osteoarthritis, diabetes, depression, low stress level, walking days per week, and activity limitations due to cardiovascular disease. Conclusion: In summary, readmission was related to HRQOL among patients with myocardial infarction. Interventions that consider efforts to reduce readmission through improved diagnosis and development of systematic management of cardiovascular disease symptoms are required.", "venue": "Perspectives in public health", "year": 2019.0, "author_names": ["Hyun Su Kim", "Yoonjung Kim", "Haejin Kwon"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 22113363, "title": "[Emotional distress and quality of life in people with diabetes and their families]", "abstract": "OBJECTIVE The daily experience of living with diabetes can adversely affect the quality of life of people with diabetes and their families. We present the results for Spain of the DAWN2 study related to quality of life and wellbeing of patients and their families. METHODS The DAWN2 study is an observational, cross sectional study. In the present study, we used the Spanish sample of patients (N=502) and their relatives (N=123) RESULTS A total of 13.9% of patients were at risk of possible depression while 50.0% of people with diabetes and 45.5% of family members reported a high level of diabetes related emotional stress. CONCLUSIONS People with diabetes experience high levels of stress and the psychosocial impact of diabetes also affects family members.", "venue": "Gaceta sanitaria", "year": 2015.0, "author_names": ["Marina Belendez Vazquez", "Inaki Lorente Armendariz", "Mercedes Maderuelo Labrador"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 116090301, "title": "Emotional distress and quality of life in people with diabetes and their families", "abstract": "Objective The daily experience of living with diabetes can adversely affect the quality of life of people with diabetes and their families. We present the results for Spain of the DAWN2 study related to quality of life and wellbeing of patients and their families. Methods The DAWN2 study is an observational, cross sectional study. In the present study, we used the Spanish sample of patients (N=502) and their relatives (N=123) Results A total of 13.9% of patients were at risk of possible depression while 50.0% of people with diabetes and 45.5% of family members reported a high level of diabetes related emotional stress. 
Conclusions People with diabetes experience high levels of stress and the psychosocial impact of diabetes also affects family members.", "venue": "", "year": 2015.0, "author_names": ["Marina Belendez Vazquez", "Inaki Armendariz", "Mercedes Maderuelo Labrador"], "n_citations": 3, "n_key_citations": 0, "score": 0}]} -{"query": "sanity check for saliency maps", "session_id": 4579032668979261, "user_id": 1315603484971541, "candidates": [{"corpus_id": 208121311, "title": "Learning Reliable Visual Saliency For Model Explanations", "abstract": "By highlighting important features that contribute to model prediction, visual saliency is used as a natural form to interpret the working mechanism of deep neural networks. Numerous methods have been proposed to achieve better saliency results. However, we find that previous visual saliency methods are not reliable enough to provide meaningful interpretation through a simple sanity check: saliency methods are required to explain the output of non maximum prediction classes, which are usually not ground truth classes. For example, let the methods interpret an image of \"dog\" given a wrong class label \"fish\" as the query. This procedure can test whether these methods reliably interpret model's predictions based on existing features that appear in the data. Our experiments show that previous methods failed to pass the test by generating similar saliency maps or scattered patterns. This false saliency response can be dangerous in certain scenarios, such as medical diagnosis. We find that these failure cases are mainly due to the attribution vanishing and adversarial noise within these methods. In order to learn reliable visual saliency, we propose a simple method that requires the output of the model to be close to the original output while learning an explanatory saliency mask. To enhance the smoothness of the optimized saliency masks, we then propose a simple Hierarchical Attribution Fusion (HAF) technique. In order to fully evaluate the reliability of visual saliency methods, we propose a new task Disturbed Weakly Supervised Object Localization (D WSOL) to measure whether these methods can correctly attribute the model's output to existing features. Experiments show that previous methods fail to meet this standard, and our approach helps to improve the reliability by suppressing false saliency responses. After observing a significant layout difference in saliency masks between real and adversarial samples. we propose to train a simple CNN on these learned hierarchical attribution masks to distinguish adversarial samples. Experiments show that our method can improve detection performance over other approaches significantly.", "venue": "IEEE Transactions on Multimedia", "year": 2020.0, "author_names": ["Yulong Wang", "Hang Su", "Bo Zhang", "Xiaolin Hu"], "n_citations": 11, "n_key_citations": 0, "score": 0}, {"corpus_id": 201125339, "title": "XRAI: Better Attributions Through Regions", "abstract": "Saliency methods can aid understanding of deep neural networks. Recent years have witnessed many improvements to saliency methods, as well as new ways for evaluating them. In this paper, we 1) present a novel region based attribution method, XRAI, that builds upon integrated gradients (Sundararajan et al. 2017) 2) introduce evaluation methods for empirically assessing the quality of image based saliency maps (Performance Information Curves (PICs) and 3) contribute an axiom based sanity check for attribution methods. 
Through empirical experiments and example results, we show that XRAI produces better results than other saliency methods for common models and the ImageNet dataset.", "venue": "2019 IEEE/CVF International Conference on Computer Vision (ICCV)", "year": 2019.0, "author_names": ["Andrei Kapishnikov", "Tolga Bolukbasi", "Fernanda Vi'egas", "Michael Terry"], "n_citations": 42, "n_key_citations": 9, "score": 0}, {"corpus_id": 174801164, "title": "Segment Integrated Gradients: Better attributions through regions", "abstract": "Saliency methods can aid understanding of deep neural networks. Recent years have witnessed many improvements to saliency methods, as well as new ways for evaluating them. In this paper, we 1) present a novel region based attribution method, Segment Integrated Gradients (SIG) that builds upon integrated gradients (Sundararajan et al. 2017) 2) introduce evaluation methods for empirically assessing the quality of image based saliency maps (Performance Information Curves (PICs) and 3) contribute an axiom based sanity check for attribution methods. Through empirical experiments and example results, we show that SIG produces better results than other saliency methods for common models and the ImageNet dataset.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Andrei Kapishnikov", "Tolga Bolukbasi", "Fernanda B Viegas", "Michael Terry"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 52938797, "title": "Sanity Checks for Saliency Maps", "abstract": "Saliency methods have emerged as a popular tool to highlight features in an input deemed relevant for the prediction of a learned model. Several saliency methods have been proposed, often guided by visual appeal on image data. In this work, we propose an actionable methodology to evaluate what kinds of explanations a given method can and cannot provide. We find that reliance, solely, on visual assessment can be misleading. Through extensive experiments we show that some existing saliency methods are independent both of the model and of the data generating process. Consequently, methods that fail the proposed tests are inadequate for tasks that are sensitive to either data or model, such as, finding outliers in the data, explaining the relationship between inputs and outputs that the model learned, and debugging the model. We interpret our findings through an analogy with edge detection in images, a technique that requires neither training data nor model. Theory in the case of a linear model and a single layer convolutional neural network supports our experimental findings.", "venue": "NeurIPS", "year": 2018.0, "author_names": ["Julius Adebayo", "Justin Gilmer", "Michael Muelly", "Ian J Goodfellow", "Moritz Hardt", "Been Kim"], "n_citations": 631, "n_key_citations": 87, "score": 1}, {"corpus_id": 235422412, "title": "Investigating sanity checks for saliency maps with image and text classification", "abstract": "Saliency maps have shown to be both useful and misleading for explaining model predictions especially in the context of images. In this paper, we perform sanity checks for text modality and show that the conclusions made for image do not directly transfer to text. We also analyze the effects of the input multiplier in certain saliency maps using similarity scores, max sensitivity and infidelity evaluation metrics. Our observations reveal that the input multiplier carries input's structural patterns in explanation maps, thus leading to similar results regardless of the choice of model parameters. 
We also show that the smoothness of a Neural Network (NN) function can affect the quality of saliency based explanations. Our investigations reveal that replacing ReLUs with Softplus and MaxPool with smoother variants such as LogSumExp (LSE) can lead to explanations that are more reliable based on the infidelity evaluation metric.", "venue": "ArXiv", "year": 2021.0, "author_names": ["Narine Kokhlikyan", "Vivek Miglani", "Bilal Alsallakh", "Miguel Martin", "Orion Reblitz-Richardson"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 208548470, "title": "Sanity Checks for Saliency Metrics", "abstract": "Saliency maps are a popular approach to creating post hoc explanations of image classifier outputs. These methods produce estimates of the relevance of each pixel to the classification output score, which can be displayed as a saliency map that highlights important pixels. Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their \"fidelity\" We therefore investigate existing metrics for evaluating the fidelity of saliency methods (i.e. saliency metrics) We find that there is little consistency in the literature in how such metrics are calculated, and show that such inconsistencies can have a significant effect on the measured fidelity. Further, we apply measures of reliability developed in the psychometric testing literature to assess the consistency of saliency metrics when applied to individual saliency maps. Our results show that saliency metrics can be statistically unreliable and inconsistent, indicating that comparative rankings between saliency methods generated using such metrics can be untrustworthy.", "venue": "AAAI", "year": 2020.0, "author_names": ["Richard J Tomsett", "Daniel Harborne", "Supriyo Chakraborty", "Prudhvi K Gurram", "Alun David Preece"], "n_citations": 19, "n_key_citations": 0, "score": 0}, {"corpus_id": 230688621, "title": "Faithful Saliency Maps: Explaining Neural Networks by Augmenting \"Competition for Pixels\"", "abstract": "For certain machine learning models such as image classifiers, saliency methods promise to answer a crucial question: At the pixel level, where does the model look to classify a given image? If existing methods truthfully answer this question, they can bring some level of interpretability to an area of machine learning where it has been inexcusably absent: namely, to image classifying neural networks, usually considered some of the most \"blackbox\" classifiers. A multitude of different saliency methods has been developed over the last few years recently, however, Adebayo et al. [1] revealed that many of them fail socalled \"sanity checks\" That is, these methods act as mere edge detectors of the input image, outputting the same convincing looking saliency map completely independently of the model under investigation! Not only do they not illuminate the inner workings of the model at hand, but they may actually deceive the model investigator into believing that the model is working as it should. To fix these deceptive methods and save them from the trash pile of discarded research, Gupta and Arora [11] proposed an algorithm called competition for pixels. Yet as we uncovered, competition can be deceiving itself! This thesis makes three main contributions: (1) It examines competition for pixels, showing that the algorithm has serious issues in the few class setting. 
(2) It proposes an augmentation of the competition algorithm designed to address these issues. (3) It experimentally verifies the effectiveness of said augmentation.", "venue": "", "year": 2020.0, "author_names": ["Jorma Peer Gorns"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 168169969, "title": "A Simple Saliency Method That Passes the Sanity Checks", "abstract": "There is great interest in \"saliency methods\" (also called \"attribution methods\" which give \"explanations\" for a deep net's decision, by assigning a \"score\" to each feature/pixel in the input. Their design usually involves credit assignment via the gradient of the output with respect to input. Recently Adebayo et al. [arXiv:1810.03292] questioned the validity of many of these methods since they do not pass simple *sanity checks* which test whether the scores shift/vanish when layers of the trained net are randomized, or when the net is retrained using random labels for inputs. We propose a simple fix to existing saliency methods that helps them pass sanity checks, which we call \"competition for pixels\" This involves computing saliency maps for all possible labels in the classification task, and using a simple competition among them to identify and remove less relevant pixels from the map. The simplest variant of this is \"Competitive Gradient \\odot$ Input (CGI)\" it is efficient, requires no additional training, and uses only the input and gradient. Some theoretical justification is provided for it (especially for ReLU networks) and its performance is empirically demonstrated.", "venue": "ArXiv", "year": 2019.0, "author_names": ["Arushi Gupta", "Sanjeev Arora"], "n_citations": 6, "n_key_citations": 1, "score": 0}, {"corpus_id": 231639321, "title": "Benchmarking Perturbation based Saliency Maps for Explaining Deep Reinforcement Learning Agents", "abstract": "Recent years saw a plethora of work on explaining complex intelligent agents. One example is the development of several algorithms that generate saliency maps which show how much each pixel attributed to the agents' decision. However, most evaluations of such saliency maps focus on image classification tasks. As far as we know, there is no work which thoroughly compares different saliency maps for Deep Reinforcement Learning agents. This paper compares four perturbation based approaches to create saliency maps for Deep Reinforcement Learning agents trained on four different Atari 2600 games. All four approaches work by perturbing parts of the input and measuring how much this affects the agent's output. The approaches are compared using three computational metrics: dependence on the learned parameters of the agent (sanity checks) faithfulness to the agent's reasoning (input degradation) and run time.", "venue": "ArXiv", "year": 2021.0, "author_names": ["Tobias Huber", "Benedikt Limmer", "Elisabeth Andr'e"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 235490210, "title": "Benchmarking Perturbation based Saliency Maps for Explaining Atari Agents", "abstract": "Recent years saw a plethora of work on explaining complex intelligent agents. One example is the development of several algorithms that generate saliency maps which show how much each pixel attributed to the agents' decision. However, most evaluations of such saliency maps focus on image classification tasks. As far as we know, there is no work that thoroughly compares different saliency maps for Deep Reinforcement Learning agents. 
This paper compares four perturbationbased approaches to create saliency maps for Deep Reinforcement Learning agents trained on four different Atari 2600 games. All four approaches work by perturbing parts of the input and measuring how much this affects the agent's output. The approaches are compared using three computational metrics: dependence on the learned parameters of the agent (sanity checks) faithfulness to the agent's reasoning (input degradation) and run time. In particular, during the sanity checks we find issues with two approaches and propose a solution to fix one of those issues.", "venue": "", "year": 2021.0, "author_names": ["Tobias Huber", "Benedikt Limmer", "Elisabeth Andr'e"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Preclinical validation of anti-nuclear factor-kappa B therapy to inhibit human vestibular schwannoma growth", "session_id": 7644768236107686, "user_id": 5463562898817666, "candidates": [{"corpus_id": 21328469, "title": "Preclinical validation of anti nuclear factor kappa B therapy to inhibit human vestibular schwannoma growth", "abstract": "Vestibular schwannomas (VSs) the most common tumors of the cerebellopontine angle, arise from Schwann cells lining the vestibular nerve. Pharmacotherapies against VS are almost non existent. Although the therapeutic inhibition of inflammatory modulators has been established for other neoplasms, it has not been explored in VS. A bioinformatic network analysis of all genes reported to be differentially expressed in human VS revealed a pro inflammatory transcription factor nuclear factor kappa B (NF kB) as a central molecule in VS pathobiology. Assessed at the transcriptional and translational level, canonical NF kB complex was aberrantly activated in human VS and derived VS cultures in comparison to control nerves and Schwann cells, respectively. Cultured primary VS cells and VS derived human cell line HEI 193 were treated with specific NF kB siRNAs, experimental NF kB inhibitor BAY11 7082 (BAY11) and clinically relevant NF kB inhibitor curcumin. Healthy human control Schwann cells from the great auricular nerve were also treated with BAY11 and curcumin to assess toxicity. All three treatments significantly reduced proliferation in primary VS cultures and HEI 193 cells, with siRNA, 5 mM BAY11 and 50 mM curcumin reducing average proliferation standard error of mean) to 62.33% 10.59% 14.3 9.7% and 23.0 20.9% of control primary VS cells, respectively. These treatments also induced substantial cell death. Curcumin, unlike BAY11, also affected primary Schwann cells. This work highlights NF kB as a key modulator in VS cell proliferation and survival and demonstrates therapeutic efficacy of directly targeting NF kB in VS.", "venue": "Molecular oncology", "year": 2015.0, "author_names": ["Sonam Dilwali", "Martijn C Briet", "S Y Kao", "Takeshi Fujita", "Lukas D Landegger", "Michael P Platt", "Konstantina M Stankovic"], "n_citations": 23, "n_key_citations": 1, "score": 1}, {"corpus_id": 51801425, "title": "Preclinical Validation of Anti Nuclear Factor Kappa B Therapy against Vestibular Schwannoma and Neurofibromatosis Type II", "abstract": "Abstract Neurofibromatosis type 2 (NF2) is a genetic disorder that causes substantial suffering and debility due to many tumors that occur on the nerves within the skull and spine throughout a person's life. The hallmark of NF2 is vestibular schwannomas (VSs) also known as acoustic neuromas, which occur on the vestibular nerves that connect the inner ear with the brain. 
Initially, VSs cause hearing loss. However, as they grow, they can compress the brainstem and cause death. Current treatment options are limited to surgical removal and radiation therapy, both of which carry substantial risks, including deafness and facial paralysis. Although drug therapies against NF2 are gaining momentum, more effective and better tolerated drugs are sorely needed. Because NF2 tumors are typically slowly growing and non malignant, even therapies that simply reduce tumor volume and retard growth can be lifesaving. The most successful drug used today to treat NF2, bevacizumab, works in only about 50% of patients in halting tumor growth or causing tumor shrinkage. Bevacizumab is known to inhibit vascular endothelial growth factor (VEGF) but its precise mechanism of action in VSs is unknown. Our overriding objective it to develop new and better drug therapies to help people with NF2. Using an unbiased bioinformatic approach that synthesizes published knowledge on the genes that are known to be aberrantly expressed in NF2, we have identified a key role for nuclear factor kappa B (NF Kappa B We hypothesize that increased NF B signaling in VS contributes to abnormal growth, and that inhibition of the NF Kappa B pathway can prevent growth and promote death of VSs. We have proven this hypothesis in vitro, using primary human VS cells treated with 3 different NF kappa B inhibitors: (1) shRNA, (2) an experimental drug, BAY11, and (3) a dietary supplement, curcumin.", "venue": "", "year": 2015.0, "author_names": ["Konstantina M Stankovic"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 1050139, "title": "Nonsteroidal anti inflammatory medications are cytostatic against human vestibular schwannomas.", "abstract": "Vestibular schwannomas (VSs) are the most common tumors of the cerebellopontine angle. Significant clinical need exists for pharmacotherapies against VSs. Motivated by previous findings that immunohistochemical expression of cyclooxygenase 2 (COX 2) correlates with VS growth rate, we investigated the role of COX 2 in VSs and tested COX 2 inhibiting salicylates against VSs. COX 2 was found to be aberrantly expressed in human VS and primary human VS cells in comparison with control human nerve specimens and primary Schwann cells (SCs) respectively. Furthermore, levels of prostaglandin E2, the downstream enzymatic product of COX 2, were correlated with primary VS culture proliferation rate. Because COX 2 inhibiting salicylates such as aspirin are well tolerated and frequently clinically used, we assessed their repurposing for VS. Changes in proliferation, cell death, and cell viability were analyzed in primary VS cultures treated with aspirin, sodium salicylate, or 5 aminosalicylic acid. These drugs neither increased VS cell death nor affected healthy SCs. The cytostatic effect of aspirin in vitro was in concurrence with our previous clinical finding that patients with VS taking aspirin demonstrate reduced tumor growth. 
Overall, this work suggests that COX 2 is a key modulator in VS cell proliferation and survival and highlights salicylates as promising pharmacotherapies against VS.", "venue": "Translational research the journal of laboratory and clinical medicine", "year": 2015.0, "author_names": ["Sonam Dilwali", "S Y Kao", "Takeshi Fujita", "Lukas D Landegger", "Konstantina M Stankovic"], "n_citations": 32, "n_key_citations": 6, "score": 0}, {"corpus_id": 30546574, "title": "Aspirin Intake Correlates With Halted Growth of Sporadic Vestibular Schwannoma In Vivo", "abstract": "Objective Given the presence of a pathological immune response in sporadic vestibular schwannoma (sVS) this study aims to explore the roles of aspirin in minimizing sVS growth in vivo. Study Design Retrospective case review. Setting Tertiary care hospital. Patients People diagnosed with sVS and followed at a tertiary referral center by serial magnetic resonance imaging (MRI) for at least 4 months within the period of January 1980 through April 2012. Main Outcome Measures Patient use of aspirin and sVS growth rate measured by changes in the largest tumor dimension as noted on serial MRIs Results Within a set of 689 cases, 347 were followed by serial MRI scans (50.3% of the latter, 81 took aspirin, of which, 33 demonstrated sVS growth, and 48 did not. Of the 266 nonaspirin users, 154 demonstrated sVS growth, and 112 did not. A significant inverse association was found among aspirin users and sVS growth (odds ratio [OR] 0.50, 95% confidence interval [CI] 0.29 0.85) which was not confounded by age or sex. Conclusion Our results suggest a potential therapeutic role of aspirin in inhibiting sVS growth.", "venue": "Otology neurotology official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology", "year": 2014.0, "author_names": ["Cherian K Kandathil", "Sonam Dilwali", "Chen-Chi Wu", "Metin Ibrahimov", "Michael J McKenna", "Hang Lee", "Konstantina M Stankovic"], "n_citations": 40, "n_key_citations": 8, "score": 0}, {"corpus_id": 205609626, "title": "Tumor Penetrating Delivery of siRNA against TNFa to Human Vestibular Schwannomas", "abstract": "Vestibular schwannoma (VS) is the most common tumor of the cerebellopontine angle, and it typically presents with sensorineural hearing loss. The genomic landscape of schwannoma is complex and many of the molecules implicated in VS pathogenesis represent targets not amenable to antibody based or small molecule therapeutics. Tumor targeted delivery of small interfering RNA (siRNA) therapeutics provides a direct and effective means to interrogate targets while minimizing off target effects. To establish a preclinical model for therapeutic inhibition of putative targets in VS, archived tumor specimens, fresh tumor cells derived from patients with sporadic VS, and an established schwannoma cell line were screened. Nanoparticles directed by the tumor homing peptide iRGD were selectively taken up by primary VS cultures in vitro via interactions with avb3/b5 integrins and neuropilin 1 (NRP 1) Cellular uptake was inhibited by a neutralizing antibody against av integrin in a dose dependent manner. When applied to primary VS cultures, iRGD targeted nanoparticles delivered siRNA directed against TNFa in a receptor specific fashion to potently silence gene expression and protein secretion. 
Taken together, our results provide a proof of principle for tumor targeted, nanoparticle mediated delivery of siRNA to VS and establish a novel platform for the development and pre clinical screening of molecular therapeutics against VS.", "venue": "Scientific Reports", "year": 2017.0, "author_names": ["Yin Ren", "Jessica E Sagers", "Lukas D Landegger", "Sangeeta N Bhatia", "Konstantina M Stankovic"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 46876362, "title": "A Unified Methodological Framework for Vestibular Schwannoma Research.", "abstract": "Vestibular schwannomas are the most common neoplasms of the cerebellopontine angle, making up 6 8% percent of all intracranial growths. Though these tumors cause sensorineural hearing loss in up to 95% of affected individuals, the molecular mechanisms underlying this hearing loss remain elusive. This article outlines the steps established in our laboratory to facilitate the collection and processing of various primary human tissue samples for downstream research applications integral to the study of vestibular schwannomas. Specifically, this work describes a unified methodological framework for the collection, processing, and culture of Schwann and schwannoma cells from surgical samples. This is integrated with parallel processing steps now considered essential for current research: the collection of tumor and nerve secretions, the preservation of RNA and the extraction of protein from collected tissues, the fixation of tissue for the preparation of sections, and the exposure of primary human cells to adeno associated viruses for application to gene therapy. Additionally, this work highlights the translabyrinthine surgical approach to collect this tumor as a unique opportunity to obtain human sensory epithelium from the inner ear and perilymph. Tips to improve experimental quality are provided and common pitfalls highlighted.", "venue": "Journal of visualized experiments JoVE", "year": 2017.0, "author_names": ["Lukas D Landegger", "Jessica E Sagers", "Sonam Dilwali", "Takeshi Fujita", "Mehmet Ilhan Sahin", "Konstantina M Stankovic"], "n_citations": 8, "n_key_citations": 0, "score": 0}, {"corpus_id": 3071547, "title": "Mutations of the BRAF gene in human cancer", "abstract": "Cancers arise owing to the accumulation of mutations in critical genes that alter normal programmes of cell proliferation, differentiation and death. As the first stage of a systematic genome wide screen for these genes, we have prioritized for analysis signalling pathways in which at least one gene is mutated in human cancer. The RAS RAF MEK ERK MAP kinase pathway mediates cellular responses to growth signals. RAS is mutated to an oncogenic form in about 15% of human cancer. The three RAF genes code for cytoplasmic serine/threonine kinases that are regulated by binding RAS. Here we report BRAF somatic missense mutations in 66% of malignant melanomas and at lower frequency in a wide range of human cancers. All mutations are within the kinase domain, with a single substitution (V599E) accounting for 80% Mutated BRAF proteins have elevated kinase activity and are transforming in NIH3T3 cells. Furthermore, RAS function is not required for the growth of cancer cell lines with the V599E mutation. 
As BRAF is a serine/threonine kinase that is commonly activated by somatic point mutation in human cancer, it may provide new therapeutic opportunities in malignant melanoma.", "venue": "Nature", "year": 2002.0, "author_names": ["H Davies", "Graham Bignell", "Charles Cox", "Philip J Stephens", "Sarah Edkins", "Sheila Clegg", "Jon W Teague", "Hayley B Woffendin", "Mathew J Garnett", "William E Bottomley", "Neil Davis", "E Dicks", "Rebecca Ewing", "Yvonne Floyd", "Kristian A Gray", "Sarah Hall", "Rachel Hawes", "Jaime Hughes", "Vivian Kosmidou", "Andrew Menzies", "Catherine Mould", "Adrian Parker", "Claire H Stevens", "Stephen Watt", "Steven Hooper", "Rebecca Wilson", "Hiran Jayatilake", "Barry Gusterson", "Colin S Cooper", "Janet M Shipley", "Darren R Hargrave", "Kathy Pritchard-Jones", "Norman J Maitland", "Georgia Chenevix-Trench", "Gregory J Riggins", "Darell D Bigner", "Giuseppe Palmieri", "Antonio Cossu", "Adrienne M Flanagan", "Andrew G Nicholson", "Judy W C Ho", "Suet Yi Leung", "Siu Tsan Yuen", "Barbara L Weber", "Hilliard F Seigler", "Timothy L Darrow", "Hugh E Paterson", "Richard M Marais", "Christopher J Marshall", "R Wooster", "Michael R Stratton", "P Andrew Futreal"], "n_citations": 9145, "n_key_citations": 482, "score": 0}, {"corpus_id": 205145885, "title": "From ancient herb to modern drug: Artemisia annua and artemisinin for cancer therapy.", "abstract": "Artemisia annua L. is used throughout Asia and Africa as tea and press juice to treat malaria and related symptomes (fever, chills) Its active ingredient, artemisinin (ARS) has been developed as antimalarial drug and is used worldwide. Interestingly, the bioactivity is not restricted to malaria treatment. We and others found that ARS type drugs also reveal anticancer in vitro and in vivo. In this review, we give a systematic overview of the literature published over the past two decades until the end of 2016. Like other natural products, ARS acts in a multi specific manner against tumors. The cellular response of ARS and its derivatives (dihydroartemisinin, artesunate, artemether, arteether) towards cancer cells include oxidative stress response by reactive oxygen species and nitric oxide, DNA damage and repair (base excision repair, homologous recombination, non homologous end joining) various cell death modes (apoptosis, autophagy, ferroptosis, necrosis, necroptosis, oncosis) inhibition of angiogenesis and tumor related signal transduction pathways (e.g. Wnt/b catenin pathway, AMPK pathway, metastatic pathways, and others) and signal transducers (NF kB, MYC/MAX, AP 1, CREBP, mTOR etc) ARS type drugs are at the stairways to the clinics. Several published case reports and pilot phase I/II trials indicate clinical anticancer activity of these compounds. Because of unexpected cases of hepatotoxicity, combinations of ARS type drugs with complementary and alternative medicines are not recommended, until controlled clinical trials will prove the safety of non approved combination treatments.", "venue": "Seminars in cancer biology", "year": 2017.0, "author_names": ["Thomas Efferth"], "n_citations": 200, "n_key_citations": 7, "score": 0}, {"corpus_id": 3345796, "title": "Targeting the Raf MEK ERK mitogen activated protein kinase cascade for the treatment of cancer", "abstract": "Mitogen activated protein kinase (MAPK) cascades are key signaling pathways involved in the regulation of normal cell proliferation, survival and differentiation. Aberrant regulation of MAPK cascades contribute to cancer and other human diseases. 
In particular, the extracellular signal regulated kinase (ERK) MAPK pathway has been the subject of intense research scrutiny leading to the development of pharmacologic inhibitors for the treatment of cancer. ERK is a downstream component of an evolutionarily conserved signaling module that is activated by the Raf serine/threonine kinases. Raf activates the MAPK/ERK kinase (MEK)1/2 dual specificity protein kinases, which then activate ERK1/2. The mutational activation of Raf in human cancers supports the important role of this pathway in human oncogenesis. Additionally, the Raf MEK ERK pathway is a key downstream effector of the Ras small GTPase, the most frequently mutated oncogene in human cancers. Finally, Ras is a key downstream effector of the epidermal growth factor receptor (EGFR) which is mutationally activated and/or overexpressed in a wide variety of human cancers. ERK activation also promotes upregulated expression of EGFR ligands, promoting an autocrine growth loop critical for tumor growth. Thus, the EGFR Ras Raf MEK ERK signaling network has been the subject of intense research and pharmaceutical scrutiny to identify novel target based approaches for cancer treatment. In this review, we summarize the current status of the different approaches and targets that are under evaluation and development for the therapeutic intervention of this key signaling pathway in human disease.", "venue": "Oncogene", "year": 2007.0, "author_names": ["Patrick J Roberts", "Channing J Der"], "n_citations": 2423, "n_key_citations": 103, "score": 0}, {"corpus_id": 22086516, "title": "EGF ERBB signalling: towards the systems level", "abstract": "Signalling through the ERBB/HER receptors is intricately involved in human cancer and already serves as a target for several cancer drugs. Because of its inherent complexity, it is useful to envision ERBB signalling as a bow tie configured, evolvable network, which shares modularity, redundancy and control circuits with robust biological and engineered systems. Because network fragility is an inevitable trade off of robustness, systems level understanding is expected to generate therapeutic opportunities to intercept aberrant network activation.", "venue": "Nature Reviews Molecular Cell Biology", "year": 2006.0, "author_names": ["Ami Citri", "Yosef Yarden"], "n_citations": 1807, "n_key_citations": 101, "score": 0}]} -{"query": "productivity in telework", "session_id": 3641507066071779, "user_id": 818591807266484, "candidates": [{"corpus_id": 203315324, "title": "Mechanisms to improve labor productivity by performing telework", "abstract": "Abstract This study investigates mechanisms underlying the influence of telework on labor productivity in Japan. First, this study finds that appropriate telework hours increase labor productivity, but when telework hours are too long, telework decreases labor productivity. Second, telework increases life satisfaction, and life satisfaction improves labor productivity. However, telework increases the stress of balancing work and domestic chores, contrary to Japanese governmental expectations, and the stress decreases life satisfaction. The stress, fortunately, does not directly reduce labor productivity. Although telework increases happiness and work satisfaction, these factors do not influence labor productivity. Third, this study clarifies that telework is more efficient for improving labor productivity if workers commute more than 1 h or commute by trains or buses that are usually very crowded during rush hours in Japan. 
Finally, the effect of telework for workers who have a greater number of potential trivial duties is insignificantly larger. Supervisors and colleagues often ask others to perform trivial, extra tasks without regard for schedules. Telework may help workers avoid such trivial duties and increase labor productivity. However, the importance of trivial duties is also demonstrated in this study.", "venue": "", "year": 2020.0, "author_names": ["Sachiko Kazekami"], "n_citations": 48, "n_key_citations": 4, "score": 0}, {"corpus_id": 226704536, "title": "Workplace Motivation: Addressing Telework as a Mechanism for Maintaining Employee Productivity", "abstract": "3 Introduction. 4 Historical Background. 6 Problem Statement. 8 Purpose of Study. 9 Thesis Statement. 10 Theoretical Framework. 10 Generational Cohort Theory. 11 Human Capital Theory 11 Work Motivation Theory. 12 Definitions. 12 Literature Review. 15 Generational Behaviors. 14 Telework: New Horizons. 20 Enhancing Motivation and Workplace Productivity. 23 Limitation Future Directions. 27 Conclusion. 28 References. 30 WORKPLACE MOTIVATION 3 Abstract This research seeks to identify social and psychological factors that affect satisfaction levels of millennial and gen z employees The thesis suggests teleworking as a renewed tool for communicating, motivating, and executing work in organizations to increase productivity. The main factors identified for said analysis have been determined through the study of business and academic literature about workplace culture and how it is changing. The research at hand investigated the differences between baby boomer, millennial and gen z employees and how participating in telework may enhance their output. For analysis, three theories were referenced in relation to age, productivity and motivation. Generational Cohort Theory, Human Capital Theory and Work Motivation Theory were used to provide a more valuable and informed understanding of how telework is effective. The findings presented in this paper can be continued through qualitative interviews and case studies of companies using telework as a resource for increased employee productivity and motivation.", "venue": "", "year": 2020.0, "author_names": ["Kaitlyn Fujii"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 199346017, "title": "Telework Impact on Productivity and Well Being An Australian Study", "abstract": "The proliferation of collaboration and networking tools such as HipChat, Yammer, Quip, Smartsheet, Salesforce Community Cloud, mobile devices, and smartphones create multiple opportunities to work in many locations away from the traditional office. 
We define telework or telecommuting as a flexible work arrangement that allows people to work from any location other than the traditional office on either a temporary or regular basis (Di Martino and Wirth 1990; Maruyama, Hopkinson and James 2009)", "venue": "", "year": 2017.0, "author_names": ["Rachelle Bosua", "Sherah Kurnia", "Marianne Gloet", "Antonette Mendoza"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 114539022, "title": "Finding the Optimal Mix between Telework and Office Hours to Enhance Employee Productivity A Study into the Relationship between Telework Intensity and Individual Productivity, with Mediation of Intrinsic Motivation and Moderation of Office Hours", "abstract": "This survey study among 111 teleworkers in a bank organization investigated the relationship between telework intensity and individual productivity, and whether this relationship was mediated by employees' intrinsic motivation. Also the moderating role of office hours in the model's associations was studied. Based on the Job Demands Resources Model (Bakker Demerouti, 2007) and the professional isolation literature (e.g. Golden, Vega, Dino, 2008) we developed and tested a set of hypotheses. Partly in line with expectations, we found a direct curvilinear relationship between telework intensity and individual productivity, characterized by a slight, non significant positive association at the low telework intensity end, and a significant negative association for the high telework intensity end. Strikingly, we neither found support for a mediating role of intrinsic motivation, nor for a moderation effect of the number of office hours in the relationship between telework intensity and intrinsic motivation. However, the direct relationship between telework intensity and individual productivity appeared to be moderated by the number of office hours. It was concluded that consequences for productivity are contingent on telework intensity, and that the number of office hours has an important impact on the consequences of different telework intensities. The study's outcomes can inform management and HR practitioners to understand how to implement and appropriately make use of telework.", "venue": "", "year": 2016.0, "author_names": ["Niels Hoornweg", "Pascale Peters", "Beatrice van der Heijden"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 234482118, "title": "From Forced Working From Home to Working From Anywhere: Two Revolutions in Telework", "abstract": "The COVID 19 outbreak has admittedly caused a major disruption worldwide. The interruptions to production, transportation, and mobility have clearly had a significant impact on the well functioning of the global supply and demand chain. But what happened to the companies developing digital services, such as software. Were they interrupted as much or at all? And how has the enforced Working From Home mode impacted their ability to continue to deliver software? We hear that some managers are concerned that their engineers are not working effectively from home, or even lack the motivation to work in general, that teams lose touch and that managers do not notice when things go wrong. In this article, we share our findings from monitoring the situation in an international software company with engineers located in Sweden, USA, and the UK. 
We analyzed different aspects of productivity, such as developer satisfaction and well being, activity, communication and collaboration, efficiency and flow based on the archives of commit data, calendar invites, and Slack communication, as well as the internal reports of WFH experiences and 18 interviews. We find that company engineers continue committing code and carry out their daily duties without significant disruptions, while their routines have gradually adjusted to the new norm with new emerging practices and various changes to the old ones. In a way, our message is that there is no news, which is good news. Yet, the experiences gained with the WFH of such scale have already made significant changes in the software industry's future, work from anywhere being an example of major importance.", "venue": "", "year": 2021.0, "author_names": ["Darja Smite", "Nils Brede Moe", "Eriks Klotins", "Javier Gonzalez-Huerta"], "n_citations": 1, "n_key_citations": 0, "score": 1}, {"corpus_id": 167683829, "title": "No Place Like Home: The Effect of Telework Gains on Knowledge Worker Productivity", "abstract": "An unanswered question regarding telework is how differences between the office and the home work environment influence the effect of the extent of telework on productivity. Drawing from research a.", "venue": "", "year": 2014.0, "author_names": ["Nick van der Meulen", "Peter J van Baalen", "Eric van Heck"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 155117779, "title": "Telework in the United Kingdom as a model of flexibility, consensus, voluntarism and productivity", "abstract": "", "venue": "", "year": 2015.0, "author_names": ["Joseph Roger Carby-Hall"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 110110326, "title": "Telework, productivity and wellbeing: an Australian perspective", "abstract": "Developments in networking and collaboration technologies offer new opportunities for employees to telework. Even though studies indicate that teleworkers can be more productive when working away from the office, results are mostly self reported. Additionally, no studies have yet explored telework in terms of productivity and wellbeing from both a managerial and employee perspective in Australia. We followed a qualitative research design to explore telework, productivity and wellbeing, as well as a quantitative component to measure daily experiences of workers on telework and non telework days. 
Findings indicate that 1) productivity is a management concern and requires a different management approach to yield productive outcomes; 2) high level IT support is required for workers to be more productive; and 3) the ability to telework fosters wellbeing, which in turn contributes to productivity.", "venue": "", "year": 2013.0, "author_names": ["Rachelle Bosua", "Marianne Gloet", "Sherah Kurnia", "Antonette Mendoza", "Jongsay Yong"], "n_citations": 17, "n_key_citations": 1, "score": 0}, {"corpus_id": 159417476, "title": "Working from home: characteristics and outcomes of telework", "abstract": "PurposeThe purpose of this paper is to investigate the relationships between theoretically grounded telework factors and various individual and organizational outcomes of telework (overall satisfaction with telework, perceived advantages of telework, career opportunities and self reported productivity).Design/methodology/approachBased on a literature review, ten telework factors that may affect individual and organizational telework outcomes were identified and empirically tested using the survey data of 128 teleworkers exercising different telework intensity and representing various sectors of the economy.FindingsThe bundle of theoretically selected variables explained a significant part of the variance of telework outcomes. Reduced communication with co workers, supervisor's trust and support, suitability of the working place at home were found to be the most important telework factors impacting different telework outcomes. Higher self reported productivity was related to reduced time in communicating with co workers, a suitable working place at home and the possibility to take care of family members when teleworking.Practical implicationsThis study provides insights about the management of telework in organizations by highlighting the factors that promote the satisfaction, productivity and perceived career opportunities of teleworkers.Originality/valueThis paper challenges the results of previous research on the factors related with telework and its outcomes. Based on the job demands resources theory, the authors identified the factors that serve as resources in generating positive telework outcomes, and the factors increasing job demands and reducing satisfaction with telework.", "venue": "International Journal of Manpower", "year": 2019.0, "author_names": ["Audrone Nakrosiene", "Ilona Buciuniene", "Bernadeta Gostautaite"], "n_citations": 51, "n_key_citations": 3, "score": 0}, {"corpus_id": 228991836, "title": "Labor Force Telework Flexibility and Asset Prices: Evidence from the COVID 19 Pandemic", "abstract": "We show that labor force telework flexibility (LFTF) is a first order effect in accounting for the variations of asset prices and firm policies during the COVID 19 pandemic. Specifically, firms in high LFTF industries significantly outperform firms in low LFTF industries in stock returns. The positive LFTF return relation extends to G7 countries and is stronger in countries with more severe pandemic. A decomposition analysis of the LFTF measure shows that the job characteristics associated with the central component of telework, information and communication technologies, are the main driving force of the result. A dynamic neoclassical model of firms operating multiple job tasks together with pandemic shocks captures the positive relationship between labor force flexibility and stock returns. 
The model mechanism highlights that i) job task flexibility is a key driving force of the cross industry heterogeneity in firm value fluctuations, and ii) combining labor productivity (supply) and uncertainty shocks is crucial to generate the large drop and persistent recovery in firm value and output.", "venue": "", "year": 2020.0, "author_names": ["Jack Y Favilukis", "Xiaoji Lin", "Ali Sharifkhani", "Xiaofei Zhao"], "n_citations": 11, "n_key_citations": 0, "score": 0}]} -{"query": "WhatsApp group chat analysis using nlp", "session_id": 2120889615097577, "user_id": 6066305118652895, "candidates": [{"corpus_id": 233136050, "title": "People's Behaviour Analysis in Chat Message using Natural Language Processing", "abstract": "Nowadays, the mode of communication is mainly through messages. A lot of information has been conveyed through WhatsApp. WhatsApp is the most popular chat application with active users of more than 650 million. It has been widely used by all, especially among the business people and youngsters. Using several analyzing tools, users can analyse the WhatsApp group chat or personal chat. Authentically users wish to analyse their chat for several purposes. This research work is intended to perform a flirt analysis and time analysis. This project has many use cases like the parent, who wants to analyze their child chat; the police, who want to get valuable information from culprit chat; the business people, who wants to know the status of the business in the group chat. Using the Deep Learning model (NLP) sentimental analysis has been performed for each text. This helps to find the state of mind of the chatters. Further, this research work calculates the number of positive and negative statements that are used by each person in the text by using the text mining concept. As now due to this pandemic situation, every conversation and also the important discussion has been done through the WhatsApp and it was highly needed for the person who wants to check their child's conversation and also for the higher authority for enquiry and for the business chair person who are needed to analyse their business well being group can also be used for their personal usage of analyse using the algorithm in this method.", "venue": "2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV)", "year": 2021.0, "author_names": ["V Selina Annie Retna", "P Brundha", "G Rajkumar"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 133515808, "title": "Sentiment Analysis on WhatsApp Group Chat Using R", "abstract": "In today's world, the most popular chat application for fast communication is WhatsApp. Every smart phone user uses this mobile application for message communication. It is free and very fast communication mobile application, but nowadays people have become addict of this application and the negative aspect of it is also that some people have started using it for provoking people today. Today we are not using and running it, but it is running us which can prove to be very dangerous for us. Some fake news spread quickly by WhatsApp. So there is need to analyze WhatsApp chat by user's sentiment or opinion. This technique is known as sentiment analysis or opinion mining. 
In this review paper, I used group chat of WhatsApp as database and analyze sentiments and emotions using R Studio.", "venue": "", "year": 2019.0, "author_names": ["Sunil Joshi"], "n_citations": 2, "n_key_citations": 1, "score": 0}, {"corpus_id": 86852479, "title": "THE EFFECTIVENESS OF WHATSAPP GROUP CHAT FOR INTERNAL COMMUNICATION AT KADOKAWA GEMPAK STARZ(tm)", "abstract": "This study emphasizes on the proliferation of technological devices such as smartphones and the effectiveness of WhatsApp Group Chat in working organization such as Kadokawa Gempak Starz(tm) for Internal Communication. Technology is applied in work as they aid in solving problems and easy access to information. There were two research objectives which the study seeks to determine. First is to explain the adoption of WhatsApp Group Chat for Internal Communication through Technology Acceptance Model. Second, is to examine the challenges of using WhatsApp Group Chat for Internal Communication. The study employed a research design based on questionnaire with a total number of 120 respondents. Ultimately the findings showed that the respondents find it challenging to use WhatsApp Group Chat for Internal Communication and are on the fence about its usage for Internal Communication as well as the effectiveness to serve as a communication tool for internal Communication at Kadokawa Gempak Starz(tm) all of these variables will hugely impacted the behavioral intention towards WhatsApp Group Chat in the future.", "venue": "Book Chapters of The 1st Jakarta International Conference on Social Sciences and Humanities (JICoSSH)", "year": 2019.0, "author_names": ["Nadirah Binti Nissanto"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 234810239, "title": "Live Chat Analysis Using Machine Learning", "abstract": "The World Wide Web such as social networking sites and blog comments forum has huge user comments emotion data from different social events and product brand and arguments in the form of political views. Generate a heap. Reflects the user's mood on the network, the reader, has a huge impact on product suppliers and politicians. The challenge for the credibility of the analysis is the lack of sufficient tag data in the Natural Language Processing (NLP) field. Positive and negative classify content based on user feedback, live chat, whether the user is used as the base for a wide range of tasks related to the text content of a meaningful assessment. Data collection, and function number for all variants. A recurrent neural network is very good text classification. Analyzing unstructured form from social media data, reasonable structure, and analyzes attach great importance to note for this emotion. Emotional rewiring can use natural language processing sentiment analysis to predict. In the method by the Recurrent Neural Networks (RNNs) of the proposed prediction chat live chat into sentiment analysis. Sentiment analysis and in depth learning technology have been integrated into the solution to this problem, with their deep learning model automatic learning function is active. 
Using a Recurrent Neural Networks (RNNs) reputation analysis to solve various problems and language problems of text analysis and visualization product retrospective sentiment classifier cross depth analysis of the learning model implementation.", "venue": "", "year": 2021.0, "author_names": ["S Kavibharathi", "S Lakshmi Priyankaa", "M Kaviya", "Dr S Vasanthi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 235434345, "title": "Group chat analysis of hoax detection during the covid 19 pandemic using the k nearest neighbors algorithm and massive text processing", "abstract": "Group chat is the most widely used choice of various short information. Besides being easy to send messages, sharing short messages in group chat is considered effective compared to sending massively to several users. The ease of sending short messages in group chat is often used as the spread of fake news and untrue news or hoaxes, especially during the Covid 19 pandemic, the information shared can be easily shared by anyone without seeing a valid source. The dissemination of information related to Covid 19 without a clear source is a dangerous act, because it can lead users into false information and endanger themselves. Fake message detectors have not been widely implemented in instant message applications, for this reason, there is a need for a detector and a machine to analyze activities in group chat and see whether the message is included in content containing fake news or not. If a group chat has a lot of fake news, you can be sure that the group chat is not good to follow. The use of the K Nearest Neighbors algorithm is considered quite effective in classifying an object, the results can be determined whether it is included in fake news, miss information news, or true news. The process of processing messages is carried out by the massive text processing method because the characteristics of the text are different for each user so that text processing can be maximized for later classification. As a result, group chat can be analyzed based on active time, user messages, user activity, and messages sent between users.", "venue": "", "year": 2021.0, "author_names": ["K Umam"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213445849, "title": "The Use of Social Media by Female Physicians in an International Setting: A Mixed Methods Study of a Group WhatsApp Chat", "abstract": "Background: The past decade has witnessed an increase in informal and bottom up driven \"she for she\" efforts, often using social media, to promote the advancement of women in medicine. Yet, this area of research is nascent with limited information on the use of social media platforms by female physicians, especially in the international medical arena. The purpose of this study was to investigate the use of a social media platform by a diverse group of female physicians in an international setting. Materials and Methods: The study used a mixed methods approach, including quantitative descriptive statistics and qualitative thematic analysis of the content of posts of a women physicians WhatsApp group during a 1 year time period (June 1, 2018 May 31, 2019) Results: The group consisted of 122 members with 4897 posts during the 1 year time period. Nine themes were identified including requests for medical information, logistics, personal recommendations, promotion, celebration, community engagement, education, women's empowerment, and employment inquiries. 
Engagement was high with 72% of members posting during the last 30 days of analysis and 92% of questions posted receiving a response, often within minutes. There were no instances of unprofessional social media behavior. Conclusions: The social media platform was effective in enabling female physicians to expand networks, exchange ideas, share scientific information, celebrate accomplishments, and provide support to colleagues. Creating a social media forum for women physicians may be an effective tool to foster a network of support and community.", "venue": "Women's health reports", "year": 2020.0, "author_names": ["Halah Ibrahim", "Pascale Anglade", "Sawsan Abdel-Razig"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 222176467, "title": "Statement Analysis using NLP", "abstract": "We aim to draw on an important overlooked potential of affective dialogue systems their application to promote positive emotional states, similar to that of emotional support between humans. This can be achieved by eliciting a more positive emotional valence throughout a dialogue system interaction, i.e. positive emotion elicitation. Existing works on emotion elicitation have not yet paid attention to the emotional benefit for the users. Moreover, a positive emotion elicitat ion corpus does not yet exist despite the growing number of emotion rich corpora. Towards this goal, first, we propose a response retrieval approach for positive emotion elicitation by utilizing examples of emotion appraisal from a dialogue corpus. Second, we efficiently construct a corpus using the proposed retrieval method, by replacing responses in a dialogue with those that elicit a more positive emotion. We validate the corpus through crowdsourcing to ensure its quality. Finally, we propose a novel neural network architecture for an emotion sensitive neural chat based dialogue system, optimized on the constructed corpus to elicit positive emotion. Objective and subjective evaluations show that the proposed methods result in dialogue responses that are more natural and elicit a more positive emotional response. Further analyses of the results are discussed in", "venue": "", "year": 2020.0, "author_names": ["M S Vinu", "P Mohan", "S Ganesh Moorthy", "M Gobinath"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 199319847, "title": "A Framework for Predicting and Identifying Radicalization and Civil Unrest Oriented Threats from WhatsApp Group", "abstract": "Social media is an emerging area of research for data scientist. Lots of test dataset are available for Facebook, Twitter, and Instagram. Very few research has been done on WhatsApp group chat log due to scarcity of data openly available. WhatsApp is secured by encryption so WhatsApp is heavily used for terrorist activities, spreading riot, creating civil unrest, and disseminating radicalization. Data to conduct research on these issues are hardly available as these are very sensitive information. We are proposing here a framework to address this type of social issue. By NLP and semi supervised machine learning, we can design a system which will predict the probability of WhatsApp group to be a threat for civilization. Success of this framework depends on how much actual data we can feed to system. Depending on training data, it will predict with more accuracy. 
This paper will show how this system can work properly in actual field.", "venue": "Advances in Intelligent Systems and Computing", "year": 2019.0, "author_names": ["Koushik Deb", "Souptik Paul", "Kaustav Das"], "n_citations": 3, "n_key_citations": 1, "score": 0}, {"corpus_id": 220069594, "title": "Impact of the Use of a WhatsApp based Group Chat for Sharing Emergency Department Access Block Data on Overcrowding and Census in a Low Resource Emergency Department in Kigali, Rwanda", "abstract": "In Low Middle Income Countries (LMICs) hospitals often face serious communication issues that threaten to paralyze the process of healthcare provision. The Centre Hospitalier Universitaire de Kigali (CHUK) is located in the middle of a vibrant city of Kigali, and is often overwhelmed by a high number of referred patients from its catchment area, and those brought in by emergency evacuation ambulance system (SAMU) The facility has no interdepartmental landline communication network, which would be ideal to connect the inpatient services to the emergency room in order to fasten the care process. Using WhatsApp based Group Chat for sharing the real time caseloads, the number of patients boarding the emergency room has significantly improved (dropping from 38.1 7.1 to 28 6.5, p<0.001) although the overall length of stay in the emergency room has remained high (3.37 0.61 days) mainly due to other co factors such as the availability of specialized staff (i.e. neurosurgeon) and uninterrupted imaging services (i.e. computer tomography scans)", "venue": "medRxiv", "year": 2020.0, "author_names": ["Menales Nkeshimana", "Christine Uwineza", "Amelia Y Pousson", "Giles N Cattermole"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 216237228, "title": "CODE SWITCHING ANALYSIS IN ENGLISH LITERATURE WHATSAPP GROUP", "abstract": "The aim of this research was to find out the type and reason of code switching on WhatsApp group of Putera Batam University. In collecting the data, the research applied observational method non participatory technique. The data was analyzed by using the theory of Poplack (1980) The researchers applied the text in WhatsApp as the data. It was found 15 texts contain code switching. The data ware classified into three: tag switching, inter sentential switching and intra sentential switching. From the 15 text, intra sentential switching were the most frequent type of code switching because WhatsApp group members often change language from Indonesian to English in only a few sentences that appear at the beginning, in the middle, and at the end of sentence. Code switching have ten reason using theory of Grosjean (1981) First, to fill the linguistic needs for lexical items, specify phrases, discourse markers, or sentence fillers. Second, to continue the last language used (trigger) Third, quote someone. Fourth, to determine the recipient. Fifth, to qualify the message: strengthen or emphasize. Sixth, to determine the involvement of the speaker (personal message) Seventh, to mark and emphasize group identity. Eighth, conveys confidentiality, anger, and harassment. Ninth, to exclude someone from conversation. Tenth, change the role of the speaker: Increase status, add authority, and show expertise. 
For reasons in the WhatsApp group only 3 of 10 kinds of reasons were found: to fill the linguistic needs for lexical items, to continue the last language used (triggered) and to determine speaker involvement.", "venue": "", "year": 2020.0, "author_names": ["Thessa Cynthia Ameliza", "Ambalegin Ambalegin"], "n_citations": 3, "n_key_citations": 0, "score": 0}]} -{"query": "Chance constrained programming with joint constraints", "session_id": 1413308515174799, "user_id": 4648453903359274, "candidates": [{"corpus_id": 8449238, "title": "Chance Constrained Programming with Joint Constraints", "abstract": "Miller and Wagner have shown that a deterministic equivalent of a joint chance constrained programming model with independent random right hand side elements is a concave programming problem. This paper obtains similar equivalents for chance constrained programming models with coefficient matrices whose elements are normally distributed and with dependent random right hand side elements.", "venue": "Oper. Res.", "year": 1974.0, "author_names": ["Raj Jagannathan"], "n_citations": 322, "n_key_citations": 8, "score": 1}, {"corpus_id": 123662839, "title": "Chance Constrained Programming with Joint Constraints", "abstract": "This paper considers the mathematical properties of chance constrained programming problems where the restriction is on the joint probability of a multivariate random event. One model that is considered arises when the right handside constants of the linear constraints are random. Another model treated here occurs when the coefficients of the linear programming variables are described by a multinormal distribution. It is shown that under certain restrictions both situations can be viewed as a deterministic nonlinear programming problem. Since most computational methods for solving nonlinear programming models require the constraints be concave, this paper explores whether the resultant problem meets the concavity assumption. For many probability laws of practical importance, the constraint in the first type of model is shown to violate concavity. However, a simple logarithmic transformation does produce a concave restriction for an important class of problems. The paper also surveys the \"generalized linear programming\" method for solving such problems when the logarithmic transformation is justified. For the second type model, the constraint is demonstrated to be nonconcave.", "venue": "", "year": 1965.0, "author_names": ["Bruce L Miller", "Harvey M Wagner"], "n_citations": 184, "n_key_citations": 3, "score": 1}, {"corpus_id": 56178668, "title": "Towards sustainable water resources planning and pollution control: Inexact joint probabilistic double sided stochastic chance constrained programming model.", "abstract": "This study presents an inexact joint probabilistic double sided stochastic chance constrained programming (IJDSCCP) model for sustainable water resources planning and pollution control in water quality management systems under uncertainty. Techniques of interval parameter programming (IPP) joint probabilistic programming (JPP) and double sided stochastic chance constrained programming (DSCCP) are incorporated into a modeling framework. The IJDSCCP can not only address uncertainties presented as interval parameters and double sided randomness (i.e. both left hand and right hand sides) that are characterized as normal distributions, but also examine the reliability level of satisfying the entire system constraints. 
It further improves upon conventional stochastic chance constrained programming for handing random uncertainties in the left hand and right hand sides of constraints. Moreover, a non equivalent but sufficient linearization form of the IJDSCCP is presented to solve such a problem. Then, the model is applied to a representative case for water resources planning and pollution control. The results including water resources planning solutions, pollution control plans and system benefits under the combinations of different joint and individual probability levels will be obtained. The solutions are expressed as combinations of deterministic, interval and distributional information, which can facilitate analysis of different forms of uncertainties. After investigating and comparing the variations of results, it is found that an increasing joint probability level can lead to higher system benefits, i.e. [13,841.68, 21,801.81] x 106 Yuan (p 0.01, p1 0.0033, p2 0.0033 and p3 0.0033) [14,150.26, 22,260.06] x 106 Yuan (p 0.05, p1 0.0166, p2 0.0166 and p3 0.0166) and [14,280.55, 22,415.52] x 106 Yuan (p 0.10, p1 0.033, p2 0.033 and p3 0.033) A set of decreased individual probability levels gives rise to the maximum system benefits at the same joint probability level. Furthermore, the results of the IJDSCCP are compared with a general interval based optimization framework as well. Therefore, the results from the IJDSCCP are valuable for assisting managers in generating and identifying decision alternatives under different scenarios.", "venue": "The Science of the total environment", "year": 2019.0, "author_names": ["Chenglong Zhang", "Shanshan Guo", "Fan Zhang", "Bernard A Engel", "Ping Guo"], "n_citations": 13, "n_key_citations": 1, "score": 0}, {"corpus_id": 12742056, "title": "A Unified Approach for Multiobjective Fuzzy Chance Constrained Programming with Joint Normal Distribution", "abstract": "Abstract This paper describes some mathematical techniques and modeling aspects for solving fuzzy multiobjective probabilistic decision making problems in which the constraints are jointly distributed and the right sided parameters of the constraints are normally distributed fuzzy random variables. The probabilistic model is first converted into equivalent fuzzy programming model by using incomplete gamma function described in a fuzzy decision making environment. Then independent optimal solution of each objective are determined under the decomposed set of system constraints which are obtained by considering fuzzy nature of parameters involved with them. The tolerance membership function for measuring the degree of satisfaction of the decision maker with the achievement of objective values is defined. The membership functions are then converted into fuzzy goals by assigning unity as aspiration level. Finally a weighted fuzzy goal programming technique is used to achieve the highest degree of each of the defined membership goal to the extent possible by minimizing under deviational variables and thereby obtaining most satisfactory solution in the decision making context which leads to an efficient as well as optimal compromise solution. 
A numerical example is solved to illustrate the proposed methodology and the solution is compared with some other technique developed earlier.", "venue": "", "year": 2013.0, "author_names": ["Animesh Biswas", "Nilkanta Modak"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 231617482, "title": "Joint Planning of Distributed PV Stations and EV Charging Stations in the Distribution Systems Based on Chance Constrained Programming", "abstract": "Simultaneous deployment of the electric vehicle charging stations (EVCSs) and distributed photovoltaic stations (DPVSs) in the distribution systems is an effective way to reduce greenhouse gas emissions, promote renewable power adoption, and achieve sustainable development in energy utilization. In this context, how to deploy the EVCSs and DPVSs in the distribution systems with a reasonable scheme is of great importance. In this article, a joint planning model is developed to optimize locations and capacities of the EVCSs and DPVSs simultaneously to reduce energy losses in the distribution systems. In the joint planning model, constraints on bus voltage deviations and line currents are both formulated as chance constraints to ensure that the distribution systems are in reasonable operating statues. To quantify these two chance constraints, a scenario based method is developed to calculate the probabilistic power flow of the distribution systems during a typical planning day, in which random characters of the DPVS generations and the EVCS charging loads are both considered. The joint planning model of the EVCSs and DPVSs developed in this article is difficult to be solved by mathematical optimization methods. Therefore, genetic algorithm (GA) is customized and utilized to solve the joint planning model of the EVCSs and DPVSs. Finally, a case study based on IEEE 33 bus distribution systems validates the joint planning model and its solving algorithm.", "venue": "IEEE Access", "year": 2021.0, "author_names": ["Xinsong Zhang", "Yangyang Xu", "Shengnan Lu", "Cheng Lu", "Yunxiang Guo"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 120472063, "title": "On Chance Constrained Programming Problems with Joint Constraints", "abstract": "In this paper we consider chance constrained programming problems with joint constraints shown in the literature to be equivalent deterministic nonlinear programming problems. Since most existing computational methods for solution require that the constraints of the equivalent deterministic problem be concave, we obtain a simple condition for which the concavity assumption holds when the right hand side coefficients are independent random variables. We show that it holds for most probability distributions of practical importance. For the case where the random vector has a multivariate normal distribution, nonexistence of any efficient numerical methods for evaluating multivariate normal integrals necessitates the use of lower bound approximations. We propose an approximation for the case of positively correlated normal random variables.", "venue": "", "year": 1973.0, "author_names": ["V S Bawa"], "n_citations": 23, "n_key_citations": 2, "score": 0}, {"corpus_id": 5938866, "title": "Technical Note A Class of Nonlinear Chance Constrained Programming Models with Joint Constraints", "abstract": "Miller and Wagner [Opns. Res. 
13, 930 945 1965] define joint chance constrained programming by specifying a set of constants that are joint probability measures of the extent to which constraint violations are permitted. For the special case of a random right hand side vector whose elements are independent random variables, they show that an equivalent deterministic concave program exists. The purpose of this paper is to generalize this result to a class of nonlinear chance constrained programming models with joint constraints.", "venue": "Oper. Res.", "year": 1973.0, "author_names": ["Raj Jagannathan", "M R Rao"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 4697274, "title": "A joint chance constrained programming approach for the single item capacitated lot sizing problem with stochastic demand", "abstract": "We study the single item single resource capacitated lot sizing problem with stochastic demand. We propose to formulate this stochastic optimization problem as a joint chance constrained program in which the probability that an inventory shortage occurs during the planning horizon is limited to a maximum acceptable risk level. We investigate the development of a new approximate solution method which can be seen as an extension of the previously published sample approximation approach. The proposed method relies on a Monte Carlo sampling of the random variables representing the demand in all planning periods except the first one. Provided there is no dependence between the demand in the first period and the demand in the later periods, this partial sampling results in the formulation of a chance constrained program featuring a series of joint chance constraints. Each of these constraints involves a single random variable and defines a feasible set for which a conservative convex approximation can be quite easily built. Contrary to the sample approximation approach, the partial sample approximation leads to the formulation of a deterministic mixed integer linear problem having the same number of binary variables as the original stochastic problem. Our computational results show that the proposed method is more efficient at finding feasible solutions of the original stochastic problem than the sample approximation method and that these solutions are less costly than the ones provided by the Bonferroni conservative approximation. Moreover, the computation time is significantly shorter than the one needed for the sample approximation method.", "venue": "Ann. Oper. Res.", "year": 2018.0, "author_names": ["Celine Gicquel", "Jianqiang Cheng"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 125707982, "title": "Interval joint probabilistic chance constrained programming with two side multi randomness: an application to energy environment systems management", "abstract": "Regional energy environment systems management become more and more focused on greenhouse gas emission control through improving energy efficiency and efficiently managing energy activities. Inexact linear programming models are developed for supporting the management. Due to the weather/climatic variations in the future, electricity demands and renewable power generations (in the right/left hand sides of constraints) have random characteristics. Moreover, an overall satisfactory level needs to be quantified based on multiple chance constraints. 
Therefore, this study improved upon traditional chance constrained programming and interval linear programming, and developed an interval joint probabilistic two side chance constrained programming (IJTCP) approach. A sufficient but non equivalent linearization form of the model was proposed so that the inexact model could be solved through the two step solution algorithm. The IJTCP was then applied to an integrated energy environment systems management under dual uncertainties. The application demonstrated that the IJTCP can effectively address the uncertainties presented as not only interval numbers and two side multi randomness but also the reliability of satisfying the entire system constraints. The application implicated that the IJTCP approach can be applied to other energy environment management problems under dual uncertainties.", "venue": "Stochastic Environmental Research and Risk Assessment", "year": 2017.0, "author_names": ["Gongchen Li", "Wei Sun", "Ying Lv", "Guanhui Cheng", "Yumin Chen", "Guohe Huang"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 157938265, "title": "An interval multistage joint probabilistic chance constrained programming model with left hand side randomness for crop area planning under uncertainty", "abstract": "Abstract The characteristics of the agricultural water management system is its great complexity and uncertainty as well as dynamic variations in the system components, which results in dynamic characteristics in optimizing the agricultural water allocation and crop area planning. In this study, an interval multistage joint probabilistic left hand side chance constrained programming (IMJLCP) model is developed for crop area planning in response to these issues. This method is derived from incorporating the techniques of multistage stochastic programming and joint probabilistic left hand side chance constrained programming within a general interval optimization framework. It can address uncertainties presented as both discrete intervals and probability distributions, and also reflect dynamic characteristics of the system conditions. Moreover, it can reflect randomness in the left hand side of the constraints and examine the reliability level of satisfying constraints at both joint and individual probabilities. The developed method is applied to a case study of dynamic agricultural water management and irrigated crop area planning in different growth stages in the middle reaches of Heihe River Basin, taking groundwater and surface water use into account. Six scenarios with different joint (i.e. p 0.01, 0.05 and 0.1) and individual probabilities (i.e. same and increasing) of the irrigation quota are examined, and a multilayered scenario tree will be provided for a dynamic analysis in a planning horizon. The results indicate that different levels of constraints violation reflect the attitudes of managers to economic benefit and risk. Furthermore, it can help managers to identify desired decision alternatives in intra and inter seasonal water allocation among different crops in different subareas. 
This application makes it highly feasible to enhance the efficiency of irrigation water and ensure sustainable use of water resources, especially for the arid regions dominated by agriculture.", "venue": "", "year": 2017.0, "author_names": ["Chenglong Zhang", "Mo Li", "Ping Guo"], "n_citations": 28, "n_key_citations": 0, "score": 0}]} -{"query": "TRAVEL AND LEARNING: A NEGLECTED TOURISM RESEARCH AREA", "session_id": 2023501772392103, "user_id": 4517292089123025, "candidates": [{"corpus_id": 155037016, "title": "Travel and learning a neglected tourism research area", "abstract": "Abstract This conceptual paper explores the nexus between travel and learning; an area of investigation long neglected by tourism researchers. Using Aristotle's concepts of phronesis techne and episteme a framework for the major areas of literature dealing with touristic learning are considered and opportunities and challenges for expanding the boundaries of knowledge are explored. Key proposals are: learning resulting from tourist experiences is likely to be highly personal and strongly tied to individual interests, motivations and prior knowledge; the nature of learning from a tourist experience only emerges over space and time; and long term meanings created by tourists are likely to be strongly influenced by their perceptions of how these experiences satisfy identity related needs and expectations.", "venue": "", "year": 2012.0, "author_names": ["John H Falk", "R R Ballantyne", "Jan Packer", "Pierre J Benckendorff"], "n_citations": 203, "n_key_citations": 19, "score": 1}, {"corpus_id": 159167537, "title": "Travel induced learning: a validation of the sustainability insight scale", "abstract": "ABSTRACT With 2017 as the UN's International Year of Sustainable Tourism for Development and the role of tourism in the UN Sustainable Development Goals, ensuring that tourism be designed and managed for sustainability is more imperative than ever. Here we present the Sustainability Insight Scale (SIS) which offers scholars and practitioners a practical tool for assessing sustainability specific learning. A strong link between travel and learning is well documented, and recent research documents positive links between travel and pro environmental outcomes. Integrating these writings with scholarship on sustainability meta competencies, we focus attention on four elements of sustainability insights: temporal thinking, interpersonal literacy, systems thinking, and personal connection to life on the planet. When acquired during travel, these insights are likely important precursors to post trip pro environmental behavioural change. With sustainable tourism on the 2030 Agenda for Sustainable Development, the SIS will be of interest to tourism researchers, planners, and policy makers seeking to promote sustainability education.", "venue": "Current Issues in Tourism", "year": 2019.0, "author_names": ["Michael L Lengieza", "Carter A Hunt", "Janet K Swim"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 143375067, "title": "Transformative Learning Theory: A Systematic Review of Travel and Tourism Scholarship", "abstract": "Transformational education is emerging as a key area of study in travel and tourism research. Determining how to facilitate theory driven transformational education is vital to the success of this emergent academic agenda. 
The purpose of this article is to: (a) recommend John Mezirow's transformative learning theory (TLT) as a framework to guide this agenda, and (b) systematically review travel and tourism research, using TLT as the screening criteria, to identify strategies for successfully implementing this framework as educators. Fifty three articles were identified, with only 14 published in tourism journals, indicating that research utilizing TLT in travel and tourism is in its infancy. Results suggest that a greater understanding of the theoretical basis of TLT, as well as scholarship highlighting intentional, creative, and effective uses of TLT in the tourism classroom is needed.", "venue": "", "year": 2015.0, "author_names": ["Garrett A Stone", "Lauren N Duffy"], "n_citations": 25, "n_key_citations": 2, "score": 0}, {"corpus_id": 226452625, "title": "The Study on Religion and Spirituality in Contemporary Travel: Spiritual Tourism in Sri Lanka", "abstract": "Spiritual Tourism is a journey to find the purpose and meaning of your life. It boosts your physical, mental and emotional strength. It develops, maintains, and enhances your body, mind and spirit. This research aims to provide what type of market that help to promote spiritual tourism in Sri Lanka under the objectives, to identify the potentials for spiritual tourism in Sri Lanka, and also to study about public and private sector involvement and the development and the current situation. This is written in the context of a strategic question: aEURoeWhat type of stage that hold the Spiritual tourism in Sri Lanka?aEUR A narrative approach is taken to cover an area of Sri Lankan spiritual tourism potentials and market. However, Sri Lanka had huge potential to develop sustainable tourism through spiritual tourism at present. The researcher used deskwork research methods in collecting data to achieve the research objectives. As the Findings that concluded, Sri Lanka is a huge potential to develop and market spiritual tourism through Buddhism. Apart from that, Sri Lanka as a state which needs marketing and promotion campaigns to attract more tourists while developing the spiritual tourism product at high level.", "venue": "", "year": 2020.0, "author_names": ["A K D T Yohani"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 233582647, "title": "Travel Professors: A YouTube channel about tourism education research", "abstract": "Abstract COVID 19 pandemic has had an immense impact on various aspects of life including tourism, education, and research. Educators increasingly engage with various online platforms transforming education, including YouTube. YouTube has been widely used for blended learning, online education, and for popularisation of research for several years prior to the COVID 19 outbreak. However, the use of YouTube in tourism academia has been lagging. One example is the Travel Professors YouTube channel http:/www.youtube.com/c/TravelProfessors It provides short videos filmed on location about various tourism related topics. It both aims to popularise tourism research and be a useful reference for in class and online learning. The present paper provides a detailed analysis of this YouTube channel over four years combining descriptive statistics, content analysis of reviews, and the creators' reflection. Opportunities and challenges in utilising YouTube in tourism education are demonstrated. 
Suggestions and recommendations for tourism academics on becoming YouTube creators are provided.", "venue": "", "year": 2021.0, "author_names": ["Denis Tolkach", "Stephen Pratt"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 226258280, "title": "Economic valuation of natural promenades in Iran using zonal travel costs method (Case study area: Gahar Lake in Lorestan Province in western Iran)", "abstract": "Gahar Lake is located within Oshtorankooh Protected Area (east of Lorestan Province in Iran) which has extensive potentials for the development of the tourism industry. The aim of the present research was to determine the economic value of the Gohar Lake resort using the zonal travel cost method. Therefore, at first, 380 questionnaires were distributed among the tourists by the simple random sampling method based on appropriate spatiotemporal distribution during the visiting seasons. The questionnaire items were categorized as economic, social, and miscellaneous parts. The calculation results revealed a value of USD 84.538 per visitor and a value of USD 1,986,657.163 per year, indicating the high value and importance of the region. The analysis showed that socio economic variables have a significant role in the use or non use of the resort. The obtained R2 coefficient was 0.82, indicating that around 82% of the changes in the number of visitors can be justified by the variables introduced in the model. The results also revealed the need to pay more attention to this region and formulate a tourism development plan.", "venue": "PloS one", "year": 2020.0, "author_names": ["Ebrahim Kheyri", "Maryam Morovati", "A Sadeghi Neshat", "Gholamreza Siahati"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 236259134, "title": "Using active learning strategies on travel and tourism higher education programmes in Ireland", "abstract": "Abstract The purpose of this reflective paper is to investigate the active learning strategies used in travel and tourism higher education programmes in Ireland. These programmes have undergone substantial adaptation in their delivery over the last Academic Year (2020 2021) due to COVID 19 which has presented many challenges specifically to the delivery of modules with an active learning approach has frequently formed the basis of delivery. Due to the nature of the travel and tourism industry a very active approach is deemed critical in the module delivery within related programmes. Therefore, the main aim of the reflective paper is to examine the use of active learning strategies on travel and tourism higher education programmes in Ireland, specifically Limerick Institute of Technology (LIT) The results of this study can be adapted on other travel and tourism programmes both in Ireland and abroad. Finally, this reflective paper which concentrates on the active learning strategies used in travel and tourism higher education programmes in LIT will be beneficial through contributing to the body of information on this topic, consequently paving the way for future research. 
Proposed areas of future research based on the discoveries of this paper are also highlighted.", "venue": "", "year": 2021.0, "author_names": ["Noelle O'Connor"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 237793773, "title": "Have coffee/tea, will travel: assessing the inclination towards sustainable coffee and tea tourism among the green generations", "abstract": "Purpose This study aims to identify the key variables which determine intentions to visit coffee/tea tourism plantations particularly those adopting sustainable practices. Also, this study ascertained the perception of risk in travelling due to the fear of Covid 19 on travel intentions to such coffee/tea tourism destinations. Design/methodology/approach Using the theory of planned behaviour as a basis for this study's framework, data was gathered from 302 eco conscious Generation Y and Z consumers via an online survey. Partial least squares were then applied to analyse the data. Findings Learning and relaxation motives were important in determining consumers' attitudes towards sustainable coffee/tea tourism. The intention to engage in sustainable coffee/tea tourism is most strongly affected by the risk of travelling, followed by attitude. Research limitations/implications The addition of contemporary variables was given to the theory of planned behaviour's core constructs to better reflect consumers' attitude and behaviour towards a growing form of tourism under unprecedented times. Practical implications Travel or tourism operators will have first hand insights on the factors that drive intentions to visit sustainable coffee and tea destinations, thus enabling more strategic action to be undertaken to reach the targeted young consumers. Originality/value This study examines young, environmental conscious consumers' perspectives on novel travel destinations which adopt sustainable practices. Risk in travelling was assessed which is necessary given Covid 19 has severely disrupted consumers' travel patterns.", "venue": "International Journal of Culture, Tourism and Hospitality Research", "year": 2021.0, "author_names": ["Jasmine A L Yeap", "Say Keat Ooi", "Husna Ara", "Muh Said"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 237258826, "title": "ROLE OF MACHINE LEARNING IN THE TOURISM SECTOR", "abstract": "In today's world, travel and tourism are greatly influenced by technology. Machine learning and artificial intelligence are successfully applied to this area. World's largest travel companies invest a fascinating amount over recommendation systems that in return help them to attract customers and make their services more user friendly. In this paper, the recommendation system algorithms used in the travel domain are studied. The focus of the research is on how machine learning and artificial intelligence are being used to improve the sector. The paper also includes the study of different recommendation algorithms that can be used to generate personalized suggestions. 
_____________________________________________________________________________________________ 1.", "venue": "", "year": 2021.0, "author_names": ["Nazeefa Kazi", "Mandar A Joshi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 210545707, "title": "Motivation and Self Efficacy of Travel and Tourism Business Study Program Using English to Support the Graduate Competence", "abstract": "Tourism Department, State Polytechnic of Bali has carried out a curriculum review on the three study programs in 2007, two years up to when this study is performed. Travel and Tourism Study Program is one of the study program that its curriculum was reviewed. As a result, the study program management has decided to emphasize that English has to be mastered in order to support the core subjects and support the graduates' competence where English is widely used in the industry where they are employed. This study is aimed at investigating the effect of motivation and self efficacy on the survival and communicative level learners and factors causing the high level of selfefficacy and its influence on their English speaking ability. This study applies descriptive qualitative research method. The data were collected through participating observation, interviews with the learners as well as literature study. The study found, so far self efficacy gives a significant effect on learners' speaking ability at the survival and communicative level. It was also found that the four criteria in motivation and selfefficacy have significant influence, which the most dominant is the criteria of learners 'psychic and emotional state of speaking English endeavor. This study contributes empirically that instructors can update their instruction techniques by observing learners' motivation and selfefficacy through communicative activities and practices. Additionally, learners can find out learning condition during the teaching learning process in the purpose of creating a comfortable and conducive learning atmosphere. Keywords motivation, self efficacy, speaking ability, survival level, communicative level", "venue": "", "year": 2019.0, "author_names": ["A A A N Harmini", "Gede Ginaya", "Cokorda Istri Sri Widhari", "I Dewa Gede Ari Pemayun"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "efficient charge recovery logic", "session_id": 4053743050925977, "user_id": 3467221662434280, "candidates": [{"corpus_id": 3021839, "title": "Design and implementation of energy efficient Adiabatic ECRL and basic gates", "abstract": "In this paper Improved structure for efficient charge recovery logic is presented. In order to optimize the power dissipation of digital systems, low power analysis should be applied throughout the design process from system level to process level. Further NAND and NOR gates have been implemented with efficient charge recovery logic (ECRL) and Proposed efficient charge recovery logic. Paper presents a comparative study among the inverter and basic gates at transition frequency varying from 50MHz to 400MHz. The proposed circuits attain large energy saving compared with conventional circuits. 
The circuits are simulated using 180nm technology nodes.", "venue": "2015 International Conference on Soft Computing Techniques and Implementations (ICSCTI)", "year": 2015.0, "author_names": ["M L Keote", "P T Karule"], "n_citations": 12, "n_key_citations": 1, "score": 0}, {"corpus_id": 18192782, "title": "Variations of the Power Dissipation in Adiabatic Logic Gates", "abstract": "The yield of adiabatic circuits strongly depends on the effects of parameter variations on the power dissipation. The dispersion of the threshold voltage has the most important impact on the yield. Different effects on the energy consumption due to interdie and intra die variations of the threshold voltage are presented. Three logic families, the Efficient Charge Recovery Logic (ECRL) the Positive Feedback Adiabatic Logic (PFAL) and the 2N 2N2P are compared with respect to energy saving and operating frequency range. Finally it is shown that power dissipation variations due to parameter variations are strongly dependent on the logic family.", "venue": "", "year": 2011.0, "author_names": ["Ettore Amirante", "Agnese Bargagli-Stoffi", "Jurgen Fischer", "Giuseppe Iannaccone", "Doris Schmitt-Landsiedel"], "n_citations": 40, "n_key_citations": 0, "score": 0}, {"corpus_id": 212579314, "title": "Design and Implementation of Adiabatic based Low Power Logic Circuits", "abstract": "demand necessitated the immediacy efforts in the field of development of low power VLSI design circuit. Though there are many approaches available that can be used to reduce the power/energy dissipation in conventional CMOS circuit which may include, reducing the supply voltage, or decreasing the node capacitances and minimizing the switching activities with efficient charge recovery logic. But all these reducing method have certain physical limitations, yet their limiting values are near but still they are in debatable. In this scenario many researchers are trying to adopt different optimization and energy conservation principle for VLSI circuit which led to the development of a new classical approach of switching logic knows as adiabatic switching logic. The basic principle in adiabatic logic circuits is to slow down the logic transition varying from logic 1 to logic 0 and vice versa, aiming in reducing the power dissipation. Many different approaches/ techniques are proposed for implementing adiabatic logic circuits among which, PFAL is one of those techniques which positively promise assisting in the power issues. This paper present the simulation of NAND and NOR logic gate by CMOS and PFAL logic moreover with the help of simulated result by OrCAD PSPICE tool, it can be shown that the NAND NOR used with adiabatic logic can reduce the power dissipation effectively than conventional CMOS circuit.", "venue": "", "year": 2015.0, "author_names": ["Amitab Saxena", "Deepti Shinghal", "Kshitij Shinghal"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 13902941, "title": "Design Of Ultra Low Power Vedic Multiplier using Adiabatic Logic", "abstract": "Low power circuit designs have been an important issue VLSI design areas. Multipliers play a major role in high performance systems. Vedic mathematics is world renowned for its algorithms that yield quicker results, be it for mental calculations or hardware design. The Urdhva Tiryagbhyam Vedic multiplier is one such multiplier which is effective both in terms of speed and power. Adiabatic logic style is said to be an attractive solution for low power electronic applications. 
By using Adiabatic techniques energy dissipation in PMOS network can be minimized and some of energy stored at load capacitance can be recycled instead of dissipated as heat. In analysis, two logic families, ECRL (Efficient Charge Recovery Logic) and PFAL (Positive Feedback Adiabatic Logic) are compared with EEAL(Energy Efficient Adiabatic Logic) for Vedic multiplier circuits. Tanner EDA tools are used for simulation. Keywords Low power, Adiabatic Logic, Vedic multiplier.", "venue": "", "year": 2015.0, "author_names": ["Shoba Mohan"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 15249542, "title": "Design and Implementation of Low Power 4:1 Multiplexer using Adiabatic Logic", "abstract": "The main and highly concerned issue in the low power VLSI design circuits is Power dissipation. The basic approaches that we used for reducing energy/power dissipation in conventional CMOS circuits include reducing the supply voltages, on decreasing node capacitances and minimize the switching activities with efficient charge recovery logic. The Adiabatic switching technique based upon the energy recovery principle is one of the techniques which is widely used to achieve low power VLSI design circuits. In the following paper the power dissipation of various adiabatic circuits is calculated and then simulated using T SPICE tool. From the results of calculation it is observed that among all of the techniques used for multiplexer implementation the efficient charge recovery logic (ECRL) multiplexer exhibits the minimum power dissipation. The adiabatic logic family has been proposed by implementing PMOS and NMOS transistors as pull down network and pull up network. With the help of calculated result, it has been shown that the multiplexer used with adiabatic logic can reduce the power dissipation than conventional CMOS circuit.", "venue": "", "year": 2013.0, "author_names": ["Jyoti Hooda", "Shweta Chawla"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 106545011, "title": "Adiabatic Logic Circuits", "abstract": "This chapter is concerned with adiabatic logic circuits. First, it introduces adiabatic charging which forms the basis of adiabatic circuits. The difference between adiabatic charging and conventional charging of a capacitor is highlighted. As amplification is a fundamental operation performed by electronic circuits to increase the current or voltage drive, adiabatic amplification is considered. The steps of realization of adiabatic logic gates starting with its static complementary metal oxide semiconductor (CMOS) counterpart are explained. The realization of pulsed power supply, which is the most fundamental building block of adiabatic circuits, is introduced. The realizations of both synchronous and asynchronous pulsed power supplies are explained. How stepwise charging and discharging can be used to minimize power dissipation is explained. Various partially adiabatic circuits such as efficient charge recovery logic (ECRL) positive feedback adiabatic logic (PFAL) and 2N 2N2P are introduced. Non adiabatic loss in adiabatic circuits highlighted. 
The impact of voltage scaling and threshold voltage scaling on the partially adiabatic circuits is discussed.", "venue": "", "year": 2015.0, "author_names": ["Ajit Pal"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 212550604, "title": "Low Power Design Of Asynchronous Fine Grain Power Gated Logic", "abstract": "In technology improvement power dissipation has one of the major factor well known short circuit dissipations, leakage dissipations and dynamic switching dissipations are major power dissipation sources of CMOS Chips. For reducing power dissipation in CMOS logic blocks various techniques were there among these techniques most effective new technique implemented with low power dissipation. That is \"low power design of Asynchronous fine grain power gated logic\"(LPAFPL) Low power AFPL is a new logic family. It consist of ECRL (efficient charge recovery logic gate) Pipeline system, C element and Partial Charge Reuse mechanism (PCR) Each pipeline stage is comprised efficient charge recovery logic gate gains power and it is became active when useful computations are there and does not requires power at idle stage. Thus gives negligible leakage power dissipation. PCR is the output node of the ECRL logic, To evaluate the CMOS logic circuit level. Then it automatically reduced the power dissipation in complete evaluation of CMOS circuits. KeywordsAsynchronous circuits, ECRL logic gate, C element, Power gating and Low power electronics. __________________________________________________**_________________________________________________", "venue": "", "year": 2015.0, "author_names": ["P BalaPadma"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 13248783, "title": "EFFECTIVE CONTROLLER IN OPTIMIZED ASYNCHRONOUS LOGIC", "abstract": "Asynchronous Fine grain power gated Logic (AFL) which includes Modified Efficient Charge Recovery Logic (M ECRL) gates to implement the logic function of the stage with a handshake controller which comprises of C element to handle the control signals with the neighboring stages and provides power to MECRL gate. AFL adopts an partial charge reuse (PCR) mechanism, part of the charge on the output nodes of MECRL gate which entering the discharge phase can be used to charge the output nodes of another M ECRL gate which is more enough to complete evaluate phase, thus reducing the power consumption. To design the efficient asynchronous styles with effective controller from their available CMOS topology. Moreover, study is to scrutinize the use of different controller implementations in a single design in order to generate hybrid and optimized designs. Index terms Asynchronous circuits, logic gates, lowpower electronics, power gating.", "venue": "", "year": 2015.0, "author_names": ["P N Sudha", "P Kavitha"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 17100942, "title": "Design and Analysis of Asynchronous 16*16 Adiabatic Vedic Multiplier Using ECRL and EEAL Logic", "abstract": "In this paper, we describe adiabatic Vedic multiplier using efficient charge recovery logic (ECRL) and energy efficient adiabatic logic (EEAL) In today's world low power hindrance have become a major important factor in modern VLSI design. Because of the increasingly draconian demands for battery space and weight in portable multimedia devices, energy productive and high yielding circuits are required, particularly in digital multipliers which are basic building blocks of digital signal processors. 
For speed and power criteria the Urdhva Tiryagbhayam Vedic multiplier is effective and adiabatic logic style is said to be an attractive solution for low power electronic applications. With adiabatic logic most of the energy is restored to the source instead of dissipating as heat. Proposed work focuses on the design of low power and area efficient adiabatic Vedic multiplier using TSMC0.18mm CMOS process technology in HSPICE G2012.06.", "venue": "", "year": 2015.0, "author_names": ["C Sreeja", "Nisha Yadav"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 35295455, "title": "Carry Select Adder Implementation using Asynchronous Fine Grain Power Gated Logic", "abstract": "This paper presents a low power logic family, called asynchronous fine grain power gated logic (AFPL) Each pipeline stage is comprised of the logic function called efficient charge recovery logic (ECRL) gatesand a handshake controller. ECRL gates have negligible leakage power dissipation. By incorporatingpartial charge reuse (PCR) mechanism the energy dissipation required to complete the evaluation of an ECRL gate can be reduced Moreover, AFPL PCR adopts a C element, in its handshake controllers. To mitigate the hardware overhead of the AFPL circuit, circuit simplificationtechniques have been developed.", "venue": "", "year": 2015.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 1}]} -{"query": "x ray diffraction review", "session_id": 3006594263196188, "user_id": 6742740819372626, "candidates": [{"corpus_id": 209771092, "title": "A review of basic crystallography and x ray diffraction applications", "abstract": "Although various researched works have been carried out in x ray crystallography and its applications, but there are still limited number of researches on crystallographic theories and industrial application of x ray diffraction. The present study reviewed and provided detailed discussion on atomic arrangement of single crystals, mathematical concept of Bravais, reciprocal lattice, and application of x ray diffraction. Determination of phase identification, crystal structure, dislocation density, crystallographic orientation, and gran size using x ray diffraction peak intensity, peak position, and peak width were discussed. The detailed review of crystallographic theories and x ray diffraction application would benefit majorly engineers and specialists in chemical, mining, iron, and steel industries.", "venue": "The International Journal of Advanced Manufacturing Technology", "year": 2019.0, "author_names": ["Enefola S Ameh"], "n_citations": 13, "n_key_citations": 0, "score": 1}, {"corpus_id": 98205079, "title": "A LITERATURE REVIEW OF CYCLODEXTRIN INCLUSION COMPLEXES CHARACTERIZATION PART II: X RAY DIFFRACTION, INFRARED SPECTROSCOPY AND NUCLEAR MAGNETIC RESONANCE", "abstract": "Cyclodextrins are cyclic oligosaccharides widely used to form inclusion complexes with poor water soluble drugs, with the aim to improve their solubility. The characterization of these complexes requires several analytical techniques. In a previous review part I, the analytical techniques used to characterize drug cyclodextrin complex phase solubility diagram, dissolution and scanning electron microscopy were described. 
The aim of this review is to detail other analytical tools also used in this characterization as X ray diffraction, infrared spectroscopy and nuclear magnetic resonance.", "venue": "", "year": 2012.0, "author_names": ["Andrea Ikeda Takahashi", "Francisco J Veiga", "Humberto Gomes Ferraz"], "n_citations": 19, "n_key_citations": 1, "score": 0}, {"corpus_id": 52091129, "title": "Applications of Powder X Ray Diffraction in Small Molecule Pharmaceuticals: Achievements and Aspirations.", "abstract": "Since the discovery of X ray diffraction and its potential to elucidate crystal symmetry, powder X ray diffraction has found diverse applications in the field of pharmaceutical sciences. This review summarizes significant achievements of the technique during various stages of dosage form development. Improved understanding of the principle involved and development of automated hardware and reliable software have led to increased instrumental sensitivity and improved data analysis. These advances continue to expand the applications of powder X ray diffraction to emerging research fields such as amorphous systems, mechanistic understanding of phase transformations, and \"Quality by Design\" in formulation development.", "venue": "Journal of pharmaceutical sciences", "year": 2018.0, "author_names": ["Naveen K Thakral", "Roger L Zanon", "Ron C Kelly", "Seema Thakral"], "n_citations": 25, "n_key_citations": 0, "score": 0}, {"corpus_id": 94781296, "title": "Single crystal X ray diffraction at extreme conditions: a review", "abstract": "The latest developments in single crystal X ray diffraction at high pressure and high temperature are described. Advances in diamond anvil cell designs and X ray sources allow collecting single crystal diffraction data at pressures up and above 100 GPa and at temperatures above 1000degC. The technical details of single crystal X ray diffraction at high pressure such as the choice of pressure transmitting media or the different methods for measuring pressures and temperatures have been reviewed. Examples of structural solution of complex structures and new materials, structural refinements of high pressure polymorphs as well as accurate compressibility data are described in order to outline the several advantages of using single crystals instead of powdered samples in high pressure diffraction experiments.", "venue": "", "year": 2013.0, "author_names": ["Tiziana Boffa Ballaran", "Alexander Kurnosov", "Dmytro M Trots"], "n_citations": 27, "n_key_citations": 1, "score": 0}, {"corpus_id": 72433233, "title": "A Review of High Energy X Ray Diffraction from Glasses and Liquids", "abstract": "This paper summarizes the scientific trends associated with the rapid development of the technique of high energy X ray diffraction over the past decade pertaining to the field of liquids, glasses, and amorphous materials. The measurement of high quality X ray structure factors out to large momentum transfers leads to high resolution pair distribution functions which can be directly compared to theory or combined with data from other experimental techniques. The advantages of combining highly penetrating radiation with low angle scattering are outlined together with the data analysis procedure and formalism. Also included are advances in high energy synchrotron beamline instrumentation, sample environment equipment, and an overview of the role of simulation and modeling for interpreting data from disordered materials. 
Several examples of recent trends in glass and liquid research are described. Finally, directions for future research are considered within the context of past and current developments in the field.", "venue": "", "year": 2012.0, "author_names": ["Chris J Benmore"], "n_citations": 47, "n_key_citations": 0, "score": 0}, {"corpus_id": 55029209, "title": "STRAIN INDUCED CRYSTALLIZATION OF NATURAL RUBBER: A REVIEW OF X RAY DIFFRACTION INVESTIGATIONS", "abstract": "Abstract Strain induced crystallization of natural rubber was discovered in 1925 by the means of x ray diffraction and has been widely investigated by this technique until today. The studies devoted to the structure of the crystalline phase of natural rubber are first reviewed. This structure is strongly anisotropic and can be related to the exceptionally good strength and fatigue properties of this material. The relationships between strain induced crystallization of natural rubber and its mechanical response, during static or tension retraction tests, are also reviewed and discussed; in particular, the hysteresis of the stress strain curve is mainly explained by strain induced crystallization. The kinetics of crystallization under both static and cyclic deformation is also discussed, as well as the influence of different factors, depending either on material composition (crosslink density, carbon black fillers) or on external parameters (temperature, strain rate.", "venue": "", "year": 2011.0, "author_names": ["Bertrand Huneau"], "n_citations": 149, "n_key_citations": 4, "score": 0}, {"corpus_id": 139657514, "title": "X Ray Diffraction under Extreme Conditions at the Advanced Light Source", "abstract": "The more than a century old technique of X ray diffraction in either angle or energy dispersive mode has been used to probe materials' microstructure in a number of ways, including phase identification, stress measurements, structure solutions, and the determination of physical properties such as compressibility and phase transition boundaries. The study of high pressure and high temperature materials has strongly benefitted from this technique when combined with the high brilliance source provided by third generation synchrotron facilities, such as the Advanced Light Source (ALS) (Berkeley, CA, USA) Here we present a brief review of recent work at this facility in the field of X ray diffraction under extreme conditions, including an overview of diamond anvil cells, X ray diffraction, and a summary of three beamline capabilities conducting X ray diffraction high pressure research in the diamond anvil cell.", "venue": "", "year": 2018.0, "author_names": ["Camelia V Stan", "Christine M Beavers", "Martin Kunz", "Nobumichi Tamura"], "n_citations": 14, "n_key_citations": 2, "score": 0}, {"corpus_id": 54655462, "title": "Determination of nanoparticulate magnetite stoichiometry by Mossbauer spectroscopy, acidic dissolution, and powder X ray diffraction: A critical review", "abstract": "Abstract A solid solution can exist of magnetite (Fe3O4) and maghemite (g Fe2O3) which is commonly referred to as nonstoichiometric or partially oxidized magnetite. The degree of stoichiometry in magnetite is quantitatively measured by determining the ratio of Fe2+ to Fe3+ Magnetite stoichiometry (x Fe2+/Fe3+ strongly influences several physical properties, including the coercitivity, sorption capacity, reduction potential, and crystalline structure. 
Magnetite stoichiometry has been extensively studied, although very little work exists examining the stoichiometry of nanoparticulate samples <100 nm) when the stoichiometry was measured for nanoparticulate samples, it was not validated with a secondary technique. Here, we review the three most common techniques to determine magnetite stoichiometry: (1) acidic dissolution; (2) Mossbauer spectroscopy; and (3) powder X ray diffraction (pXRD) specifically with nanoparticulate samples in mind. Eight samples of nonstoichiometric magnetite were synthesized with x ranging from 0 to 0.50 and with the particle size kept as similar as possible (BET specific surface area 63 7 m2/g; particle size 20 nm) Our measurements indicate excellent agreement between stoichiometries determined from Mossbauer spectra and by acidic dissolution, suggesting that Mossbauer spectroscopy may be a useful means for estimating magnetite stoichiometry in nanoparticulate, multi phases samples, such as those found in the environment. A significant linear correlation was also observed between the unit cell length (a) of magnetite measured by pXRD and magnetite stoichiometry, indicating that pXRD may also be useful for determining particle stoichiometry, especially for mixed phased samples", "venue": "", "year": 2010.0, "author_names": ["Christopher A Gorski", "Michelle M Scherer"], "n_citations": 168, "n_key_citations": 12, "score": 0}, {"corpus_id": 10607504, "title": "Introduction to Advanced X ray Diffraction Techniques for Polymeric Thin Films", "abstract": "X ray diffraction has been a standard technique for investigating structural properties of materials. However, most common applications in the organic materials community have been restricted to either chemical identification or qualitative strain analysis. Moreover, its use for polymeric thin films has been challenging because of the low structure factor of carbon and the thin film nature of the sample. Here, we provide a short review of advanced X ray diffraction (XRD) techniques suitable for polymeric thin films, including the type of analysis that can be done and measurement geometries that would compensate low signals due to low carbon structure factor and the thin film nature of the sample. We will also briefly cover the kh pole figure for texture analysis of ultra thin film that has recently become commonly used. A brief review of XRD theory is also presented.", "venue": "", "year": 2016.0, "author_names": ["Nicodemus Edwin Widjonarko"], "n_citations": 43, "n_key_citations": 1, "score": 0}, {"corpus_id": 94566614, "title": "X ray diffraction and X ray absorption spectroscopic analyses for intercalative nanohybrids with low crystallinity", "abstract": "Intercalation reactions can be achieved through ion exchange, pillaring, and exfoliation reassembling reactions to explore new intercalation compounds with desired electronic, electrochemical, and optical functions. Such intercalative nanohybrids with lamellar or porous structure have received much attention due to their potential applications such as catalysts, electrodes, selective adsorbents, stabilizing agents, and even drug delivery systems. 
In this review, we briefly introduce and highlight X ray diffraction and X ray absorption spectroscopy studies on the intercalative nanohybrids to understand their intracrystalline and electronic structures along with physicochemical functions.", "venue": "", "year": 2016.0, "author_names": ["Dae-hwan Park", "Jae-Hun Yang", "Ajayan Vinu", "Ahmed A Elzatahry", "Jin-Ho Choy"], "n_citations": 24, "n_key_citations": 0, "score": 0}]} -{"query": "Up 65 CM 2983", "session_id": 4278228753917186, "user_id": 2423746855600506, "candidates": [{"corpus_id": 199548517, "title": "Use of 65 cm large caliber Dryseal sheaths to facilitate delivery of the Edwards SAPIEN valve to dysfunctional right ventricular outflow tracts", "abstract": "The Edwards SAPIEN valve and its delivery system may complicate transit through the right heart during transcatheter pulmonary valve replacement (tPVR) We report our early experience using a large diameter, 65 cm delivery sheath to facilitate delivery of the SAPIEN valve to the right ventricular outflow tract (RVOT)", "venue": "Catheterization and cardiovascular interventions official journal of the Society for Cardiac Angiography Interventions", "year": 2019.0, "author_names": ["Damien Kenny", "Gareth J Morgan", "Matthew Murphy", "Khalid AlAlwi", "Luca Giugno", "Jenny E Zablah", "Mario Carminati", "Kevin P Walsh"], "n_citations": 12, "n_key_citations": 0, "score": 0}, {"corpus_id": 137946293, "title": "Characterization of discharge uniformity and performance via stimulated beam extraction of a 65 cm annular ion engine", "abstract": "The annular ion engine concept consists of a cylindrical ion thruster with a centrally located stalk that provides increased anode area. The increased electron collection area allows for increased discharge plasma current, and thus high power operation. The central stalk also provides support for the ion optics, allowing for larger beam area and scale up potential. Discharge performance of a 42 cm annular ion engine has previously been studied. Here, the discharge uniformity of a 65 cm annular ion engine during simulated beam extraction is assessed using three diagnostics: Faraday probes, Langmuir probes, and a fast camera. The percent uniformity was found to vary from 89.8% to 97.4% for discharge powers under 1.5 kW. The data presented herein demonstrates the feasibility of scalability of the annular ion engine. PhD candidate, Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Blvd. AIAA student member Professor, Nuclear Engineering and Radiological Sciences, 2355 Bonisteel Blvd. AIAA member Senior Technologist, Power and In Space propulsion Division, 2100 Brookpark Rd./MS 301 3, AIAA senior member Research Engineer, Propulsion and Propellants Branch, 21000 Brookpark Rd./MS 301 3, AIAA member Member of Technical Sta| Space Materials Laboratory, P.O. Box 92957 M2 341, AIAA member Senior Scientist, Space Materials Laboratory, P.O. 
Box 92957 M2 341, AIAA senior member", "venue": "", "year": 2015.0, "author_names": ["Neil A Arthur", "John E Foster", "Michael J Patterson", "Robert E Thomas", "Jason A Young", "Mark W Crofton"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 3517142, "title": "Potential benefits and harms of offering ultrasound surveillance to men aged 65 years and older with a subaneurysmal (2.5 2.9 cm) infrarenal aorta", "abstract": "Objective: The objective of this review was to perform a rapid evidence summary to determine the prevalence of subaneurysmal aortic aneurysms, growth rates, and risk factors that modulate growth in average risk men aged 65 years and older. Secondary objectives were to evaluate benefits and harms of lifelong ultrasound (US) surveillance and treatment outcomes for any large aneurysms that develop in the screened population. Methods: We searched multiple databases (eg, Ovid MEDLINE, Embase Classic and Embase, and the Cochrane Library) on February 16, 2016. Using a liberal accelerated method, two reviewers screened titles and abstracts for relevance and subsequently screened full text studies. General study characteristics (eg, country, study design, number of participants) and data (eg, number of men with subaneurysmal aortas, quality of life [QoL] mortality) were extracted. One reviewer performed data extraction and risk of bias assessments, and a second reviewer verified 100% of studies. Any disagreements were resolved by consensus. Results: The search identified 37 relevant studies ranging in size from 3 to 52,690 participants. Prevalence of subaneurysmal aortas ranged from 1.14% to 8.53% and 55% to 88% of these men progressed to a 3.0 cm aneurysm by 5 years of follow up. Risk factors for growth included the infrarenal aortic diameter at age 65 years, having a subaneurysmal aorta at age 65 years, and current smoking. The 36 Item Short Form Health Survey was the most commonly used tool to measure QoL, and QoL was typically lower in people with abdominal aortic aneurysm. Anxiety and depression levels did not differ significantly between comparison groups in any studies. Four studies reported on the number of men whose aorta was subaneurysmal on initial US who went on to surgery. Overall, 10% (57/547) of men initially measuring in the subaneurysmal range progressed to abdominal aortic aneurysm >5.4 cm and received elective surgery; 1% (6/547) received emergency surgery because of a ruptured aorta. Among those who did, mortality rates were much lower for elective (9.5% vs emergency surgery (50% Risk of bias was usually low for studies measuring prevalence and moderate and high for studies measuring psychological harms of screening and harms and benefits of surgery. Overall, using the Grading of Recommendations Assessment, Development, and Evaluation framework as guidance, the quality of the evidence was generally very low. 
Conclusions: Because of the limited evidence and the low quality of the existing evidence, it is not possible to determine confidently whether men with abdominal aortas measuring 2.5 to 2.9 cm should be observed in a lifelong US surveillance program.", "venue": "Journal of vascular surgery", "year": 2018.0, "author_names": ["Candyce Hamel", "Mona Ghannad", "Matthew D F McInnes", "John K Marshall", "Jonothan J Earnshaw", "Roxanne E Ward", "Becky Skidmore", "Chantelle Garritty"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 4785913, "title": "Vitamin D status and associated factors among Portuguese older adults: results from the Nutrition UP 65 cross sectional study", "abstract": "Objectives To evaluate vitamin D status and its associated factors in Portuguese older adults from the Nutrition UP 65 study. Design Cross sectional observational study. Participants and methods Nationwide cluster sample of 1500 Portuguese subjects =65 years old. Participants were classified, according to US Institute of Medicine cut offs, as presenting normal 25 hydroxyvitamin D (25(OH)D) levels =50.0 nmol/L) at risk of inadequacy (30.0 49.9 nmol/L) or at risk of deficiency <30 nmol/L) The association between individuals' characteristics and 25(OH)D levels was analysed through multinomial logistic regression analysis. Results Median 25(OH)D serum value was 36.1 (interquartile range (IQR) 35.5) nmol/L. According to the used cut offs, 39.6% of participants were at risk of 25(OH)D deficiency and 29.4% were at risk of 25(OH)D inadequacy. In the adjusted model, having higher skin pigmentation and waist circumference >88 cm for women and >102 cm for men were associated with higher odds of 25(OH)D deficiency. Otherwise, living in Lisbon Metropolitan Area and in Madeira, 1 12 years of schooling, being married or in a common law marriage, monthly income =EUR1000, alcohol consumption, medication or supplements with vitamin D supplement use, and blood samples collected in spring or summer were associated with lower odds of being at risk of 25(OH)D deficiency. In this model, season of blood sample collection, medication or supplements use, and waist circumference were the factors more strongly associated with 25(OH)D levels. Conclusions Despite using the conservative Institute of Medicine cut offs, over two thirds of these study participants presented inadequate 25(OH)D levels, warranting the implementation of corrective measures. Potentially modifiable factors were strongly associated with 25(OH)D levels in this study. These findings may be particularly relevant to the development of public health policies in southern European countries.", "venue": "BMJ Open", "year": 2017.0, "author_names": ["Alejandro Santos", "Teresa Freitas Amaral", "Rita S Guerra", "Ana S Sousa", "Luisa Alvares", "Pedro Moreira", "Patricia Padrao", "Claudia Afonso", "Nuno Borges"], "n_citations": 18, "n_key_citations": 1, "score": 0}, {"corpus_id": 5469357, "title": "SMARCB1/INI1 Loss in Epithelioid Schwannoma: A Clinicopathologic and Immunohistochemical Study of 65 Cases", "abstract": "The epithelioid variant of schwannoma is rare, and loss of SMARCB1/INI1 expression has been observed in a subset of cases. 
Our aim was to further define the clinicopathologic features and to evaluate SMARCB1/INI1 deficiency in a large cohort of 65 epithelioid schwannomas diagnosed between 2002 and 2015, which consisted of 32 men and 33 women with median age at diagnosis of 45 years (range, 13 to 75 y) Most tumors arose in the extremities (upper, 20, lower, 15) and trunk (17) 9 were visceral (8 gastrointestinal) Most somatic tumors were in dermis/subcutis (53/54) and encapsulated (53/54) with an epithelial membrane antigen positive perineurial capsule in 46 cases; visceral tumors were unencapsulated. No patients were reported to have any neurocristopathy. Three patients had multiple lesions (2 each) Tumor size range was 0.4 to 22.7 cm (median, 1.2 cm) Tumors showed multilobulated growth of uniform epithelioid cells in sheets and nests or singly dispersed within a frequently myxoid or hyalinized stroma. Tumor cells had round vesicular nuclei and abundant palely eosinophilic cytoplasm, usually lacking significant pleomorphism or hyperchromasia. Some tumors showed foci resembling conventional schwannoma (spindled morphology, 29; Antoni B foci or Verocay bodies, 8; hyalinized thick walled vessels, 16) Mitoses ranged from 0 to 9 per 10 high power fields (median count, 1) No tumor had necrosis. Twenty three cases showed degenerative nuclear atypia. Focally striking cytologic atypia was present in 7 tumors, 3 of which showed transformation to epithelioid malignant peripheral nerve sheath tumor. All tumors showed diffuse positivity for S 100 protein and consistent positivity for SOX10 (50/50) while INI1 expression was lost in 24 of 57. Other positive immunohistochemical results were: glial fibrillary acidic protein (15/37) and focal keratin (2/40) epithelial membrane antigen (0/53) and melanocytic markers were negative (Mart 1 0/29; HMB 45 0/23) Most patients underwent local excision (13 complete; 47 marginal/positive margins) Follow up data available for 31 patients (range, 1 to 108 mo; median, 37) indicated that no patient had developed metastatic disease, including 3 cases with cytologic atypia, one of which showed malignant transformation. One tumor without atypia developed local recurrence 48 months after marginal excision; all other patients were alive with no evidence of disease. Epithelioid schwannoma most commonly occurs as a superficial tumor on the extremities or trunk in adults. Loss of SMARCB1/INI1 expression is seen in 42% of tumors. Tumors follow a generally benign clinical course, although recurrence and malignant transformation are infrequent. Some tumors are characterized by notable cytologic atypia, the significance of which is uncertain but which may indicate a morphologic continuum with low grade epithelioid malignant peripheral nerve sheath tumor.", "venue": "The American journal of surgical pathology", "year": 2017.0, "author_names": ["Vickie Y Jo", "Christopher D M Fletcher"], "n_citations": 38, "n_key_citations": 3, "score": 0}, {"corpus_id": 7306150, "title": "Breast conserving surgery with or without irradiation in women aged 65 years or older with early breast cancer (PRIME II) a randomised controlled trial.", "abstract": "BACKGROUND For most older women with early breast cancer, standard treatment after breast conserving surgery is adjuvant whole breast radiotherapy and adjuvant endocrine treatment. We aimed to assess the effect omission of whole breast radiotherapy would have on local control in older women at low risk of local recurrence at 5 years. 
METHODS Between April 16, 2003, and Dec 22, 2009, 1326 women aged 65 years or older with early breast cancer judged low risk (ie, hormone receptor positive, axillary node negative, T1 T2 up to 3 cm at the longest dimension, and clear margins; grade 3 tumour histology or lymphovascular invasion, but not both, were permitted) who had had breast conserving surgery and were receiving adjuvant endocrine treatment, were recruited into a phase 3 randomised controlled trial at 76 centres in four countries. Eligible patients were randomly assigned to either whole breast radiotherapy (40 50 Gy in 15 25 fractions) or no radiotherapy by computer generated permuted block randomisation, stratified by centre, with a block size of four. The primary endpoint was ipsilateral breast tumour recurrence. Follow up continues and will end at the 10 year anniversary of the last randomised patient. Analyses were done by intention to treat. The trial is registered on ISRCTN.com, number ISRCTN95889329. FINDINGS 658 women who had undergone breast conserving surgery and who were receiving adjuvant endocrine treatment were randomly assigned to receive whole breast irradiation and 668 were allocated to no further treatment. After median follow up of 5 years (IQR 3*84 6*05) ipsilateral breast tumour recurrence was 1*3% (95% CI 0*2 2*3; n=5) in women assigned to whole breast radiotherapy and 4*1% (2*4 5*7; n=26) in those assigned no radiotherapy (p=0*0002) Compared with women allocated to whole breast radiotherapy, the univariate hazard ratio for ipsilateral breast tumour recurrence in women assigned to no radiotherapy was 5*19 (95% CI 1*99 13*52; p=0*0007) No differences in regional recurrence, distant metastases, contralateral breast cancers, or new breast cancers were noted between groups. 5 year overall survival was 93*9% (95% CI 91*8 96*0) in both groups (p=0*34) 89 women died; eight of 49 patients allocated to no radiotherapy and four of 40 assigned to radiotherapy died from breast cancer. INTERPRETATION Postoperative whole breast radiotherapy after breast conserving surgery and adjuvant endocrine treatment resulted in a significant but modest reduction in local recurrence for women aged 65 years or older with early breast cancer 5 years after randomisation. However, the 5 year rate of ipsilateral breast tumour recurrence is probably low enough for omission of radiotherapy to be considered for some patients. FUNDING Chief Scientist Office (Scottish Government) Breast Cancer Institute (Western General Hospital, Edinburgh)", "venue": "The Lancet. Oncology", "year": 2015.0, "author_names": ["Ian H Kunkler", "Linda Jane Williams", "Wilma J L Jack", "David Cameron", "J Michael Dixon"], "n_citations": 458, "n_key_citations": 7, "score": 1}, {"corpus_id": 31561392, "title": "THE MASS RADIUS RELATION FOR 65 EXOPLANETS SMALLER THAN 4 EARTH RADII", "abstract": "We study the masses and radii of 65 exoplanets smaller than 4 R with orbital periods shorter than 100 days. We calculate the weighted mean densities of planets in bins of 0.5 R and identify a density maximum of 7.6 g cm 3 at 1.4 R On average, planets with radii up to R P 1.5 R increase in density with increasing radius. Above 1.5 R the average planet density rapidly decreases with increasing radius, indicating that these planets have a large fraction of volatiles by volume overlying a rocky core. Including the solar system terrestrial planets with the exoplanets below 1.5 R we find rP 2.43 3.39(R P/R g cm 3 for R P 1.5 R which is consistent with rocky compositions. 
For 1.5 R P/R 4, we find M P/M 2.69(R P/R )0.93. The rms of planet masses to the fit between 1.5 and 4 R is 4.3 M with reduced kh2 6.2. The large scatter indicates a diversity in planet composition at a given radius. The compositional diversity can be due to planets of a given volume (as determined by their large H/He envelopes) containing rocky cores of different masses or compositions.", "venue": "", "year": 2014.0, "author_names": ["Lauren M Weiss", "Geoffrey W Marcy"], "n_citations": 423, "n_key_citations": 123, "score": 0}, {"corpus_id": 207636208, "title": "Is atlantoaxial instability the cause of Chiari malformation? Outcome analysis of 65 patients treated by atlantoaxial fixation.", "abstract": "OBJECT Understanding that atlantoaxial instability is the cause of Chiari malformation (CM) the author treated 65 patients using atlantoaxial stabilization. The results are analyzed. METHODS Cases of CM treated using atlantoaxial fixation during the period from January 2010 to November 2013 were reviewed and analyzed. Surgery was aimed at segmental arthrodesis. RESULTS The author treated 65 patients with CM in the defined study period. Fifty five patients had associated syringomyelia. Forty six patients had associated basilar invagination. Thirty seven patients had both basilar invagination and syringomyelia. Three patients had been treated earlier using foramen magnum decompression and duraplasty. According to the extent of their functional capabilities, patients were divided into 5 clinical grades. On the basis of the type of facetal alignment and atlantoaxial instability, the patients were divided into 3 groups. Type I dislocation (17 patients) was anterior atlantoaxial instability wherein the facet of the atlas was dislocated anterior to the facet of the axis. Type II dislocation (31 patients) was posterior atlantoaxial instability wherein the facet of the atlas was dislocated posterior to the facet of the axis. Type III dislocation (17 patients) was the absence of demonstrable facetal malalignment and was labeled as \"central\" atlantoaxial dislocation. In 18 patients, dynamic images showed vertical, mobile and at least partially reducible atlantoaxial dislocation. All patients were treated with atlantoaxial plate and screw fixation using techniques described in 1994 and 2004. Foramen magnum decompression or syrinx manipulation was not performed in any patient. Occipital bone and subaxial spinal elements were not included in the fixation construct. One patient died, and death occurred in the immediate postoperative phase and was related to a vertebral artery injury incurred during the operation. One patient had persistent symptoms. In the rest of the patients there was gratifying clinical improvement. More remarkably, in 7 patients, the symptoms of lower cranial nerve paresis improved. No patient worsened in their neurological function after surgery. Reductions in the size of the syrinx and regression of the CM were observed in 6 of 11 cases in which postoperative MRI was possible. During the follow up period, there was no delayed worsening of neurological function or symptoms in any patient. Sixty three patients improved after surgery, and the improvement was sustained during the average follow up period of 18 months. CONCLUSIONS On the basis of outcomes in this study, it appears that the pathogenesis of CM with or without associated basilar invagination and/or syringomyelia is primarily related to atlantoaxial instability. 
The data suggest that the surgical treatment in these cases should be directed toward atlantoaxial stabilization and segmental arthrodesis. Except in cases in which there is assimilation of the atlas, inclusion of the occipital bone is neither indicated nor provides optimum stability. Foramen magnum decompression is not necessary and may be counter effective in the long run.", "venue": "Journal of neurosurgery. Spine", "year": 2015.0, "author_names": ["Atul Goel"], "n_citations": 134, "n_key_citations": 1, "score": 0}, {"corpus_id": 29865676, "title": "Surgical safety and oncological completeness of robotic thyroidectomy for thyroid carcinoma larger than 2 cm", "abstract": "BackgroundThe safety of robotic thyroidectomy (RT) for small sized thyroid carcinomas has been well established. The surgical outcomes of bilateral axillo breast approach RT for thyroid carcinomas larger than 2 cm were evaluated and compared with those of open thyroidectomy (OT).MethodsThe medical records of patients who underwent total thyroidectomy or hemithyroidectomy followed by completion thyroidectomy for differentiated thyroid carcinomas measuring 2 4 cm were retrospectively reviewed.ResultsThe study included 86 patients who underwent RT (n 21) or OT (n 65) with mean ages of 30.8 and 51.6 years, respectively. The mean tumor size was 2.8 cm in both groups. There were no significant differences between the RT and OT groups in vocal cord palsy rate (transient, 19.0 vs. 9.2 permanent, 0 vs. 1.5 postoperative hypoparathyroidism rate (transient, 19.0 vs. 33.8 permanent, 4.8 vs. 1.5 and the number of retrieved central lymph nodes in papillary thyroid carcinoma patients (6.4 3.5 vs. 6.1 3.9, respectively) The proportion of the patients with serum stimulated thyroglobulin level of <1.0 ng/ml at the initial radioactive iodine treatment was 64.7 (11/17) for RT group and 66.0 (35/53) for OT group (p 0.920) There were three patients (1 RT and 2 OT) who had a biochemical incomplete response, and there was no case of anatomical recurrence or mortality during the median follow up period of 40.2 months.ConclusionRT is a safe and oncologically sound treatment option for differentiated thyroid carcinomas measuring 2 4 cm in a selected group of patients. The role of RT should be evaluated in correlation with technological advances and increased experience.", "venue": "Surgical Endoscopy", "year": 2016.0, "author_names": ["Young Jun Chai", "Hyunsuk Peter Suh", "Jung-Woo Woo", "Hyeong Won Yu", "Ra-Yeong Song", "Hyungju Kwon", "Kyu Eun Lee"], "n_citations": 31, "n_key_citations": 2, "score": 0}, {"corpus_id": 59047285, "title": "A 65 nm pixel readout ASIC with quick transverse momentum discrimination capabilities for the CMS Tracker at HL LHC", "abstract": "A readout ASIC for the hybrid pixel detector with the capability of performing quick recognition of particles with high transverse momentum has been designed for the requirements of the CMS Outer Tracker at the High Luminosity LHC. The particle momentum dicrimination capability represents the main challenge for this design together with the low power requirement: the constraint of low mass for the new tracker dictates a total power budget of less than 100 mW/cm(2) The choice of a 65 nm CMOS technology has made it possible to satisfy this power requirement despite the fairly large amount of logic necessary to perform the momentum discrimination and the continuous operation at 40 MHz. 
Several techniques for low power have been used to implement this logic that performs cluster reduction, position offset correction and coordinate encoding. A prototype chip including a large part of the final functionality and the full front end has been realized and comprises a matrix of 16 by 3 rectangular pixels of 100 mu m x 1446 mu m, providing 7.65 mm(2) of segmented active area. Measurements of the analog front end characteristics closely match the simulations and confirm the consumption of <30 mu A per pixel. Front end characterization and irradiation results up to 150 MRad are also reported.", "venue": "", "year": 2016.0, "author_names": ["Davide Ceresa", "Jan Kaplon", "Rui Francisco", "Alessandro Caratelli", "Kostas Kloukinas", "A Marchioro"], "n_citations": 13, "n_key_citations": 1, "score": 0}]} -{"query": "motivation in english language teaching", "session_id": 8136233520009742, "user_id": 3564088097807396, "candidates": [{"corpus_id": 148466489, "title": "Technology and Motivation in English Language Teaching and Learning", "abstract": "Advances in technology have made it easier for teachers and learners of English to access a wide range of resources in terms of authentic input and communication with native and non native speakers of English around the world. From the early days of computer assisted language learning (CALL) there has been discussion of how technologies can play a role in motivating learners in learning a language (e.g. Warschauer, 1996) and as technologies have become more sophisticated, the growing range of uses of technology in and out of the classroom increases the potential for enhanced motivation. My own teaching context is a large private university located in central Tokyo, where one might expect that technological advances are far more than those of many other countries around the world, including Europe and the United States. In my experience in discussions with colleagues and attending international conferences, there are more commonalities than differences in problems that are encountered regarding implementing technology for learning purposes and, for this reason, I have kept this discussion at a more general level, as the implications are likely to be of relevance to teachers regardless of where they are based.", "venue": "", "year": 2013.0, "author_names": ["Glenn Stockwell"], "n_citations": 70, "n_key_citations": 9, "score": 1}, {"corpus_id": 151284946, "title": "Place of Motivation in English Language Teaching", "abstract": "This paper is intended to deal with place of motivation in English language teaching. Motivation as one of topics of second and foreign language acquisition has always influenced on learning and teaching of English language. Language can be defined as the bond that links people together and binds them to their culture. The study of language has always played a crucial role in the history man. Man has tried to know his language, know how speech sounds relate to meaning when he/she is speaking or writing. Today, English language is used as one of the major important of languages among people over the world. Learning English language has been the main subject in schools, colleges and universities in the world. English language is used as foreign or second and even lingua franca among people in this world.A EnglishA A language is used as target language among learners in their schools, colleges, and universities. 
It is interesting to see how an English language learner learns English through motivation.", "venue": "", "year": 2018.0, "author_names": ["Ali Akbar Khansir", "Farhad Pakdel"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 149039543, "title": "The Impact of the School Leaving Certificate Examination on English Language Teaching and Student Motivation to Learn English", "abstract": "The Impact of the School Leaving Certificate Examination on English Language Teaching and Student Motivation to Learn English Book Section How to cite: Dawadi, Saraswati (2018) The Impact of the School Leaving Certificate Examination on English Language Teaching and Student Motivation to Learn English. In: Hayes, David ed. English Language Teaching in Nepal: Research, Reflection and Practice. British Council, pp. 133 164.", "venue": "", "year": 2018.0, "author_names": ["Saraswati Dawadi"], "n_citations": 5, "n_key_citations": 3, "score": 0}, {"corpus_id": 151861458, "title": "Using contextual factorsto promote student motivation in English language teaching", "abstract": "", "venue": "", "year": 2014.0, "author_names": ["Anita Muho", "Leonard Danglli"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 214249605, "title": "The Impact of Digital Storytelling on Academic Achievement of Sixth Grade Students in English Language and Their Motivation towards it in Jordan", "abstract": "This study aims to identify the impact of digital storytelling (DST) on academic achievement of sixth grade students in English language and their motivation towards it in Jordan.DST plays an important role in the maintenance and progress of English language.The research uses a quasi experimental method. The sample of the study consists of (50) male students were purposefully chosen from public schools at Jerash governorate. They were distributed into two groups: Experimental group which has (25) students learning English language through DST, and control group which has (25) students. They are taught the same content in traditional way. The findings of the study showed that there are statistically significant differences in students' academic achievement and students' motivation towards learning English language due to teaching method in favor of experimental group which DST strategy is the main method used in English language. Statistically significant differences were also found in students' motivation towards learning English language due to teaching method in favor of experimental group. In the light of the results, some recommendations were set like integrating DST in the teaching and learning English language.", "venue": "", "year": 2020.0, "author_names": ["Yousef Aljaraideh"], "n_citations": 3, "n_key_citations": 1, "score": 0}, {"corpus_id": 146859452, "title": "Intrinsic Motivation in English Language Teaching Ingilizce Ogretiminde Icsel Gudulenme", "abstract": "Human beings are bom with an intense need to explore, interact with, and make sense of their environment. However, with formal schooling, they seem to lose their enthusiasm and passion for learning. This fact implies that the school and its elements such as teachers, subjects, materials, have an important responsibility for increasing student motivation to learn. 
The aim of this article is to stress the importance of having an intrinsic motivation in English language teaching (ELT) Hence, this article attempts to define motivation, discuss its sources, display its criteria, present ways of increasing motivation and finally suggest some activities.", "venue": "", "year": 2004.0, "author_names": ["Aydan Ersoz"], "n_citations": 5, "n_key_citations": 1, "score": 0}, {"corpus_id": 196120162, "title": "Blended Learning: Improving Student's Motivation in English Teaching Learning Process", "abstract": "This research aims at revealing: the blended learning; the advantages of blended learning in the 21st century; the application of blended learning in the classroom. It is kinds of qualitative research which are aimed at revealing the blended learning for students' motivation in studying the English language. There is still lack of research about the blended learning in students' motivation; therefore, this research is significant to be conducted. The finding of the research can be described as follows: First, blended learning is learning the model that combine the positive sides of traditional mode such as face to face model with improved technology use to keep, improve, and engage the student's motivation and involvement the new star of teaching and learning process. Second, blended learning improves the learning access to materials and learning activities, and it can support and enhance the role of teachers, the experiences of the students and the social environment. Third, there are four main steps in applying blended learning, i.e. planning, designing and developing the blended learning elements, implementing, reviewing and evaluating the design.", "venue": "", "year": 2018.0, "author_names": ["I F Sari", "Ardiana Rahayu", "Dwi Indra Apriliandari", "Sulisworo Dwi"], "n_citations": 9, "n_key_citations": 1, "score": 0}, {"corpus_id": 151358220, "title": "The Factors Affecting Learners' Motivation in English Language Education", "abstract": "Teachers and researchers have broadly accepted motivation/demotivation as one of the most important elements in foreign language (L2) learning. The present research investigated the role of motivation and factors affecting students' motivation in teaching/learning English as foreign language. Parental, environmental, and teacher's attitude related factors were examined. Participants were 40 first grade students studying in English Language Teaching department. The participants were given a survey which consisted of several statements related with the mentioned factors. The current study showed that there were strategies and behaviours that motivate students but suppress positive attitudes towards English learning. The findings showed that learners were more motivated when their parents supported and encouraged them to learn English. The research also revealed that reinforcing the learner beliefs also motivated students and they were more motivated when they worked with their friends. 
Furthermore, the findings of this study suggested many behaviors and strategies which motivate learners.", "venue": "", "year": 2016.0, "author_names": ["Seda Ekiz", "Zahitjan Kulmetov"], "n_citations": 16, "n_key_citations": 0, "score": 0}, {"corpus_id": 151865091, "title": "Effects of Tasks on Spoken Interaction and Motivation in English Language Learners.", "abstract": "Task based learning (TBL) or Task based learning and teaching (TBLT) is a communicative approach widely applied in settings where English has been taught as a foreign language (EFL) It has been documented as greatly useful to improve learners' communication skills. This research intended to find the effect of tasks on students' spoken interaction in English and motivation towards speaking English in the classroom. Thirty five adolescent tenth grade students from a public school in Bogota, Colombia, participated in the study. They reported positive influence of tasks in their English oral interaction improvement as well as on their motivation towards speaking English in the classroom.", "venue": "", "year": 2016.0, "author_names": ["Nubia Patricia Carrero Perez"], "n_citations": 8, "n_key_citations": 1, "score": 0}, {"corpus_id": 151021295, "title": "Motivation in english language teaching", "abstract": "", "venue": "", "year": 2009.0, "author_names": ["Fenyvesine Prohaszka Erzsebet"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Can Facebook Use Induce Well-Being?", "session_id": 244603922381275, "user_id": 5349408564644165, "candidates": [{"corpus_id": 12972279, "title": "Can Facebook Use Induce Well Being?", "abstract": "Over the past few decades, the widespread phenomenon of Internet abuse has gained attention from the public, academia, and the media. In a departure from this negative viewpoint, however, researchers and educators have devoted considerable effort in attempting to understand the influence of online communication on people's psychological well being. This study focuses specifically on Facebook, and proposes a research model to examine the relationships among Facebook use, online social support, general social support, and psychological well being. Our results show that using Facebook helped college students to obtain online social support, and that online social support is an extension of general social support. However, although general social support contributes to well being, online social support appears to have little direct effect on well being. The relationship between online social support and well being is mediated through the factor of general social support.", "venue": "Cyberpsychology Behav. Soc. Netw.", "year": 2013.0, "author_names": ["Chia-Yi Liu", "Chia-Ping Yu"], "n_citations": 115, "n_key_citations": 3, "score": 1}, {"corpus_id": 146579752, "title": "Reassessing the Facebook experiment: critical thinking about the validity of Big Data research", "abstract": "ABSTRACT The Facebook experiment of 2014 manipulated the contents of nearly 700,000 users' News Feeds to induce changes in their emotions. This experiment was widely criticized on ethical grounds regarding informed consent. This controversy, however, diverted attention from a more important concern the experiment was intended to address, which is the impact of Facebook use on well being. In this paper, I explore the well being concerns raised by prior research and argue that the experiment does not alleviate them, owing to poor research design. 
As the question of Facebook's impact on well being is of great importance, both to Facebook and to society overall, there is a pressing need for more experimental research that is both sensitive to informed consent and carefully designed to yield reliable results. In turn, the lessons of this case have implications for general issues of validity that emerge in Big Data research, now in vogue at major scientific venues.", "venue": "", "year": 2016.0, "author_names": ["Galen Panger"], "n_citations": 40, "n_key_citations": 1, "score": 0}, {"corpus_id": 228293778, "title": "SNS sayongja jungdog hyeongseong maekeonijeumgwa jungdog yebang jeonryag", "abstract": "Purpose The study examined the key factors influencing the formation mechanism of SNS addiction. Based on the use and gratification theory, we considered relationship maintenance, perceived enjoyment, and self expression as main desires to induce SNS addiction. The characteristics of SNS users were also considered as major factors affecting SNS addiction. In particular, self control and subjective well beings were considered to be prevention factors that could reduce SNS addiction, while SNS relational intimacy was considered to be a facilitator that would increase SNS addiction. Design/Methodology/Approach A structural equation modeling (SEM) method was used to test the theoretical framework based on a sample of 224 Facebook users who have used it more than 6 months. Confirmation factor analysis was conducted to check the reliability, convergent validity, and discriminant validity. Findings Relationship maintenance had a significant effect on self disclosure intention and SNS addiction, respectively. Perceived enjoyment was significantly related to self disclosure intention, while it was insignificantly associated with SNS addiction. However, self expression was not significantly related to both self disclosure intention and SNS addiction. Consistent with our expectations, both self control and subjective well beings had negative effects on SNS addiction. The analysis results found that SNS relational intimacy was positively related to SNS addiction.", "venue": "", "year": 2019.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 57393683, "title": "A Comparative Study on Distributed Storage and Erasure Coding Techniques Using Apache Hadoop Over NorNet Core", "abstract": "Both private and public sector organizations are constantly looking for new ways to keep their information safe and accessible at all times. Over the past few decades, replication has always been a reliable way to make sure data is constantly available, even though it has been proven to induce higher costs due to the additional required storage. Since the early 2000s, erasure codes have been developed as a means to drastically reduce the overhead, while enormously increasing efficiency and providing significant error correcting capabilities. One of the most well known erasure coding policies is Reed Solomon (RS) a highly consistent, reliable, and efficient technique to store and recover data, currently used at Facebook's data centers. Other frequently mentioned policies are Pyramid codes, a variant of Locally Repairable Codes (LRCs) that make use of a pyramid based scheme to generate additional parity groups for each level, and has been used at Microsoft's Windows Live servers. 
Apache Hadoop is an open source distributed framework used for scalable processing that has recently introduced erasure coding policies to their storage capabilities. NorNet Core (or NorNet Core Testbed1) a distributed academic network, will be used as the main scenario to measure, compare, and analyze these different erasure coding policies and their efficiency. Based on simulations of physically distributed storage, this thesis will show how minimal alterations in commonly known codes (such as RS codes) can converge in a Pyramid based code that could severely enhance fault tolerance and performance. Additionally, in a side to side comparison, it will be detailed how bigger codes (of higher dimension and length) more often than not, provide a more beneficial tradeoff. 1NorNet Core Testbed website: www.nntb.no.", "venue": "", "year": 2017.0, "author_names": ["Maximiliano Vela"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 71717273, "title": "Using Eye Tracking to Explore Facebook Use and Associations with Facebook Addiction, Mental Well being, and Personality", "abstract": "Social networking sites (SNSs) have become ubiquitous in our everyday lives, and for all its communicative benefits, excessive SNS use has been associated with a range of negative health implications. In the present study, the authors use eye tracking methodology to explore the relationship between individual differences in personality, mental well being, SNS usage, and the focus of Facebook users' visual attention. Participants (n 69, mean age 23.09, SD 7.54) completed questionnaire measures for personality and to examine changes in depression, anxiety, stress, and self esteem. They then engaged in a Facebook session while their eye movements and fixations were recorded. These fixations were coded as being directed to social and update areas of interest (AOI) of the Facebook interface. An exploratory analysis of personality factors revealed a negative correlation between openness to experience and inspection times for the updates AOI and an unexpected negative relationship between extraversion and inspection times for social AOI. There were correlations between changes in depression score and inspection of updates AOI, with reduced depression scores associated with increased inspection of updates. Finally, self reported duration of participants' typical Facebook sessions did not correlate with eye tracking measures but were associated with increased Facebook addiction scores and greater increases in depression scores. These initial findings indicate that there are differences in the outcomes of interacting with Facebook which can vary based on Facebook addiction, personality variables, and the Facebook features that individuals interact with.", "venue": "Behavioral sciences", "year": 2019.0, "author_names": ["Zaheer Hussain", "Boban Simonovic", "Edward J N Stupple", "Maggie Austin"], "n_citations": 10, "n_key_citations": 0, "score": 0}, {"corpus_id": 215822682, "title": "Relationship between Facebook Use and Psychological Well being for Baccalaureate Nursing Students at Benha University", "abstract": "Background: Facebook as one of the most visited online social networks provides massive opportunities and risk for users. In addition, the extensive use of Internet can increase the problematic effect of new media on students' psychological well being. Aim of the study: examine relationship between Facebook use and psychological well being for baccalaureate nursing students at BenhaUniversity. 
Research question: What is relationship between Facebook use and psychological well being for baccalaureate nursing students?.Design: Descriptive correlational design was utilized in this study. Setting: This study was conducted at the Faculty of Nursing Benha University. Subjects: This study was carried on all Baccalaureate nursing students. Tools for data collection: Socio demographic data questionnaire, The Face book Intensity scale and The Ryff's Psychological Well Being Scale. Results: results revealed that more than one third of the studied students have moderate score on Facebook intensity scale; while the minority of them has high score .There is a highly statistical significant correlation between Psychological Well Being and number of Facebook friends and also Facebook is part of their everyday activity. Conclusion: The students with moderate Facebook use will have moderate psychological well being. In addition, the more time a person spends using the Internet, the more addicted they will be affect our lives. Recommendations: It is necessary to spread awareness among children, youth groups and society at large to encourage the healthiest way to use the Internet in order not to negatively.", "venue": "", "year": 2018.0, "author_names": ["Mawaheb Mahmoud Zaki"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 219370579, "title": "Facebook Use and Individual Well Being: Like Me to Make Me Happier!", "abstract": "This paper aims to study how Facebook use influences individual well being. We use a survey conducted on a representative sample of 2,000 French Facebook users. Our results show that Facebook interferes with subjective well being through its effects on friendships and self esteem. Hence we find a positive relation between receiving a great number of Likes and comments from Facebook friends and the level of life satisfaction. By contrast, people that would like to receive more Likes tend to be more unsatisfied with their life. The latter result suggests that Facebook use can exacerbate frustration and envy. Finally, the time spent on Facebook, the intensity of online interactions as well as the number of Facebook friends have no direct impact on life satisfaction. All these findings underlines the ambivalence of Facebook use with both positive and negative psychological effects on well being.", "venue": "", "year": 2017.0, "author_names": ["Alexandre Mayol", "Thierry Penard"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 141964551, "title": "Facebook Use and Individual Well Being: Like Me to Make Me Happier!", "abstract": "This paper aims to study how Facebook use influences individual well being. We use a survey conducted on a representative sample of 2,000 French Facebook users. Our results show that Facebook interferes with subjective well being through its effects on friendships and self esteem. Hence we find a positive relation between receiving a great number of Likes and comments from Facebook friends and the level of life satisfaction. By contrast, people that would like to receive more Likes tend to be more unsatisfied with their life. The latter result suggests that Facebook use can exacerbate frustration and envy. Finally, the time spent on Facebook, the intensity of online interactions as well as the number of Facebook friends have no direct impact on life satisfaction. 
All these findings underlines the ambivalence of Facebook use with both positive and negative psychological effects on well being.", "venue": "", "year": 2017.0, "author_names": ["Alexandre Mayol", "Thierry Penard"], "n_citations": 7, "n_key_citations": 1, "score": 0}, {"corpus_id": 1801736, "title": "Do motivations for using Facebook moderate the association between Facebook use and psychological well being?", "abstract": "Previous investigations of the relationship between Facebook use and psychological well being have most commonly considered variables relating to the quantity (e.g. time spent online) and underlying motivations (e.g. making new friends) of Facebook consumption. However, previous research has reached contradictory conclusions in that quantity of Facebook use has been linked to both higher and lower levels of psychological well being. The current study investigated whether these contradictory findings of quantity of Facebook use could be explained by considering users' motivations for accessing Facebook. We predicted that quantity of use would be positively associated with psychological well being when users primarily accessed Facebook to maintain existing relationships but negatively associated with psychological well being when primarily accessed to create new relationships. In a sample of college undergraduates (N 119) we found that the relationship of quantity of Facebook use on psychological well being was moderated by the motivation of the user. Quantity of Facebook use was associated with higher levels of psychological well being among users that accessed Facebook for friendship purposes but was negatively associated with psychological well being among users that accessed Facebook for connection purposes (e.g. making new friends) We also replicated our results across dimensions of psychological well being (e.g. anxiety and life satisfaction) The current findings provide initial evidence that quantity and motivations of Facebook use interact with potentially serious implications for psychological well being and also provide a possible explanation for why quantity of Facebook use can be linked with both positive and negative psychological well being.", "venue": "Front. Psychol.", "year": 2015.0, "author_names": ["James R Rae", "Susan D Lonborg"], "n_citations": 45, "n_key_citations": 2, "score": 0}, {"corpus_id": 18836041, "title": "Commentary: Do motivations for using Facebook moderate the association between Facebook use and psychological well being?", "abstract": "Rae and Lonborg's (2015) findings are intriguing. They show greater Facebook (FB) use intensity can have beneficial or adverse effects on psychological well being (PWB) depending on the user's motives. This underlines that not all motives are equal: some motives are harmful and others helpful when it comes to PWB and psychopathology outcomes. Four central issues were raised: (1) exclusive focus on social (external) motives for FB use; (2) exclusive focus on PWB outcomes; (3) interesting pattern of findings in the supplemental analyses; and (4) exclusive focus on FB use motives as moderators. Each suggests exciting future possibilities for this nascent area of motives research.", "venue": "Front. 
Psychol.", "year": 2015.0, "author_names": ["Sherry Heather Stewart"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "Crisis and renewal: Meeting challenge of organizational change", "session_id": 3048106169072831, "user_id": 4020581546588111, "candidates": [{"corpus_id": 153454933, "title": "Crisis and Renewal: Meeting the Challenge of Organizational Change", "abstract": "It's coming again, the new collection that this site has. To complete your curiosity, we offer the favorite crisis and renewal meeting the challenge of organizational change management of innovation and change book as the choice today. This is a book that will show you even new to old thing. Forget it; it will be right for you. Well, when you are really dying of crisis and renewal meeting the challenge of organizational change management of innovation and change, just pick it. You know, this book is always making the fans to be dizzy if not to find.", "venue": "", "year": 1995.0, "author_names": ["James W Marcum"], "n_citations": 75, "n_key_citations": 3, "score": 0}, {"corpus_id": 221936619, "title": "Crisis and Renewal: Meeting the Challenge of Organizational Change", "abstract": "Table of Contents Introduction Chapter 1: The Wisdom of the Hunters Chapter 2: Learning and Performance Chapter 3: Boxes and Bubbles Chapter 4: Hunters of the Spirit Chapter 5: Growth and Renewal Chapter 6: Crisis Creation Chapter 7: Ethical Anarchy Notes Bibliography Index About the Author", "venue": "", "year": 1995.0, "author_names": ["David K Hurst"], "n_citations": 89, "n_key_citations": 3, "score": 1}, {"corpus_id": 144876974, "title": "Reviews: Crisis and Renewal, Meeting the Challenge of Organizational Change, David K. Hurst", "abstract": "(1999) Reviews: Crisis and Renewal, Meeting the Challenge of Organizational Change, David K. Hurst. Emergence: Vol. 1, No. 2, pp. 109 114.", "venue": "", "year": 1999.0, "author_names": ["Robert M Cutler", "James A Drake"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 150932847, "title": "Crisis Renewal: Meeting the Challenge of Organizational Change", "abstract": "Hurst, D. K. (1995) Crisis renewal: Meeting the challenge of organizational change. From The Management of Innovation and Change Series, Michael L Tushman and Andrew H. Van de Ven, Series Editors. Boston MA: Harvard Business School Press. By Virgil Smith Biola University This book will prove useful to the practitioner who believes his or her organization is growing stagnant. Hurst uses ideas from Population Ecology Theory and Chaos Theory, augmented by case histories of firms that have faced dramatic change, to explain why organizations cease to innovate, and what the owner/manager can do about it. He begins by using an analogy of an organizational system the Bushmen culture in the Kalahari Desert. This hunter gatherer culture, overtime, was transformed into a herder society the point is that organizations follow the same patterns. Hurst's argument is that many of the features that managers seek in the modern organization were present in hunter bands. Each person within the band was multi skilled, there was little hierarchy, and there was a high level of open communication, trust, and empowerment. These hunter bands were traditionally self organizing entities which appeared chaotic from the outside, but made perfect sense to the band itself. More recently the lure of material wealth has caused the Bushmen hunter gatherer culture to be transformed into a herder culture. 
With the advent of the possible ownership of material property the hunter bands suddenly needed a hierarchy to resolve disputes over ownership of the property, and the hierarchy diminished the power of the individuals at lower levels of the hierarchy. Also, band members tended to specialize for material productivity, becoming single skilled. Since members were afraid that someone might be trying to get what they had, communication became less open, and trust diminished. Once the transformation had occurred from a hunter way of life to a herder society the Bushmen culture looked very much like a traditional organization. While the hunter culture was self organizing and looked chaotic, the herder culture was structured from the top down and looked much more orderly. Hurst uses the analogy of the hunters and the herders throughout the book. The hunter culture with its open communication, trust, and empowerment allows for the learning necessary for quick, creative responses and adaptation to opportunities. This, in turn, allows the clan to deal with an environment that is treacherous and constantly changing. Indeed, when the Bushmen were hunters, they were constantly on the move, seeking sustenance where they could get it. On the other hand, the herder culture with its hierarchy and specialization allows the band to be maximally productive in a much more stable \"herding\" environment, where they stay in one place and gather possessions. Using the previous analogy, Hurst says that most entrepreneurial firms start out as hunters but, over time and with success, they are transformed into herders. However, they still need the innovative skills of the hunters. He says, \"[Y]oung businesses begin their lives as informal, learning organizations, but if successful, they become formal, performance organizations. It is thus helpful to think of learning and performing as two ends of a continuum with the young organizations starting off on the left hand side and moving toward the right as they age\" (Page 33, italics in original) According to Hurst, the current adoption of organizational methods such as interdisciplinary teams and networks is an attempt on the part of managers to go back to being hunters. Small, new organizations are held together through the self selection of members into the organization, and an overriding agreement with the organizational mission everything else is worked out as needed. Strategy in new organizations tends to be emergent, while it is top down in more established companies.", "venue": "", "year": 1999.0, "author_names": ["Virgil O Smith"], "n_citations": 36, "n_key_citations": 3, "score": 0}, {"corpus_id": 84277323, "title": "Governance of innovation in animal production: new roles for science, business and the public sector", "abstract": "Abstract To discuss the governance of innovation in animal production three innovation models are placed in the context of the phases of development of agriculture (according to Hurst, 1997, Crisis and renewal meeting the challenge of organizational change. Scriptum, Schiedam) The phases distinguished are spontaneous action (breakthrough of a new paradigm, rational action (heyday) and action under restrictions (new choices for science, business and public) The associated innovation models are the Participatory Technology Development (PTD) model, the linear model and the chain link model. It is argued that the linear model has been the predominant one in the past half century where food security was the prime drive for action. 
In the last decade this drive clearly fades away and new goals of animal production emerge, requiring another innovation model. Using the examples of two firms it is illustrated that the chain link model, along with the linear model seems an efficient way to deal with changing circumstances. To show the dynamics of the system, the model is extended into one in which the three models of innovation do not follow after each other (with the chain model as the end model) but where both PTD as the chain link model are starting points in situations of change to take aspects of the linear model on board when heyday emerges. It is argued that in the more dynamic context where heyday clearly is not predominant another role of researchers is required. Where during heyday, participation in the process of optimisation and finetuning of production systems is a successful approach, during the other two phases a problem observation role is required, where the researcher takes part in the public debate on direction and usefulness of solutions.", "venue": "", "year": 2001.0, "author_names": ["Gert Van Dijk", "P van Boekel"], "n_citations": 16, "n_key_citations": 0, "score": 0}, {"corpus_id": 225002473, "title": "Leadership that Generates Resilience: An Introduction to Second Resilience Forum", "abstract": "In challenging and strenuous times such as during the current pandemic, public and private leadership is faced with extraordinary pressures on their leadership. On what basis should urgent yet critical decisions be made? And practically, how to legitimately lock down a society, closing down businesses and educational institutions, or decide to leave them open when such decisions carry heavy costs to people and organizations? In the context of resilient leadership, Hamel and Valikangas (2003) proposed the concept of 'Zero Trauma Transformation' as foundational to the quest for resilience. The notion was premised on being able to meet major changes before they turn into crises, including conquering denial of the need for change, valuing variety in strategic options and liberating resources to their most innovative uses, and embracing both efficiency and renewal. These four leadership challenges were identified as cognitive, strategic, political, and ideological, and meeting such challenges was suggested necessary for continuous strategic renewal. In a societal crisis situation such as the current COVID 19 epidemic that has profound implications for people's livelihoods, well being, and even political stability, there may be a further challenge worthy of contemplation. Namely, on what moral grounds may leadership be built? Even further: how might those decisions, and the accompanying leadership, be generative of resilience strengthening the society rather than diminishing future capability for coping? Such societal, and economic, resilience is about to be tested should the second, or third wave, of the COVID 19 virus spread. We conclude leadership is to become a moral endeavor should leaders wish to generate resilience in a major crisis. The exercise of leadership under such conditions can be informed by moral philosophy. 
Consider the perspective provided by John Rawls, a leading American philosopher known for his theory of justice as fairness (Rawls, 1971) Beyond everyone having equal claims to basic liberties, Rawls formulated the much debated second 'difference principle' which stated that any Management and Organization Review 16:4, October 2020, 737 739 doi: 10.1017/mor.2020.52", "venue": "Management and Organization Review", "year": 2020.0, "author_names": ["Liisa Valikangas"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 152338627, "title": "The Future of the International Labour Organization in the Global Economy", "abstract": "Introduction: Whither the ILO's Second Century? Persuasion at its Limits in the Global Economy I. The (false) dilemma: survival or integrity II. The real challenge: are 100 year old institutional choices still valid in the twenty first century? III. The core issue Part I Ninety Years of Transformations in the International System: Challenges Posed to ILO Persuasiveness 1. The Cold War and the trente glorieuses, a Not Quite Golden Age for the ILO and its Persuasiveness 2. Globalisation Ascendant: The ILO's raison d'etre Restored But the Gamble on Persuasion Nearly Lost I. Two decades of recurrent social disappointments and the corresponding demand for greater social regulation II. Which puts into question its capacity to meet the demand through the 'traditional' normative strategy 3. Have Recent Efforts at Institutional Renewal Already Fallen Behind the Pace of Change in the Economic Environment? I. A renewal in four stages II. Have the consequences of the financial crisis rendered the renewal old news? Part II The Proliferation of Multilateral Actors and the Challenge of Coherence 4. Social Goals: Doomed to Remain 'Country Cousins' of Economic Objectives at the Universal Level? I. From the pre war marginalisation of the ILO's social objectives to the Declaration of Philadelphia's failed attempt at a hostile takeover II. The 'mandated' segregation of social considerations in the (former? practice of the World Bank 5. Employment: Functional 'Common Ground' or Policy Fault line? I. Employment: free standing policy objective or economic windfall? II. The evolving employment dilemma: quantity vs quality III. The financial crisis: an (as yet) unexploited opportunity to give social objectives their due 6. Boosting the ILO's Capacity to Promote Coherence I. Promoting state level coherence II. Pursuing coherence through inter organisational dialogue III. Actively supporting the emergence of a relevant 'Epistemic Community' Part III ILO Influence and the Enduring Demand for Universal'Rules of the Game' 7. From the Impasse of the Social Clause Debate to the Delimitation of Fundamental Rights at Work as Shared 'Rules of the Game' I. From the misleading analogy of 'social dumping' II. Or ambiguous references to 'internationally recognised worker rights' III. To the ILO's functionalist approach, turning on 'fundamental rights as enabling rights' (and as potential rules of the game) 8. Can the Social 'Rules of the Game' be Made More Effective by Linking Them to Basic Trade Rules? I. Legitimacy, perceptions and barriers to a possible legal transplant II. Reading between the lines: from a formal marriage of trade rules and labour rights to a possible (clandestine) liaison? 9. Decentralised Linkages: A Mixed Blessing for the ILO? I. A mixed blessing from the viewpoint of implementing ILO standards and procedures II. A limited 'enforcement' blessing for workers III. 
Conclusion: Is there more to the phenomenon than meets the eye? Part IV The 'Market for Social Justice' to the Rescue of The ILO's Persuasive Capacities? 10. A Lopsided 'Market for Social Justice' Calling for Public Involvement I. Consumer preferences to the rescue of failing state resolve II. Public supply of information to overcome market failure 11. Meeting Transnational Demand with a Transnational Supply: A Jointly Established Labelling System I. From the difficulties of 'labelled at destination' to the possibilities of 'labelled at the origin' II. Need for a multilateral system of mutual recognition and impartial verification III. To guarantee what? Effective application by each party of legislation satisfying agreed international labour standards IV. In conclusion: Is there a market for a market based approach to social justice? Conclusion: Reinventing the ILO? I. Three necessary and feasible aspects of an institutional reinvention II. Which requires mobilising all actors and drawing on its full array of transformative powers III. And widening the horizon of social justice", "venue": "", "year": 2013.0, "author_names": ["Francis Maupain"], "n_citations": 43, "n_key_citations": 1, "score": 0}, {"corpus_id": 153294363, "title": "The principal's companion strategies for making the job easier", "abstract": "Foreword by Kent D. Peterson Preface Acknowledgments About the Authors Part I. The Principal's Many Roles 1. Leader as Learner Principal as Lifelong Learner Learning in Many Contexts The School as a Powerful Context for Learning A Global Perspective When Old and New Ideas Converge 2. Leader as Manager Good Leadership Requires Effective Management Management Responsibilities and Strategies Crisis Management Planning A Final Observation Regarding School Management 3. Leader as Shaper of School Culture Core Beliefs and Values Are the Heart of Culture The Physical Environment Reflects Core Values Rituals Display Core Values and Call Attention to What Is Important Celebrations Call Attention to What Is Important How People Spend Time Reflects Core Values Norms Are the Unwritten Rules of Culture Powerful Stories Communicate and Reinforce Cultural Values Reading, Transforming, or Shaping a Culture Final Thoughts on Culture Part II. Critical Skills for Effective Leadership 4. The Art of Human Relations: Getting the Job Done Task and Relationship Behaviors Differentiated Support Personality Styles Recommendations for Skillful Human Relations The Role of Emotions in the Organization: Remembering the Heart 5. Managing Time Brevity, Fragmentation, and Variety Techniques for Time Management Managing Bifocally Multi Tasking: A Modern Day Solution or Hazard? Final Thoughts on Using Time 6. Effectively Working With the Central Office and Other Schools: Forging Success Through Collaboration Caught in the Middle How Is the School District Governed? Communication Between the School and the Central Office Management Tips for Working With the Central Office Maintaining a Strong Relationship Between the Central Office and the School Forging a School and Central Office Partnership: Putting Staff and Student Learning First Part III. Honoring the School's Mission 7. 
Understanding, Planning, and Implementing Change Change Brings Loss and Resistance Influencing Individuals and the Institution Building Trust for Successful Change Conflict Can Contribute to Positive Change Strategies to Promote Trust Classical Insights Regarding Change and Continuous Improvement Three Phases of Change A Look at Change From the Individual's Perspective Stages of Concern Some Final Thoughts on Change 8. Building a Vision and a Mission Together Why Have a School Vision and Mission? School Activities That Highlight the Mission Joint Administrative and Faculty Mission Statement Mission Building Activity Developing Yearly School Improvement Goals to Accomplish the Mission Part IV. Working Together to Build a Learning Organization 9. Enhancing Teacher Growth Through Supervision and Evaluation Practices Designed to Promote Student Learning Issues and Dilemmas Essential Ingredients for Successful Supervision Effective Instructional Strategies Brain Compatible Teaching Practices Increasing Teacher and Administrative Reflection Through Clinical Supervision Tips for Conferencing and Observing Walkthroughs, Snapshots, or Drive Bys Guidelines Related to Evaluation and Legal Concerns Final Thoughts on Supervision and Evaluation 10. Maximizing Feedback About Teaching: Differentiated Professional Growth Options Reflections on Feedback Moving Toward Collaborative Feedback Differentiated Professional Growth Options: How the System Works Sources of Feedback: Categories and Approaches Self Assessment: Establishing Benchmarks of Progress Individual Reflection and Institutional Renewal 11. Building a Collaborative School: The Power of Teacher Leadership and Community Portrait of a Collaborative School: The Professional Learning Community An Image of Reality The Case for Collaboration Moving Toward Collaboration Necessary Conditions for a Collaborative School A Spectrum of Activities Teacher Leadership and the Collaborative School The Principal and Collaboration Some Final Thoughts on Collaboration 12. Fueling the Learning Organization Through Professional Development Why Professional Development? Professional Development Defined Creating an Atmosphere for Professional Development to Thrive: Some Guidelines Planning a Peer Coaching Program Facilitating the Individual's Professional Development Experience The Big Picture 13. Faculty Meetings: A Tool for Capacity Building Faculty Meetings as Learning Opportunities The School Mission and Faculty Meetings Increasing the Teachers' Roles in Faculty Meetings Some Successful Faculty Meeting Strategies A Final Thought 14. Asking the Right Questions About Curriculum, Instruction, and Assessment: Getting to Know the C.I.A. Keeping the Curriculum Relevant Asking the Right Questions Continuing the Curriculum Discussion Part V. Starting Effectively and Staying the Course 15. First Days of School Logistical Concerns Beginning of the Year Faculty Meetings Set a Tone Departmental and Grade Level Meetings Orienting Teachers Who Are New to the School Teacher Time in the Classroom Welcoming Students and Parents Be Visible on the First Days of School 16. 
Tips: Ideas That Work and Align With the School's Mission Organizing Your Time Making Record Keeping Easier Additional Helpful Ideas to Stay On Task Tips on Using Technology to Enhance a Principal's Performance Providing Experiences to Celebrate the School's Culture Tips on Opening a New School Practical Guidelines for Preparing Printed Materials for Internal and External School Community Members Using Tips in Your Setting Part VI. Understanding Your Constituencies 17. Working With Parents and Partnering With the Greater Community Effectively Communicating With Parents Building Bridges With the Parent Community Additional Ways to Bring Parents and Community Members Into School Broadening School Support and Partnerships Community Based Organizations Seeking School Support Through Educational Grants Reaching Out and Working With the Media A Reflection on Partnering With Parents and the Community 18. Making a Difference for Students: The Heart of the School Social Justice and the Challenge of Excellence and Equality \"Those Kids\" and Their Stories The Right to Be a Child and to Make Mistakes Maximizing Opportunities for Students With Disabilities Structuring Student Success Discipline Guidelines Effective Classroom Management: Handling Disciplinary Problems Reducing Bullying Behavior Cyberbullying and Social Responsibility The High School Dropout Crisis Student and Teacher Resiliency Final Thoughts on \"Those Kids\" Part VII. The Principal's Professional and Personal Worlds 19. The Newcomer to the Principalship Problems That Challenge New Principals A Profile of the New Principal Helping Prospective and New Principals Make the Grade Practical Suggestions for Newcomers Final Thoughts on the Newcomer Experience 20. Taking Care of Yourself The Selfish Nature of Martyrdom Taking Control of Your Schedule to Care for Yourself A Personal Mission Statement Gaining Perspective by Spending Time With Students Body and Mind: Healthy and Ill Together Maintaining Institutional and Individual Balance 21. Keeping the Professional Candle Lit Institutionalizing Professional Growth Activities Reflection as a Tool A Principal's Portfolio Other Growth Opportunities 22. Reflections on the Principalship Serving the School Community Where Do We Go From Here? The Good School Take Time to Smell the Roses References and Additional Readings Index", "venue": "", "year": 2009.0, "author_names": ["Pamela Clark Robbins", "Harvey B Alvy"], "n_citations": 18, "n_key_citations": 3, "score": 0}, {"corpus_id": 201039525, "title": "Weaving Teams into the Corporate Fiber Rebuilding Malden Mills Industries After a Destructive Fire", "abstract": "Virtually leveled by a fire, Malden Mills, manufacturer of high performance Polartec fabrics, embraced teaming as a fundamental building block of its renewal. This familyowned business transformed a natural disaster into organizational renewal. Teaming provided the engine for organizational alignment and employee empowerment, and changed the culture of the organization, facilitating the sharing of essential knowledge. We describe methods used to create the changes, and the internal resistance and external pressures that threatened crossfunctional teams. Finally, strategies for creating and sustaining resilient leadership teams near and at the top of the organization are discussed. Introduction This presentation considers strategies for developing and sustaining resilient crossfunctional teams in a crucible of recurring turbulence and threat. 
The original threat to the organization's continued existence the fire was external, and understandable to all. As the organization rebuilt its facilities physically, it became increasingly clear that great agility would be required to manage the market consequences of the fire, and crossfunctional teams were created to sustain the momentum of the reconstruction. More subtle and pernicious threats arose in the form of internal resistance to transformation, despite strong evidence supporting the need for change. The story begins, however, with Malden Mills facing a natural disaster. Celebrating his 70th birthday at Boston's Cafe Budapest with a small circle of friends and family, the CEO of Maiden Mills Industries received word from frantic managers that the Mill was in flames. Aaron and Louise Feuerstein quickly drove to nearby Lawrence, and joined thousands of anguished people who watched the six alarm fire rage out of control. \"It looked like Rome burning,\" said Louise, \"with 45 mile an hour winds whipping flames from one factory to a second factory to the five story main building:\" All that remained standing by morning was a lone brick tower. The livelihoods of 3,100 people appeared to be in ruins, like the ash and twisted rubble that remained. On the following morning, 1000 workers came to hear the CEO speak about the future. The CEO had options. He could manufacture off shore, like many others in the US textile business. He could accept the insurance payment and close the Mill. \"When all the textile mills in Lawrence ran out to get cheaper labor in the South, we stuck,\" Feuerstein said. We are going to stay and rebuild.\" Remember, everyone,\" Feuerstein added, \"we're playing to a higher judge. Don't tell me the job can't be done:\" The next day, stunned workers received their pay in full, including a $275 Christmas bonus, with a note from the CEO. \"Do not despair,\" he wrote, \"God bless each of you.\" How did the fire help pave the way to a team based approach? Several months before the fire, GLS Consulting, Inc. and Malden Mills managers piloted the team concept in the division that later burned to the ground. We conducted an organizational audit and provided feedback in the September, organized a joint unionmanagement Steering Committee, created a customized curriculum, and started team training in the beginning of November just six weeks before the fire. GLS principals were at the plant a day after the fire, pitching in wherever we could be useful. We were certain the team project would be sacrificed, given the crisis of rebuilding the Mill. For six months we offered pro bono services, and kept in close touch with the CEO and key managers. Most key executives perceived the team project as \"icing on the cake\" last in the long line of priorities for rebuilding. Despite opposition from some senior managers, the CEO strengthened his resolve to change the culture of the organization from a traditional manufacturing system to a more participative, cooperative, team based system. For the CEO, the fire created a window of opportunity for change. He strongly believed in the importance of attending to the \"human equation\" in management. He believed that Maiden Mills' employees made the company great, and that they deserved the best wages and benefits in the industry. 
A team based organization would take his ideas about the \"human equation\" beyond traditional HR policies, creating an infrastructure that would help Maiden Mills emerge from the ashes an even stronger organization. Manufacturing resumed on a small scale in undamaged facilities soon after the fire. Most customers remained loyal. A year after the fire, and despite a variety of setbacks and financial difficulties, a new state of the art manufacturing plant geared up to fill customer orders, and a team based strategy for renewal was well underway. It began in the smaller, upholstery fabrics division of the company, and was eventually extended to include the larger apparel fabrics division. What was the pre fire system like? Systemic problems revealed in our initial diagnostic audit of the smaller division were confirmed by a supplemental audit of the larger manufacturing division. Both studies pointed to the need for a structural approach that was very different from traditional \"who reports to whom\" reorganizations. The following are some of the undesirable characteristics of the command and control based system that had been in place for a long time. Competitive rather than collaborative relationships among divisions and departments were obstacles to meeting new and changing production challenges. Employees experienced work as poorly planned, poorly coordinated, and characterized by considerable chaos and inefficiency. The \"stove pipe\" relationships among manufacturing, R&D, and marketing/sales resulted in failures to make available valuable information, and in impoverished planning. Managers in different functions had much difficulty coming to agreements, and difficulty keeping agreements once they were made. Some senior managers devalued managers below them in the hierarchy; at the same time, the tendency to find people to blame for problems led people to try to protect themselves from risk, and from having to admit mistakes. The tendency to manage quality and other manufacturing problems as if they were moment tomoment crises was a chronic and critical issue for the organization. What strategy did we use to implement cross functional collaboration? The change strategy was a broad collaborative effort between key managers at Maiden Mills and our consulting organization. We operated on several levels simultaneously, and worked to avoid generating strong resistance too early. We also maintained a consistent link between the CEO's social principles and the effort to enhance collaboration at all levels. In out discussions and planning efforts we emphasized the synergy between: Teams and the company's strategic intent and vision Teams and measurable, tangible business results Teams and the capacity to become more agile and responsive to market changes to transform Maiden Mills into what deGeus (1997) has called a \"living organization\" The implementation strategy for creating these changes involved: Loosening the control of senior management over day to day decisions, and gradually turning it over to the people closest to the work. We flouted conventional wisdom, and began building from the middle of the organization rather than starting at the top. From the middle we worked our way both downward to the production floor, and upward to top management. We designed a cross functional teaming structure, and through a training program provided essential teamwork skills. The training program was also designed to repair and strengthen relationships among employees at all levels. 
Cross functional (and cross level) groups received 80 hours of teamwork and leadership development training. Thus we were able to create a community within which work could be accomplished more effectively. The Team Steering Committee became the engine for organizational renewal. Eventually an Executive Team, a broad based Strategy Council, a divisional Policy and Planning Team, a Manufacturing Operations Team, and a number of Production Support teams were created. At the same time, production operators began to work together toward their own local goals. In addition to training, we made available individual coaching for managers, so that the influence of the change effort could be extended. We created measures of team success measures of productivity, quality, and so on. We also created a bi monthly random sample survey that provided \"soft\" indicators of teamwork and morale. This measure helped us detect emerging problems that could be dealt with before they worsened. One of the internal champions of the project referred to our strategy for successful implementation as \"working under the radar screen.\" We stayed away from the limelight, and concentrated on working with people who could get the job done. The Director of Manufacturing was a strong champion of the effort, and provided us with enough stability and buffer so that the change effort could continue despite the recurrent turbulence in the organization. What about resistance to change? Resistance was part of the normal state of affairs at any given time. We saw resistance as an indicator that change was actually occurring. Senior executives initially resisted the change effort, and even the Steering Committee only reluctantly accepted its assignment. Each organizational level both hoped and feared that change could actually occur. Each level (and each function) showed evidence of a pervasive sense of helplessness; people seemed always to look to the level above to give orders, and for someone else to take responsibility for results. Empowering others meant losing managerial flexibility, control over tasks and results, and loss of control of information that could be damaging. Whenever resistance became apparent, it spoke of a safety and trust issue in t", "venue": "", "year": 2005.0, "author_names": ["Mindy L Gewirtz", "Peter Gumpert", "Yoram Shahar", "Malden Mills"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 32438211, "title": "Primary Care at a Crossroads", "abstract": "The term primary care means different things to different people. Many of us have championed primary care because of its theoretical and practical contributions to the health of individuals and populations. Despite the rise of scientific medicine and specialization, the concept of primary care has achieved an important place in the delivery of health services. Over the past century, primary care has developed from the idea of a family doctor who tended to the medical, and at times the emotional and social, needs of his patients to a new and much richer and idealized concept that includes prevention, continuity of care, health maintenance, and death with dignity, among others. The renaissance of primary care that began in the 1970s, however, has begun to wane. Today several organizational, economic, and social forces are presenting new challenges to primary care. How those challenges are addressed will largely determine the future of primary care and primary care's role in addressing the health needs of our population. 
The attributes of first contact, continuous, and comprehensive care make primary care an excellent entry point to and coordinator of care. These same characteristics, however, also make primary care an ideal gatekeeper for managed care and other systems that wish to control access to and utilization of services. The success of boutique providers who guarantee access, a continuous relationship with a primary care physician, and personalized referral to a specialist in return for a retainer fee (for those who can afford it) demonstrates another aspect of the attractiveness of primary care amidst a rising tide of consumerism and so called market driven medicine. Almost uniquely among health care specialties, primary care providers have been willing, even eager, to include within primary care's responsibilities a variety of functions that are not strictly medical in nature and that go well beyond a provider's practice site. The use of a team of providers with varied expertise is a logical extension of this broadening of the definition of primary care's role. This expansion of responsibility also prompts the question, however, of whether primary care has become too complex, taken on too much, promised too much to too many. Must a primary care provider care for all, be up to date on all of the latest discoveries and treatments and be an expert diagnostician, have the patience to care for persons with chronic illness, the compassion to care for those at the end of life, the sophistication to recognize behavioral and social problems, the communication skills to encourage patient behavior change? Have we raised the bar too high, put too much responsibility on the shoulders of primary care? In response to concerns about the future of primary care, the Robert Wood Johnson Foundation invited leaders in primary care and other health sectors to a small meeting to discuss the current and future challenges to primary care and to develop new and innovative ideas about how primary care might meet the needs of our current and future population. Forty five persons attended the meeting (see Appendix for a list of the attendees) The meeting was organized by a team at the University of California, San Francisco, that worked with an advisory committee and Robert Wood Johnson Foundation staff to define the meeting's goals, objectives, and content. Held in early October 2001, in Glen Cove, New York, the goal of The Future of Primary Care meeting was to identify a set of normative ideas or principles to help guide future discussions about the definition and creation of a health care system that will address the needs of our future population. The premise of the meeting was that primary care is at a crossroads; especially as our population and the financing and organization of the health care system change, the future of primary care should not be taken for granted. Rather, primary care must be able to justify its place in a system where, for example, specialist physicians and nurses are increasingly providing principal care and where patients often choose to go to nontraditional settings for their care. An objective of the meeting was to question the current definitions and assumptions associated with primary care and examine whether they are still relevant, either in today's or a redesigned health care system. The Glen Cove meeting was not a consensus conference. No votes were taken, and there were lively discussions and debates about many ideas and issues. 
The discussions focused on identifying and developing ideas that could describe and help define normative systems of care. The central question was, What principles can be identified for the organization and delivery of care to address the needs of our future population? Rather than focusing on adjusting current financing and workforce policies to today's fragmented and often dysfunctional health care system, the goal was to start a dialogue on how primary care should be delivered. The principles and ideas identified could then be used to construct new primary care systems, with finance and workforce policies created to support the development and use of the new systems. As preparation for the meeting, 15 papers were commissioned to provide background for the meeting's discussions. The papers described the current state of primary care and the components of primary care, and they provided ideas about how primary care might be reconstructed. Revised versions of 4 of these papers are included in this Annals supplement. Primary Care Medicine in Crisis: Toward Reconstruction and Renewal (1) by Gordon Moore and Jonathan Showstack, describes challenges facing primary care and how primary care's strengths can help address those challenges. Defining the Future of Primary Care: What Can We Learn from Patients? (2) by Dana Gelb Safran, discusses the patient's view of and experience with primary care. Chronic Illness Management in Primary Care: What Is the Role of Primary Care? (3) by Arlyss Anderson Rothman and Edward Wagner, describes the role of chronic illness management. Primary Care in a New Era: Disillusion and Dissolution? (4) by Lewis Sandy and Steven Schroeder, suggests that current forces in the health care system will create the dissolution of primary care as a single concept, replaced by alignment of providers by economic niche, not role. The final paper, Primary Care: The Next Renaissance (5) by Jonathan Showstack, Nicole Lurie, Eric Larson, Arlyss Anderson Rothman, and Susan Hassmiller, presents the ideas and suggestions, based in part on discussions at the meeting, about how to address primary care's current dilemmas. These and the other background papers commissioned for the meeting will be available as a book, The Future of Primary Care, to be published by Jossey Bass, an imprint of John Wiley Sons. We also call readers' attention to two closely related editorials in this issue. One (6) describes strategies for effecting change in primary care. The second (7) discusses payment for primary care services. Following the first editorial are the mission statements of the primary care service delivery programs of three large health care organizations. We hope that the papers in this supplement and the accompanying editorials and primary care mission statements stimulate additional thought and discussion about the current conditions and future of primary care.", "venue": "Annals of Internal Medicine", "year": 2003.0, "author_names": ["Jonathan A Showstack", "Arlyss Anderson Rothman", "Susan B Hassmiller"], "n_citations": 13, "n_key_citations": 0, "score": 0}]} -{"query": "allintitle: \"lean AND construction AND sustainability\"", "session_id": 202077938836733, "user_id": 4412265158126942, "candidates": [{"corpus_id": 198183257, "title": "Systematic Literature Review ICT Education for Girls and Women in Rural Africa", "abstract": "The purpose of this review was to get an overview of the current literature available about ICT education for women in rural Africa. 
This has been done with a systematic literature review. The research questions important for this review are focusing on the current state of ICT education for women in rural Africa, the opportunities to improve on this current state, and the challenges that could influence the quantity of the improvements that could be made. The results showed that the ICT usage and education needed for this is small under the African women. The results also provided a need for ICT to improve the information access for women on different fields. They need education, especially about how to use and maintain different ICTs. But there are different challenges which could be limitations to the opportunities for women and their ICT usage in rural Africa. Method The search query allintitle: Africa ICT women was used to execute a systematic literature review. Google Scholar gave in total 29 results. Results In total 8 literature results were useful for the literature review according to the inclusion and exclusion criteria. The idea that ICT is important for women started with the feminism movement. The results say that the education on ICT is not how it should be. Women in Africa have an information need on multiple subjects, for example health. They need to overcome the different challenges, for example the literacy problem and the self esteem problem to ensure that ICTs can be used and will be maintained by women in rural Africa. Conclusion A broad search query is used, but this obtained not that many results. The useful results give an overview of the current situation, opportunities and challenges. However, more focus on solutions and how to develop on ICT for women in rural Africa", "venue": "", "year": 2019.0, "author_names": ["R de Bok"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 164522212, "title": "OPTIMASI RISET KEYWORD DENGAN TEKNIK ALLINTITLLE PADA MESIN PENCARI GOOGLE", "abstract": "Online information needs have evolved in the real direction. These needs include the latest information, government services, and commercial products. The research question is how to describe and optimize keyword research with the allintitle technique on the google search engine. The development method used in this research is the prototype method because it is considered able to be evaluated directly on the user. The system testing is done for 3 months by placing keywords on several websites on Google. The conclusion that can be taken is to use the allintitle technique, the search results for the web are easier to find. And this web based allintitle technique can overcome the challenges of captcha verification from the Google search engine. Keywords: Allintitle, Google's Search Engine, Keyword competition.", "venue": "", "year": 2019.0, "author_names": ["Hengki Tamando Sihotang"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 14501557, "title": "Google Scholar is not enough to be used alone for systematic reviews", "abstract": "Background: Google Scholar (GS) has been noted for its ability to search broadly for important references in the literature. Gehanno et al. recently examined GS in their study: 'Is Google scholar enough to be used alone for systematic reviews? In this paper, we revisit this important question, and some of Gehanno et al.'s other findings in evaluating the academic search engine. Methods: The authors searched for a recent systematic review (SR) of comparable size to run search tests similar to those in Gehanno et al. We selected Chou et al. 
(2013) contacting the authors for a list of publications they found in their SR on social media in health. We queried GS for each of those 506 titles (in quotes \" one by one. When GS failed to retrieve a paper, or produced too many results, we used the allintitle: command to find papers with the same title. Results: Google Scholar produced records for ~95% of the papers cited by Chou et al. (n=476/506) A few of the 30 papers that were not in GS were later retrieved via PubMed and even regular Google Search. But due to its different structure, we could not run searches in GS that were originally performed by Chou et al. in PubMed, Web of Science, Scopus and PsycINFO(r) Identifying 506 papers in GS was an inefficient process, especially for papers using similar search terms. Conclusions: Has Google Scholar improved enough to be used alone in searching for systematic reviews? No. GS' constantly changing content, algorithms and database structure make it a poor choice for systematic reviews. Looking for papers when you know their titles is a far different issue from discovering them initially. Further research is needed to determine when and how (and for what purposes) GS can be used alone. Google should provide details about GS' database coverage and improve its interface (e.g. with semantic search filters, stored searching, etc. Perhaps then it will be an appropriate choice for systematic reviews.", "venue": "Online journal of public health informatics", "year": 2013.0, "author_names": ["Dean M Giustini", "Maged N Kamel Boulos"], "n_citations": 106, "n_key_citations": 2, "score": 0}, {"corpus_id": 69439644, "title": "Payroll system: A bibliometric analysis of the literature", "abstract": "Payroll processing is an imperative process in an organization; it involves many tasks to ensure accurate and timely payments of the workforces' services, and to protect organization's reputation through effective record keeping compliance with the government authorities' employment legislations. Despite its important function in the organization process, studies on payroll processing is quite limited, as compared to other transaction processing systems such as sales and purchase. This paper observes the trend of articles published on payroll system that has been indexed by the Google Scholar as at February 2018. This study aims to provide insights into the characteristics of the issues related to payroll system using a bibliometric analysis. Articles that matched with the keywords [allintitle: payroll system OR systems OR application OR applications OR software] in the Google Scholar has been obtained and analyzed. After conducted the cleaning process i.e. by completing the meta data of the articles and removing some of the irrelevant and duplicate articles, 170 articles are available for further analysis. It is found that, the number of published payroll system articles are increasing in the past five years. Most of the articles has been published as journal articles, academic dissertations and conference papers. 
The output of this study can help researchers to understand the landscape of the global research and issues on payroll system and establish further research directions in this field.", "venue": "", "year": 2018.0, "author_names": ["Fariza Rusly", "Aidi Ahmi", "Yurita Yakimini Abdul Talib", "Khairina Rosli"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 108548028, "title": "Peanut Core Collection Established in China and Compared with ICRISAT Mini Core Collection", "abstract": "The core collection has been well accepted as a useful way to improve the efficiency of crop germplasm evaluation, conservation and utilization. Around 6 390 accessions of cultivated peanut (Arachis hypogaea L. have been collected in China. In order to characterize and utilize the germplasm more efficiently for further crop improvement, the available morphological and biochemical data were analyzed to develop a core collection. The entire collection was first stratified by botanical types and then grouped by origin locations. Based on the data of 15 morphological and biochemical characters, the accessions in each botanicalhttp:/scholar.google.co.in/scholar?hl=en&q=allintitle%3A+%22Peanut+Core+Collection+Established+in+China+and+Compared+with+ICRISAT+Mini+Core+Collection%22&btnG=Search&as_sdt=0%2C5&as_ylo=&as_vis=0 type were clustered by SAS method. From each cluster, five to ten percent of the accessions were randomly selected to form a core collection consisting of 576 accessions, which was 9.01% of the entire collection. The genetic variation in the entire collec tion was well presented in the core collection based on comparison of diversity index of the involved traits in both entire and core collections. Comparison between the newly selected Chinese peanut core collection and the introduced mini core collection con sisting of 184 lines established at the International Crops Research Institute for the Semi Arid Tropics (ICRISAT) indicated that there were wider diversities in the var. hirsuta and vulgaris as well as in leaf length, leaf width, seed length, seed width in the Chinese core collection. The ICRISAT peanut collection had wider diversities in var. hypogaea and fastigiata as well as in plant height and number of total branches than Chinese peanut resource.", "venue": "", "year": 2008.0, "author_names": ["Jiang Huifang", "Ren Xiaoping", "Liao Bo-shou", "Huang Jiaquan", "Lei Yong", "C Ben-Yin", "Baozhu Guo", "Carl Corley Holbrook", "Hari D Upadhyaya"], "n_citations": 22, "n_key_citations": 0, "score": 0}, {"corpus_id": 82588747, "title": "Comparison of Genetic Diversity between Peanut Mini Core Collections from China and ICRISAT by SSR Markers", "abstract": "A core collection or mini core is a subset of accessions from the entire collection that covers most of available genetic diversity of a species. Extensive investigation of core collections is an efficient approach to enhance evaluation and utilization for crop germplasm. The mini core collections of peanut (Arachis hypogaea L. from China consisting of 298 accessions and from International Crops Research Institute for the Semi Arid Tropics (ICRISAT) consisting of 168 accessions were comparatively analysed by SSR method. Twenty six polymorphic SSR markers screened from 206 primer pairs were used to investigate the similarity and genetic distance among the peanut accessions involved. The similarity coefficients between the genotype pairs among the 466 accessions ranged from 0.49 to 0.99. 
The larghttp:/scholar.google.co.in/scholar?hl=en&q=allintitle%3A+%22Comparison+of+Genetic+Diversity+between+Peanut+Mini+Core+Collections+from+China+and+ICRISAT+by+SSR+Markers%22&btnG=Search&as_sdt=0%2C5&as_ylo=&as_vis=0est genetic distance was between L2 Gangguo (a Chinese genotype) and ICG12625 (an ICRISAT genotype) with a similarity coefficient of 0.49. Among the six botanical types in peanut, accessions of fastigiata and hypogaea were more diversified than other types. There was considerable genetic difference between the Chinese peanut accessions and some ICRISAT accessions especially with the aequatoriana genotype ICG12625. The genetic diversity was greater among the Chinese peanut mini core than that among ICRISAT mini core in terms of the similarity coefficient and genetic diversity index.", "venue": "", "year": 2010.0, "author_names": ["I Hui-Fang", "Ren Xiaoping", "Zhang Xiaojie", "Huang Jiaquan", "Lei Yong", "Yang Liying", "Liao Bo-shou", "Carl Corley Holbrook"], "n_citations": 8, "n_key_citations": 3, "score": 0}, {"corpus_id": 16926689, "title": "Sequoia An Approach to Declarative Information Retrieval", "abstract": "In this work, we propose an approach that allows to query heterogeneous data sources on the Web in a declarative fashion. Such an approach gives means for a generic way to formulate various information needs, much more powerful than simple keyword queries. Particularly appealing is the ability to combine (join) information from different sources and the ability to compute simple statistics that can be used to select promising information pieces. What might sound like a hopeless effort due to the inherent complexity expressible by SQL style queries is at second glance not complicated to understand and to use. Already very simple combinations (i.e. joins) of different data sources (i.e. tables) offer a surprisingly large set of interesting use cases. In particular, using sliding window joins that limit the scope of interest to recent information, obtained, for instance, from the live stream of Twitter Tweets. This goes far beyond keyword queries enriched with operators like allintext: or allintitle: or site: as can be used, for instance, in the Google search engine.", "venue": "Datenbank Spektrum", "year": 2012.0, "author_names": ["Christoph Pinkel", "Foteini Alvanaki", "Sebastian Michel"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 63353742, "title": "Chapter 2 Advanced Operators", "abstract": "Publisher Summary Beyond the basic searching techniques, Google offers special terms known as advanced operators to help perform more advanced queries. Advanced operators are additions to a query designed to narrow down the search results. These operators, used properly, can help get to exactly the information that one is looking for without spending too much time poring over page after page of search results. When advanced operators are not provided in a query, Google locates the search terms in any area of the Web page, including the title, the text, the Uniform Resource Locator (URL) or the like. This chapter looks at the following advanced operators: intitle, allintitle, inurl, allinurl, filetype, allintext, site, link, inanchor, daterange, cache, info, related, phonebook, rphonebook, bphonebook, author, group, msgid, insubject, stocks, and define. Thus, Google offers plenty of options when it comes to performing advanced searches. 
URL modification, which can provide with lots of options for modifying a previously submitted search, but advanced operators are better used within a query. Easier to remember than the URL modifiers, advance operators are the truest tools of any Google hacker's arsenal. As such, they should be the tools used when considering the protection of Web based information.", "venue": "", "year": 2008.0, "author_names": ["Johnny Long"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 214462111, "title": "Exploring the relationship between lean construction and environmental sustainability: A review of existing literature to decipher broader dimensions", "abstract": "Abstract Lean construction and sustainable construction are perceived to be two individual philosophies that possess distinct goals. Lean construction is oriented towards the process related parameters of construction associated with improving flow, enhancing productivity, eliminating waste and reducing delays. Meanwhile, sustainable construction aims at reducing the harmful impacts on the environment due to construction activities along with due attention to the economic and social aspects of the project. However, both these paradigms are found to hold certain common objectives in the form of promoting resource efficiency and minimizing waste. This paper reviews the studies published in the domain of lean construction and environmental sustainability by categorizing the links between them within various dimensions of the lean philosophy such as lean principles, lean wastes, lean tools, and other associated lean tenets. The review deciphers the influence of these individual realms of lean construction on some environmental parameters such as resource use, emissions, pollution, and energy use. The review provides insights into important and distinct linkages between lean and the environment and further motivates the expansion of the boundaries of lean construction philosophy into the operation phase of a project's life cycle where its influence is found to be sparse. Hence, the study briefly examines the potential of considering energy waste under the ambit of lean construction philosophy and the prospects of research in this domain. The knowledge synthesized in this paper will motivate the implementation of lean construction, seeking broader benefits in terms of environmental sustainability.", "venue": "", "year": 2020.0, "author_names": ["Ann Marie Francis", "Albert Thomas"], "n_citations": 20, "n_key_citations": 0, "score": 0}, {"corpus_id": 211359981, "title": "Toward a holistic view on lean sustainable construction: A literature review", "abstract": "The need for sustainable built environment is pressing; an urgency that spans environmental, economic and social values of sustainability. Since late 1980s, the Lean philosophy has been adopted in the construction sector, with a focus on efficiency, predominantly as a function of economic competence. More recently, however, the Lean principles and practices have been revisited and increasingly used to create and preserve social and environmental values as well. The result was a growing, but dispersed, body of knowledge on sustainability and Lean construction, and hence, equivocal about how Lean contributes to sustainability. 
By means of a Systematic Literature Review (SLR) based on 118 journal articles from 1998 to 2017, this article aims to provide a comprehensive understanding of \"how Lean helps achieve and maintain sustainability in construction sector\" The findings are structured into a holistic framework, which underlines a multidimensional approach toward sustainability, i.e. focus on stakeholders, across various construction phases, while simultaneously being heedful of concerns regarding people, planet, and profit. It became clear that the current body of knowledge is mainly skewed toward economic values, which calls for more research in the social and environmental aspects of construction. This study assembles a palette of existing best practices, based on which scholars' and practitioners' can balance their efforts across three dimensions of sustainability. Moreover, it identifies several under researched areas of Lean sustainable construction that have the potential to be expanded in by future researchers.", "venue": "", "year": 2020.0, "author_names": ["Sam Solaimani", "Mohamad Sedighi"], "n_citations": 22, "n_key_citations": 1, "score": 1}]} -{"query": "Jobs Housing Balance", "session_id": 8590968992893725, "user_id": 1973561059087272, "candidates": [{"corpus_id": 109453085, "title": "Jobs Housing Balance of Bus Commuters in Beijing", "abstract": "Jobs housing studies have rarely used smart card data provided by public transportation agencies or focused on bus commuters. In this study, massive smart card data were used to estimate 216,844 bus commuters' workplace and residence locations in Beijing. These data enabled a jobs housing study of bus commuters in the metropolis with a much larger sample size than in most other studies. The study found that Beijing's bus commuters had a shorter actual required commute (ARC) and a shorter minimum required commute (MRC) than commuters in four other auto dependent Western cities with comparable population and land use size. The study also indicated that Beijing's bus commuters had a longer ARC and a longer MRC than commuters of all modes in Guangzhou, a metropolis in southern China half the size of Beijing. Consultations with local experts, field surveys, and information provided by online housing search engines were used to supplement the smart card data. The study established five land use prototypes of jobs housing imbalance and proposed countermeasures to address the imbalance.", "venue": "", "year": 2014.0, "author_names": ["Jiangping Zhou", "Ying Long"], "n_citations": 21, "n_key_citations": 4, "score": 0}, {"corpus_id": 190440281, "title": "Longitudinal Cluster Analysis of Jobs Housing Balance in Transit Neighborhoods", "abstract": "Longitudinal Cluster Analysis of Jobs Housing Balance in Transit Neighborhoods 1 2 3 4 5 Robert E. Hibberd (Corresponding author) 6 Department of Geography and Development 7 University of Arizona 8 rhibberd@email.arizona.edu 9 1064 E Lowell St, Tucson, AZ 85719 10 11 Arthur C. 
Nelson 12 School of Landscape Architecture and Planning 13 University of Arizona 14 acnelson@arthurcnelson.com 15 16 Text: 6,058 words 17 Tables and Figures: 5 250 each 1,250 words 18 Total: 7,308 words 19 20 21", "venue": "", "year": 2018.0, "author_names": ["Robert Hibberd", "Arthur Christian Nelson"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 189408355, "title": "Impact of Jobs housing Balance on Traffic Safety", "abstract": "Jobs housing balance refers to the situations where the employment (work) and housing (house) opportunity are coincided in certain geographical area. This paper aims to examine the impact of jobs housing balance to traffic safety. In pursuing the above, this paper particularly focuses on modeling the traffic accidents by metropolitan area. The main results are as follows. First, three generalized linear models which are all statistically significant are developed. Jobs housing balance factors are judged to significantly influence on traffic accidents in all models. Second, among common variables, the housing supply rate is analyzed to impact to decreasing, and economically active population and commuting trip attraction are analyzed to impact to increasing. Hence, the alleviation of jobs housing mismatch is evaluated to be important. Finally, the jobs housing and business trip rates in Seoul metropolitan area, and the cross commuting rate in Busan Ulsan metropolitan area are judged to be essential to transportation safety policies.", "venue": "", "year": 2018.0, "author_names": ["Tae Yang Kim"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 167596143, "title": "Jobs Housing Balance: The Right Ratio for the Right Place", "abstract": "The concept of Jobs housing balance (JHB) has attracted many city and transportation planning agencies for the interest of increasing place quality and reducing travel demand. Operationalizing JHB, however, has been a challenge. There are several critical questions in the application of JHB: what is a good ratio? How should JHB be quantified for guiding land use development? And, to what extent could jobs housing ratio be effectively used as an intervention instrument?", "venue": "", "year": 2015.0, "author_names": ["Qianli Wu", "Ming Zhang", "Daniel Yang"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 115346695, "title": "Jobs housing balance based on Baidu thermodynamic diagram", "abstract": "", "venue": "", "year": 2016.0, "author_names": ["Tan Xin", "Huang Daquan", "Zhao Xingshuo", "Yu Ying", "Leng Bing-rong", "Feng Lei"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 154142849, "title": "Rethinking Accessibility and Jobs Housing Balance", "abstract": "Abstract Through estimation of a discrete choice model of residential location, this study argues that commute time remains a dominant determinant of residential location at the regional scale, and that provision of affordable housing near employment concentrations can influence residential location decisions for low to moderate income, single worker households. But the significance of jobs housing balance is not in reducing congestion; even when successful, such policies will have little impact on average travel speeds. 
Rather, the relaxation of suburban regulation that could lead to improved matches between home and workplace is seen as enhancing the range of households' choices about residence and transportation.", "venue": "", "year": 1998.0, "author_names": ["Jonathan Levine"], "n_citations": 282, "n_key_citations": 16, "score": 1}, {"corpus_id": 15006535, "title": "Is Jobs Housing Balance a Transportation Issue?", "abstract": "Jobs housing balance has become a major planning and public policy issue. Despite its popularity and apparent acceptance among public policy makers as a solution for traffic congestion and air pollution problems, there is little consensus on what jobs housing balance means and little evidence that a jobs housing balance policy would have any significant effect on these problems. The jobs housing balance policy is premised on the idea that job and housing location choices are closely linked, and that policy intervention is required to achieve a balance of housing and jobs. Existing evidence suggests that the relationship between where people choose to live and work is complex, and may have little to do with job access considerations. Further, patterns of urban growth and travel indicate that balancing occurs as part of the urban development process. It is concluded that jobs housing balance is not an effective solution for traffic congestion and air pollution concerns. Rather, these problems are better addressed in a more direct way.", "venue": "", "year": 1991.0, "author_names": ["Genevieve Giuliano"], "n_citations": 197, "n_key_citations": 19, "score": 0}, {"corpus_id": 15830978, "title": "Jobs Housing Balance and Job Accessibility in Beijing", "abstract": "Jobs housing balance is shown to reduce commuting demand in previous studies. In order to find out what factors influence the jobs housing balance and how to improve the job accessibility in Chinese megacities where the Danwei system has been phasing out and the economy has become market oriented, this empirical study examines the case of Beijing. The result of analysis shows that accessibility to transport infrastructure has no influence on the individual's workplace choice, and more job opportunities in the dwelling place, lower income level lead to more residents choosing to work in the dwelling place. It is also shown that the co location hypothesis is not supported in Beijing: low income workers in the periphery cannot reduce travel time and cost by changing their workplace or residence because of the deficiency of job opportunities in the peripheral area and because the price of public transport is low. Finally it is concluded the jobs housing balance along the rail transit corridor will increase job accessibility and reduce car dependence.", "venue": "", "year": 2014.0, "author_names": ["Haixiao Pan", "Yanbo Ge"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 131211894, "title": "Jobs housing balance and commute efficiency in cities of central and western China:A case study of Xi'an", "abstract": "Jobs housing balance is an inevitable topic and even strategy in most urban plans and policies aimed at reducing car dependence, increasing public transportation's attractiveness and/or improving quality of life. Existing studies of jobs housing balance have rarely focused on developing cities, in particular, those in central and western China. This manuscript proposes that there are six groups of factors affecting jobs housing balance, which notably influence commute efficiency. 
Those factors exert different impacts on jobs housing balance in cities at different development stages. There is need to single out specific factors influencing developing cities' jobs housing balance so as to better improve their jobs housing balance and commute efficiency. Improved jobs housing balance and commute efficiency should help the developing cities gain advantages in terms of attracting and keeping talented workers and increasing their competitiveness in traffic mobility, quality of life and sustainability. Based on 59,967 samples of the 2011 Xi'an City Wide Household Travel Survey, this paper investigates the commute efficiency, jobs housing balance and excess commute in Xi'an. It also compares relevant indictors of Xi'an with those in other Chinese and international cities whenever possible. It finds that Xi'an has a shorter actual average commuting distance and higher commute efficiency than most other cities that have been studied in existing literature. Average commuting distance in Xi'an is found to be negatively correlated to jobs resident ratio and total number of employment. Danwei compounds in the city still have decent jobs housing balance and commuting efficiency but this pattern is changing.", "venue": "", "year": 2013.0, "author_names": ["Jiangping Zhou", "Xiaojian Chen", "Wei Huang", "Pengqiu Yu", "Chun Zhang"], "n_citations": 12, "n_key_citations": 0, "score": 0}, {"corpus_id": 168490665, "title": "A Study on Urban Spatial Structure in the Context of the Jobs Housing Balance: A Case of Suzhou, China", "abstract": "Reducing the number of trips and the length of travel is an important objective in the movement towards sustainable development and low carbon cities. In the pursuit of less travel and shorter travel distances, local governments try to achieve a jobs housing balance. In this study, based on survey data from different spatial scales, we use GIS technology and job housing balance index measures, the independent index, average commuting distance and commuting time, and explore the spatial structure of Suzhou City, China. According to research, the overall jobs housing index in Suzhou is 0.75, the average commuting distance is 9.6 km, and the average commuting time is 26 min. There are about 32 of the residents whose commuting distances are more than the average number. This study provides some reference for urban planning and urban spatial optimization for Suzhou City.", "venue": "", "year": 2017.0, "author_names": ["Zhenlong Zhang"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "A Blockchain-based secure PHR data storage and sharing framework", "session_id": 3474748702456429, "user_id": 5464411562773691, "candidates": [{"corpus_id": 233136510, "title": "A Blockchain based secure PHR data storage and sharing framework", "abstract": "The appearance of Covid 19 proves the deficiency of world health care systems to handle exponentially infected patients and to deal with lack of personal health records (PHR) As it is known, data is valuable in such situation. Current solutions for storing and sharing PHR data adopt centralized solutions, such as cloud based totally centralized data centers, which requires a fully trusted third party. Therefore, it suffers from single point of failure, data deleting and network delay. 
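Each record in this sample pairs a query (with a session_id and user_id) against a candidates list whose entries carry corpus_id, title, abstract, venue, year, author_names, n_citations, n_key_citations, and a 0/1 score. A minimal loading sketch under those assumptions, with a placeholder path and a hypothetical helper name, might look like:

import json

def load_search_records(path):
    # One JSON object per line; blank lines are skipped.
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    # "search_sample.jsonl" is a placeholder for a local copy of this data.
    for record in load_search_records("search_sample.jsonl"):
        labeled = [c["title"] for c in record["candidates"] if c.get("score") == 1]
        print(record["query"], "->", len(record["candidates"]), "candidates,",
              len(labeled), "labeled 1")

Nothing in the sketch goes beyond the field names visible in the records above.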
To overcome these issues, we propose in this paper a Blockchain based secure PHR data storage and sharing framework that leverages the benefits of IPFS (Inter Planetary File System) Our aim is to ensure privacy with patient full control over his data and to enhance scalability. To this end, we use steganography to hide sensitive data within PHR data, and then it is uploaded to IPFS network. However, IPFS Hash delivered by IPFS network is divided into n secret shares using Shamir's Secret Sharing (SSS) algorithm, which ensures security. Moreover, Ethereum smart contract automates the execution of access control strategies defined by the owner, as well as, traceability and auditability insurance.", "venue": "2020 6th IEEE Congress on Information Science and Technology (CiSt)", "year": 2020.0, "author_names": ["Ayoub Ghani", "Ahmed Zinedine", "Mohammed el Mohajir"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 213866320, "title": "Industrial blockchain based framework for product lifecycle management in industry 4.0", "abstract": "Abstract Product lifecycle management (PLM) aims to seamlessly manage all products and information and knowledge generated throughout the product lifecycle for achieving business competitiveness. Conventionally, PLM is implemented based on standalone and centralized systems provided by software vendors. The information of PLM is hardly to be integrated and shared among the cooperating parties. It is difficult to meet the requirements of the openness, interoperability and decentralization of the Industry 4.0 era. To address these challenges, this paper proposed an industrial blockchain based PLM framework to facilitate the data exchange and service sharing in the product lifecycle. Firstly, we proposed the concept of industrial blockchain as the use of blockchain technology in the industry with the integration of IoT, M2M, and efficient consensus algorithms. It provided an open but secured information storage and exchange platform for the multiple stakeholders to achieve the openness, interoperability and decentralization in era of industry 4.0. Secondly, we proposed and developed customized blockchain information service to fulfill the connection between a single node with the blockchain network. As a middleware, it can not only process the multi source and heterogeneous data from varied stages in the product lifecycle, but also broadcast the processed data to the blockchain network. Moreover, smart contract is used to automate the alert services in the product lifecycles. Finally, we illustrated the blockchain based application between the cooperating partners in four emerging product lifecycle stages, including co design and co creation, quick and accurate tracking and tracing, proactive maintenance, and regulated recycling. A simulation experiment demonstrated the effectiveness and efficiency of the proposed framework. The results showed that the proposed framework is scalable and efficient, and hence it is feasible to be adopted in industry. With the successful development of the proposed platform, it is promising to provide an effective PLM for improving interoperability and cooperation between stakeholders in the entire product lifecycle.", "venue": "Robotics Comput. Integr. 
Manuf.", "year": 2020.0, "author_names": ["Xiaoping Liu", "Wai Ming Wang", "Hanyang Guo", "Ali Vatankhah Barenji", "Zhi Li", "George Q Huang"], "n_citations": 41, "n_key_citations": 1, "score": 0}, {"corpus_id": 56178064, "title": "Blockchain Based Mobile Edge Computing Framework for Secure Therapy Applications", "abstract": "Mobile edge computing (MEC) is being introduced and leveraged in many domains, but few studies have addressed MEC for secure in home therapy management. To this end, this paper presents an in home therapy management framework, which leverages the IoT nodes and the blockchain based decentralized MEC paradigm to support low latency, secure, anonymous, and always available spatiotemporal multimedia therapeutic data communication within an on demand data sharing scenario. To the best of our knowledge, this non invasive, MEC based IoT therapy platform is first done by our group. This platform can provide a full body joint range of motion data for physically challenged individuals in a decentralized manner. With MEC, the framework can provide therapy diagnostic and analytical data on demand to a large portion of humanity who are either born with disabilities or became disabled due to accidents, war time injuries, or old age. For security, the framework uses blockchain Tor based distributed transactions to preserve the therapeutic data privacy, ownership, generation, storage, and sharing. Our initial test results from a complete implementation of the framework show that it can support a sufficiently large number of users without considerable increase in mean processing time.", "venue": "IEEE Access", "year": 2018.0, "author_names": ["Md Abdur Rahman", "M Shamim Hossain", "George Loukas", "Elham Hassanain", "Syed Sadiqur Rahman", "Mohammed F Alhamid", "Mohsen Guizani"], "n_citations": 74, "n_key_citations": 1, "score": 0}, {"corpus_id": 231699907, "title": "PSO Blockchain based image steganography: towards a new method to secure updating and sharing COVID 19 data in decentralised hospitals intelligence architecture", "abstract": "Secure updating and sharing for large amounts of healthcare information (such as medical data on coronavirus disease 2019 [COVID 19] in efficient and secure transmission are important but challenging in communication channels amongst hospitals. In particular, in addressing the above challenges, two issues are faced, namely, those related to confidentiality and integrity of their health data and to network failure that may cause concerns about data availability. To the authors' knowledge, no study provides secure updating and sharing solution for large amounts of healthcare information in communication channels amongst hospitals. Therefore, this study proposes and discusses a novel steganography based blockchain method in the spatial domain as a solution. The novelty of the proposed method is the removal and addition of new particles in the particle swarm optimisation (PSO) algorithm. In addition, hash function can hide secret medical COVID 19 data in hospital databases whilst providing confidentiality with high embedding capacity and high image quality. 
Moreover, stego images with hash data and blockchain technology are used in updating and sharing medical COVID 19 data between hospitals in the network to improve the level of confidentiality and protect the integrity of medical COVID 19 data in grey scale images, achieve data availability if any connection failure occurs in a single point of the network and eliminate the central point (third party) in the network during transmission. The proposed method is discussed in three stages. Firstly, the pre hiding stage estimates the embedding capacity of each host image. Secondly, the secret COVID 19 data hiding stage uses PSO algorithm and hash function. Thirdly, the transmission stage transfers the stego images based on blockchain technology and updates all nodes (hospitals) in the network. As proof of concept for the case study, the authors adopted the latest COVID 19 research published in the Computer Methods and Programs in Biomedicine journal, which presents a rescue framework within hospitals for the storage and transfusion of the best convalescent plasma to the most critical patients with COVID 19 on the basis of biological requirements. The validation and evaluation of the proposed method are discussed.", "venue": "Multim. Tools Appl.", "year": 2021.0, "author_names": ["A H Mohsin", "A A Zaidan", "Bilal Bahaa Zaidan", "K I Mohammed", "Osamah Shihab Albahri", "Ahmed Shihab Albahri", "M A Alsalem"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 227067087, "title": "PatientDataChain: A Blockchain Based Approach to Integrate Personal Health Records", "abstract": "Currently there is not a single trusted infrastructure used for the exchange and storage of medical data along the healthcare value chain and, thus, there is no platform used for monitoring patients' traceability within the entire healthcare chain. This situation leads to difficult communication and increased procedural costs, and thus it limits healthcare players from developing a better understanding and know how of patients' traceability that could further boost innovation and development of the best fitted health services. PatientDataChain blockchain based technology is a novel approach, based on a decentralized healthcare infrastructure that incorporates a trust layer in the healthcare value chain. Our aim was to provide an integrated vision based on interoperability principles, that relies on the usage of specific sensors from various wearable devices, allowing us to collect specific data from patients' medical records. Interconnecting different healthcare providers, the collected data is integrated into a unitary personal health records (PHR) system, where the patient is the owner of his/her data. The decentralized nature of PatientDataChain, based on blockchain technology, leveraged the proper context to create a novel and improved data sharing and exchange system, which is secure, flexible, and reliable. This approach brings increased benefits to data confidentiality and privacy, while providing secure access to patient medical records. This paper presents the design, implementation, and experimental validation of our proposed system, called PatientDataChain. 
The original contributions of our paper include the definition of the concept of unifying the entire healthcare value chain, the design of the architectural model of the system, the development of the system components, as well as the validation through a proof of concept (PoC) conducted with a medical clinic from Bucharest, using a dataset of 100 patients and over 1000 transactions. The proof of concept demonstrated the feasibility of the model in integrating the personal health records from heterogeneous sources (healthcare systems and sensors) in a unified, decentralized PHR system, with enhanced data exchange among healthcare players.", "venue": "Sensors", "year": 2020.0, "author_names": ["Alexandra Cernian", "Bogdan Tiganoaia", "Ioan Stefan Sacala", "Adrian Pavel", "Alin Iftemi"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 221473230, "title": "Health Information Exchange with Blockchain amid Covid 19 like Pandemics", "abstract": "The COVID 19 pandemic is stress testing existing health information exchange systems. There exists an increasing demand for sharing patient information and efficiently responding to patient medial data requests. Current health information technologies lack data fluidity, especially for remotely sharing medical data beyond their protected, local data storage. This paper presents a blockchain based data sharing framework that leverages the properties of immutability and decentralization to ensure a secure, user centric approach for accessing and controlling access to sensitive medical data. The proposed framework builds its foundations on a peer to peer network fueled by the distributed InterPlanetary File System combined with on chain tagging, and on the use of cryptographic generation techniques for enabling a secure way of sharing medical data. The flow of information is orchestrated by a smart contract deployed on a blockchain based protocol to ensure traceability and data integrity. The effectiveness of the framework is demonstrated with the implementation of the framework over a pilot study.", "venue": "2020 16th International Conference on Distributed Computing in Sensor Systems (DCOSS)", "year": 2020.0, "author_names": ["Klitos Christodoulou", "Panayiotis Christodoulou", "Zinonas Zinonos", "Elias G Carayannis", "Savvas A Chatzichristofis"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 222417579, "title": "Enhancing Vendor Managed Inventory Supply Chain Operations Using Blockchain Smart Contracts", "abstract": "Supply chain networks have grown in complexity and size due to increased globalization leading to a variety of challenges and opportunities for improvement. Optimizing inventory levels and adjusting replenishment policies have significant effects on the operational performance and profitability of supply chains. Vendor Managed Inventory (VMI) is a mutually beneficial arrangement between supplier and buyer, where the supplier is responsible for making inventory and replenishment decisions based on buyers' inventory status. Potential benefits of VMI include reducing inventories, enabling information sharing, eliminating safety stock, and reducing purchasing related costs across the supply chain. In today's supply chains, VMI operations face critical challenges related to data integrity, transparency, traceability, and single point of failure due to its centralized architecture. 
Blockchain technology is a distributed ledger that ensures a transparent, safe, and secure exchange of data among supply chain stakeholders. The advantages of adopting blockchain technology for VMI operations in a supply chain include decentralized control, security, traceability, and auditable time stamped transactions. In this paper, we present a blockchain based approach using smart contracts to transform VMI supply chain operations. We propose a generic framework using Ethereum smart contracts and decentralized storage systems to automate the processes and information exchange and detailed algorithms that capture the interactions among supply chain stakeholders. The smart contract code was developed and tested in Remix environment. We present cost and security analysis incurred by the stakeholders in the supply chain. Adopting a blockchain based solution to VMI operations in supply chains is economically viable and provides a streamlined, secure, trusted, and transparent mode of communication among various stakeholders.", "venue": "IEEE Access", "year": 2020.0, "author_names": ["Ilhaam A Omar", "Raja Jayaraman", "Khaled Salah", "Mazin Debe", "Mohammed A Omar"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 211529289, "title": "Secure and Decentralized Live Streaming using Blockchain and IPFS", "abstract": "Decentralized cloud systems are proving to be much more advantageous than centralized cloud systems. They distribute power away from a central authority, cut down operation cost, have greater fault tolerance, fewer trust requirements between storage providers and data owners and is less prone to attacks. InterPlanetary File System, a protocol to create a content addressable, peer to peer method of storing and sharing hypermedia in a distributed file system can revolutionize how we share the media content over the internet. We provide an overview of the current systems to stream media over the internet and describe various problems that these systems face with regards to media delivery, governance, and distribution. We exhibit, how with the help of IPFS, Blockchain based Smart Contracts and HTTP Live Streaming (HLS) it is possible to minimize, avoid and diminish the problems associated with the traditional media delivery system and how we can improve the overall efficiency of media delivery systems. We explain how the conventional framework of media delivery can be transformed by IPFS based delivery network supported by HLS streaming for all kinds of distribution model (live or on demand) We also propose a novel method to decentralize the cloud storage system using a separate server and client side applications. Keywords IPFS, Internet media delivery networks, HLS, Streaming with IPFS, Distributed Ledger Technologies", "venue": "", "year": "", "author_names": ["Anish Mishra", "Shreya Saha"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 169873635, "title": "Performance Perspective on Private Distributed Ledger Technologies for Industrial Networks", "abstract": "Blockchain based Distributed Ledger Technology (DLT) is a novel paradigm to create tamper resistant execution environments and data storage for distributed applications on top of a peer to peer network. This technology has shown to be of interest in many use cases, especially in industrial processes where multiple shareholders would like to process and share data in a secure and accountable way. 
In this work, we evaluate the performance of a DLT based system via modeling and a quantitative performance evaluation, focusing on the impact of the underlying communication network. Our numerical evaluation is based on the Hyperledger Fabric DLT framework, its benchmarking tool Caliper, and a dedicated test bed, where network properties such as latency or packet loss can be artificially influenced. Our experiments show that the validation of the transactions in a DLT based system is the main contributor in the transaction latency. We also demonstrate that the properties of the communication network can influence the performance largely, even in the case where only one of the participants in the DLT system has poor network access.", "venue": "2019 International Conference on Networked Systems (NetSys)", "year": 2019.0, "author_names": ["Fabien Geyer", "Holger Kinkelin", "Hendrik Folke Leppelsack", "Stefan Liebald", "Dominik Scholz", "Georg Carle", "Dominic A Schupke"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 220258859, "title": "Blockchain Enabled Industrial Internet of Things: Advances, Applications, and Challenges", "abstract": "T revolutionary industry digitization coupled with the proliferation of the Internet of Things causes a paradigm shift for industrial and manufacturing companies, renowned as Smart Industry or the Industrial Internet of Things (IIoT) This concept, also advertised as Industry 4.0, leverages the power of smart machines fused with real time analytics, cyber physical systems, cloud and cognitive computing to capture and exploit massively produced and communicated data. This aims at promoting multi disciplinary business intelligence and supporting efficient quality control and a traceable supply chain, predictive maintenance, enhanced field services, asset tracking, as well as sustainable green practices. An IIoT ecosystem mainly focuses on effectively controlling the physical world comprising smart devices distributed among the entire industry to collect and securely exchange and analyze massive ambient data. While cloud computing constitutes the fertile soil for handling such issues, it however necessitates high end servers and high speed networks to provision storage related/computation related services. In this regard, a centralized cloud enabled IIoT framework is perceived by IoT services as a black box with impeding factors of resilience, adaptability, fault tolerance, trust, security and privacy, maintenance costs, and asthenic time critical IoT applications' support. Stepping toward coping with these challenges, blockchain represents one of the most suitable candidate technologies able to support a secure and distributed IIoT ecosystem. The blockchain is an amalgamation of cryptography, public key infrastructure, and economic modeling, applied to peer to peer networking and decentralized consensus to achieve distributed database synchronization. Its perks of decentralization, immutability, auditability, and fault tolerance render it more attractive to enclasp the benefits of the decentralized framework in the IIoT environment. Various industry solutions and platforms from Lola, COSMOS, Dajie, Filament, Slock.it, SmartAxiom, BlockVerify, Xage Security, Ubirch, Multichain, ShoCard, Chronicled, Uniquid, Riddle and Code, and Datum are already floated in the market for public, private, and federated blockchains to address privacy, monetization, security, trust, identity and data management issues. 
From the perspective of blockchain employment across the wide range of IIoT use cases (e.g. in the food industry, cybersecurity, voting, music, real estate, healthcare, insurance, supply chain and logistics, energy and smart grid, apparel, textile and fashion industry) there exist numerous operational and technical exposures to the development and deployment of IIoT related applications outlining significant challenges that stand in the way of achieving absolute IIoT decentralization using blockchain, given the vast diversity of the devices these applications involve. Such technical challenges include but are not limited to risks and regularity issues as well as other associated integrating factors related to processing, storage, communication, and availability, together with the appropriate role assignment that jointly considers issues of security, privacy, trust and scalability in addition to the choice of suitable consensus algorithms. This special issue of IEEE Internet of Things Magazine (IoTM) has solicited high quality manuscripts that: Describe in depth the breadth of real world blockchain based multi disciplinary IIoT deployments that go in line with the above elaborated special issue. Present actual experiences in resolving contextual blockchain related challenges. Develop and share best practices, vision realizations and lessons learned in this integrated environment. Establish guiding principles for technical, operational and business successes. The special issue received around 40 high quality articles that were general, independent of technical or business specialty, and intended for an audience consisting of all members of the IoT community. The Guest Editors (GEs) assigned these articles to expert reviewers whose comments and reviews were to the point and highly beneficial in improving the quality, readability and presentation of the manuscripts. After the final round of reviews, the GEs have been exposed to quite a difficult selection process of nine out of 11 accepted papers for publication within this SI, whereas the remaining two papers have been highly recommended for publication as regular papers that will appear in upcoming issues of the IoTM. Below is a brief summary for each one of these accepted papers.", "venue": "IEEE Internet of Things Magazine", "year": 2020.0, "author_names": ["Mohamed Abdallah", "Octavia A Dobre", "Pin-Han Ho", "Sohail Jabbar", "Maurice J Khabbaz", "Joel J P C Rodrigues"], "n_citations": 2, "n_key_citations": 0, "score": 0}]} -{"query": "dementia emotion analysis", "session_id": 8920028422125211, "user_id": 3387244930727160, "candidates": [{"corpus_id": 209322762, "title": "Emotion Recognition in Dementia: Advancing technology for multimodal analysis of emotion expression in everyday life", "abstract": "This paper provides an overview of my PhD project that focuses on recognizing emotions in dementia by analyzing multi modal expressions in autobiographical memories of older adults with dementia. The project aims for a better understanding how dementia influences emotional expressions and how dementia differs from the normal aging process. For this reason, spontaneous emotions will be elicited in autobiographical memories in two groups of older adults, one with dementia the other without, for comparison. Audio, video and physiological data will be collected at their home resulting in real life environments. 
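If the 0/1 score on each candidate is read as a relevance label for its query (a plausible reading of this sample, not something the records themselves state), the data supports a quick ranking sanity check such as mean reciprocal rank against a naive most-cited-first ordering. The helpers below are hypothetical names sketched for illustration only:

def reciprocal_rank(ranked_candidates):
    # 1 / position of the first candidate labeled score == 1, or 0.0 if none.
    for position, candidate in enumerate(ranked_candidates, start=1):
        if candidate.get("score") == 1:
            return 1.0 / position
    return 0.0

def mean_reciprocal_rank(records):
    # Baseline ranking: order each query's candidates by raw citation count.
    values = []
    for record in records:
        ranked = sorted(record["candidates"],
                        key=lambda c: c.get("n_citations") or 0,
                        reverse=True)
        values.append(reciprocal_rank(ranked))
    return sum(values) / len(values) if values else 0.0

On the records visible here the baseline is uneven: the labeled jobs-housing paper is also the most cited of its candidates, while the labeled PHR paper has zero citations and would sit at the bottom of a citation-only ordering.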
The emotional expressions can then be analyzed by extracting verbal, non verbal, facial and gestural features from the audio, video and physiological data collected. In addition, a longitudinal study will be conducted with the older adults with dementia to investigate the longitudinal effect of dementia on emotions. A database of the emotional memories of these vulnerable groups will then be developed to contribute to the advancement of technologies for (automatic) multi modal emotion recognition. The database will then be made available for the research community. Lastly, we will also develop visualization and statistical models to assess multi modal patterns of emotion expression in these groups.", "venue": "2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW)", "year": 2019.0, "author_names": ["Deniece S Nazareth"], "n_citations": 1, "n_key_citations": 0, "score": 1}, {"corpus_id": 20512934, "title": "Expressed Emotion in relatives of persons with dementia: a systematic review and meta analysis", "abstract": "Abstract Objectives: Expressed Emotion (EE) refers to a number of key aspects of interpersonal relationships which have been shown to relate to outcomes in relatives of people with health conditions. Design: A systematic review and meta analysis of EE and outcomes in relatives of persons with dementia is reported. Potential research studies were identified via a search of three electronic databases; PsychINFO, MEDLINE and the Web of Science between 1960 and 2015. Results: We reviewed 12 studies investigating correlations between EE and well being in relatives of patients with dementia. Factors hypothesised to influence EE including attributions, social support, coping strategies and relationship quality were also reviewed. Conclusion: High EE relatives were found to have increased levels of burden (Z 6.967, P 0.001) and greater levels of depression (Z 5.842, P 0.001) Compared to low EE relatives, high EE relatives were more likely to attribute the patient's problems to factors that were personal to and controllable by the patient. Relatives with less social support, inefficient coping strategies and a poor relationship with the patients, were more likely to be classified as high EE.", "venue": "Aging mental health", "year": 2017.0, "author_names": ["Roxanne Safavi", "Katherine Berry", "Alison Wearden"], "n_citations": 27, "n_key_citations": 2, "score": 0}, {"corpus_id": 36274369, "title": "Meta Analysis of Facial Emotion Recognition in Behavioral Variant Frontotemporal Dementia", "abstract": "Behavioral disturbances and lack of empathy are distinctive clinical features of behavioral variant frontotemporal dementia (bvFTD) in comparison to Alzheimer disease (AD) The aim of this meta analytic review was to compare facial emotion recognition performances of bvFTD with healthy controls and AD. The current meta analysis included a total of 19 studies and involved comparisons of 288 individuals with bvFTD and 329 healthy controls and 162 bvFTD and 147 patients with AD. Facial emotion recognition was significantly impaired in bvFTD in comparison to the healthy controls (d 1.81) and AD (d 1.23) In bvFTD, recognition of negative emotions, especially anger (d 1.48) and disgust (d 1.41) were severely impaired. Emotion recognition was significantly impaired in bvFTD in comparison to AD in all emotions other than happiness. Impairment of emotion recognition is a relatively specific feature of bvFTD. 
Routine assessment of social cognitive abilities including emotion recognition can be helpful in better differentiating between cortical dementias such as bvFTD and AD.", "venue": "Journal of geriatric psychiatry and neurology", "year": 2016.0, "author_names": ["Emre Bora", "Dennis Velakoulis", "Mark Walterfang"], "n_citations": 59, "n_key_citations": 1, "score": 0}, {"corpus_id": 143435012, "title": "Responding the \"Wrong Way\" The Emotion Work of Caring for a Family Member With Dementia.", "abstract": "BACKGROUND AND OBJECTIVES Although it is generally acknowledged that the changing behaviors of some people living with dementia can be emotionally exhausting for family members, there has been little research on how carers actually interpret and manage their emotional responses when interacting with persons with dementia in context and over time. Applying the concept of emotion work, this analysis examines when and where carers feel they are responding \"the right way\" to their kin and when and where they resist normative emotions around family care. RESEARCH DESIGN AND METHODS Semi structured qualitative interviews (N 20) and diaries (N 11) were conducted with, and collected from, family carers in Manitoba, Canada to explore how they negotiate their emotions and emotional displays when caring for a family member whose behaviors are changing. RESULTS Carers expressed feelings of frustration, anger, and resentment and identified putting on a positive attitude, putting the person with dementia first, protecting the person with dementia, and avoiding conflict and arguing as the \"right way\" to respond to these feelings. They identified challenges responding the \"right way,\" however, in relation to household chores, and situations that also involved isolation, fear, verbal aggression, and fatigue. DISCUSSION AND IMPLICATIONS Programs and policies must recognize the complex emotion work of family carers. There is a need for more nuanced education materials, support with household tasks, inclusion of carers' emotional needs in transition planning, and support for carers to exit the caring role when necessary.", "venue": "The Gerontologist", "year": 2019.0, "author_names": ["Rachel V Herron", "Laura Megan Funk", "Dale Spencer"], "n_citations": 9, "n_key_citations": 0, "score": 0}, {"corpus_id": 217362846, "title": "Expressed Emotion in caregivers of persons with dementia and the relationship with psychological outcomes in caregivers and persons with dementia", "abstract": "s screened (N 213) Records excluded (N 3129) Full text articles assessed for eligibility (N 65) Full text articles excluded, due to duplication or inadequate measures. (N 53) Studies included in narrative synthesis (N 12) Studies included in quantitative synthesis (meta analysis) (N 6) Records excluded, due to absent sample type and/or inadequate measures. 
(N 148)", "venue": "", "year": 2018.0, "author_names": ["Roxanne Safavi"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 20442610, "title": "How Preserved is Emotion Recognition in Alzheimer Disease Compared With Behavioral Variant Frontotemporal Dementia?", "abstract": "Background: Emotion deficits are a recognised biomarker for behavioural variant frontotemporal dementia (bvFTD) but recent studies have reported emotion deficits also in Alzheimer's disease (AD) Methods: A hundred and twenty three participants (33 AD, 60 bvFTD, 30 controls) were administered a facial emotion recognition test, to investigate the clinical factors influencing the diagnostic distinction on this measure. Binomial regression analysis revealed that facial emotion recognition in AD was influenced by disease duration and MMSE, whereas the same was not true for bvFTD. Based on this information, we median split the AD group on disease duration (3 years) or MMSE (24) and compared the facial emotion recognition performance of mild AD, moderate AD, bvFTD patients and controls. Results: Results showed that very mild AD performed consistently at control levels for all emotions. By contrast, mild/moderate AD and bvFTD were impaired compared to controls on most emotions. Interestingly, mild/moderate AD were significantly impaired compared to very mild AD on total score, anger and sadness subscores. Logistic regression analyses corroborated these findings with ~94% of very mild AD being successfully distinguished from bvFTD at presentation, while this distinction was reduced to ~78% for mild/moderate AD. Conclusions: Facial emotion recognition in AD is influenced by disease progression, with very mild AD being virtually intact for emotion performance. Mild/moderate AD and bvFTD show consistent impairment in emotion recognition, with bvFTD being worse. A disease progression of over 3 years or a MMSE lower than 24 should warrant caution to put too much emphasis on emotion recognition performance in the diagnostic distinction of AD and bvFTD.", "venue": "Alzheimer disease and associated disorders", "year": 2015.0, "author_names": ["Maxime Bertoux", "Leonardo Cruz de Souza", "Marie Sarazin", "Aurelie Funkiewiez", "Bruno Dubois", "Michael Hornberger"], "n_citations": 43, "n_key_citations": 3, "score": 0}, {"corpus_id": 24247763, "title": "Differentiating between right lateralised semantic dementia and behavioural variant frontotemporal dementia: an examination of clinical characteristics and emotion processing", "abstract": "Background and purpose Right lateralised semantic dementia (right SD) and behavioural variant frontotemporal dementia (bvFTD) appear clinically similar, despite different patterns of underlying brain changes. This study aimed to elucidate distinguishing clinical and cognitive features in right SD versus bvFTD, emphasising emotion processing and its associated neural correlates. Methods 12 patients with right SD and 19 patients with bvFTD were recruited. Clinical features were documented. All patients were assessed on standardised neuropsychological tests and a facial emotion processing battery. Performance was compared to 20 age matched and education matched controls. Grey matter intensity was related to emotion processing performance using whole brain voxel based morphometry analysis. Results Patients with right SD exhibited disproportionate language dysfunction, prosopagnosia and a suggestion of increased obsessive personality/behavioural changes versus patients with bvFTD. 
In contrast, patients with bvFTD demonstrated pronounced deficits in attention/working memory, increased apathy and greater executive dysfunction, compared to patients with right SD. Decreased empathy, disinhibition and diet changes were common to both dementia subtypes. Emotion processing deficits were present in both FTD syndromes but were associated with divergent patterns of brain atrophy. In right SD, emotion processing dysfunction was associated with predominantly right medial and lateral temporal integrity, compared to mainly left temporal, inferior frontal and orbitofrontal and right frontal gyrus integrity in bvFTD. Conclusions This study demonstrates comparable deficits in facial emotion processing in right SD and bvFTD, in keeping with their similar clinical profiles. These deficits are attributable to divergent neural substrates in each patient group, namely, right lateralised regions in right SD, versus predominantly left lateralised regions in bvFTD.", "venue": "Journal of Neurology, Neurosurgery Psychiatry", "year": 2014.0, "author_names": ["Jody Kamminga", "Fiona Kumfor", "James R Burrell", "Olivier Piguet", "John R Hodges", "Muireann Irish"], "n_citations": 71, "n_key_citations": 3, "score": 0}, {"corpus_id": 152111250, "title": "Appraisal of caregiving burden, expressed emotion, and psychological distress in families of people with dementia: A systematic review", "abstract": "xiii Chapter 1: Introduction 1 Statement of the Problem and General Aims 1 Background and Significance 4 Dementia 4 Caregiver Burden 5 Coping and Expressed Emotion 7 Psychological Distress 11 Study Rationale and Research Objectives 12 Chapter 2: Methodology 15 Review and Analysis Procedures 15 Search Applications 15 Measures 15 Caregiver Burden 16 Coping and Expressed Emotion 16 Psychological Distress 16 Organization of the Literature Table 17 Analysis Procedures .17 Construction of the Data Tables 18 Restrictions Applied to Data for Study Inclusion 19 Chapter 3: Results 20 Measurement Instruments 20 Caregiver Burden .20 Zarit Burden Interview .20 Caregiver Burden Inventory .21 Expressed Emotion 21 Camberwell Family Interview .21 Five Minute Speech Sample .22 Level of Expressed Emotion .22 Brief Coping Orientations to Problems Experienced .22", "venue": "", "year": 2015.0, "author_names": ["Susan Sprokay"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 205301146, "title": "The association of eating performance and environmental stimulation among older adults with dementia in nursing homes: A secondary analysis.", "abstract": "BACKGROUND Nursing home residents with dementia experience increased risk for compromised eating performance due to intrapersonal, interpersonal, and environmental factors. Environmental stimulation is physical, social, and/or sensory stimulation present in the environment that can potentially trigger individuals' emotion or motivate physical reactions. Beyond the personal factors, there is a lack of evidence on how environmental stimulation influences individuals' eating performance at mealtimes. OBJECTIVES This study examined the association between environmental stimulation and eating performance among nursing home residents with dementia. DESIGN This study was a secondary analysis using baseline videos selected from a communication intervention study, where videos were recorded to capture staff resident interactions during care activities for nursing home residents with dementia. 
Videos were included in this study only if residents demonstrated eating activities at mealtimes. SAMPLE AND SETTING A total of 36 videos were selected (mean length=4min) The sample included 15 residents with dementia (mean age=86) and 19 certified nursing assistants (mean age=36) in 8 nursing homes. METHODS The dependent variable was eating performance as measured by the Level of Eating Independence scale (range: 15 36, with higher scores indicating better eating performance) The independent variables were characteristics of environmental stimulation measured by the Person Environment Apathy Rating Environment subscale (stimulation clarity, stimulation strength, stimulation specificity, interaction involvement, physical accessibility, and environmental feedback) Each characteristic was rated on a 1 4 scale with higher scores indicating more desirable environmental stimulation. Multilevel models were used to examine the association between eating performance and environmental stimulation, adjusting for resident characteristics (i.e. age, gender, dementia stage, function, comorbidity, psychoactive medication use) and nesting effects of residents and staff. RESULTS Resident participants demonstrated moderate levels of eating performance (M=27.08, SD=5.16) Eating performance was significantly lower among older residents, those with more advanced dementia, and higher comorbidity. After controlling for resident characteristics, eating performance was significantly associated with stimulation specificity (how the stimulation is delivered and tailored to the resident) and was not associated with other environmental stimulation characteristics. For each 1 point increase in stimulation specificity, eating performance increased by 8.78 points (95% CI=0.59, 16.97) CONCLUSIONS Environmental stimulation that is personally tailored to a resident' needs and preferences and directly offered to a resident contributed to better eating performance among residents with dementia. The findings will direct future development and implementation of person directed mealtime care programs and dining environment arrangements for residents with dementia in nursing homes.", "venue": "International journal of nursing studies", "year": 2017.0, "author_names": ["Wen Liu", "Ying-Ling Jao", "Kristine Williams"], "n_citations": 17, "n_key_citations": 0, "score": 0}, {"corpus_id": 7004486, "title": "Emotion Work in Family Caregiving for Persons with Dementia", "abstract": "Emotion work enhances emotional well being and emotional support in relationships between two people. Emotion work is a part of family work but has not been described in the context of caring for a family member with dementia. Content analysis applied to 11 interviews of informal caregivers describing their interactions with a person with dementia resulted in four categories of emotion work: (1) managing feelings, (2) weighing options, (3) being parental, and (4) ensuring emotional well being. 
Caregivers performed emotion work to meet the feeling rules of being a good caregiver, but often with emotional dissonance between the caregivers' true feelings.", "venue": "Issues in mental health nursing", "year": 2013.0, "author_names": ["Cherie Simpson", "Gayle J Acton"], "n_citations": 18, "n_key_citations": 1, "score": 0}]} -{"query": "bien etre au travail", "session_id": 3485834849585592, "user_id": 711327756693273, "candidates": [{"corpus_id": 219783791, "title": "Influence du leadership habilitant sur le bien etre au travail et l'engagement organisationnel etude comparative entre une organisation habilitante et une organisation classique", "abstract": "Resume Cet article propose une etude comparative entre deux types d'organisations afin d'examiner l'influence du style de leadership habilitant sur le bien etre et l'engagement affectif des employes. Sur la base d'un questionnaire rempli par 428 employes, les resultats mettent en evidence l'effet positif du leadership habilitant sur l'engagement affectif des collaborateurs, via une mediation partielle du bien etre. Contrairement a nos attentes, cet effet n'est pas plus fort au sein de l'organisation habilitante compare a l'organisation classique La mise en evidence d'effets specifiques pour chacune des quatre dimensions du leadership habilitant (Ahearne et al. 2005) apporte des eclairages theoriques nouveaux et invite les organisations a favoriser des pratiques manageriales s'appuyant sur le sens au travail et la confiance.", "venue": "", "year": 2020.0, "author_names": ["Alain Caille", "Nathalie Courtois", "J -M Galharret", "Christine Jeoffrion"], "n_citations": 2, "n_key_citations": 1, "score": 1}, {"corpus_id": 213380164, "title": "Enseigner ce que l'on est quand la concordance de valeurs rime avec bien etre au travail. Le cas des enseignants d'EPS de l'academie de Lille", "abstract": "L'enseignant est, dans son exercice professionnel, guide par des motivations personnelles qui se nourrissent de ses propres valeurs. Celles ci se traduisent par des comportements, des discours et des attitudes et in fine, caracterisent un style pedagogique. Leur importance est relative et cree une hierarchie pouvant etre differente d'un enseignant a l'autre. Des lors, se pose la question de savoir si certaines valeurs permettraient d'etre davantage en bien etre au travail. Plus encore, le fait d'agir en coherence par rapport a ses valeurs dans son enseignement serait il un facteur propice a ce bien etre L'objectif de la these consiste a etudier les relations entre le bien etre au travail et les valeurs des enseignants d'Education Physique et Sportive (EPS) En s'inscrivant dans le cadre theorique des valeurs de base de la personne (Schwartz, 1992) un outil de mesure a ete concu pour examiner les valeurs des enseignants d'EPS dans le contexte particulier de l'enseignement de l'EPS avec 599 enseignants d'EPS. Ensuite, le travail a ete mene en deux temps. En premier lieu, 396 enseignants d'EPS de l'academie de Lille ont complete un questionnaire permettant d'identifier leur systeme de valeurs general, leur systeme de valeurs operationnalise en EPS et leur niveau de bien etre subjectif au travail. Les resultats issus des analyses statistiques multifactorielles montrent que les valeurs sont determinantes pour expliquer le bien etre au travail. Ainsi, ils revelent que les valeurs d'ouverture au changement et de depassement de soi sont plus vertueuses que les valeurs de continuite pour le bien etre des enseignants d'EPS. 
Si la nature des valeurs permet, en partie, d'expliquer le bien etre au travail, le fait d'agir en accord avec son systeme general de valeurs est un facteur plus determinant. Ainsi, la concordance entre ses valeurs et ses pratiques professionnelles apparait comme un objectif prioritaire pour ameliorer le bien etre au travail. De plus, les resultats permettent d'identifier quatre profils caracteristiques d'enseignants selon leurs systemes de valeurs et leur niveau de bien etre les harmonieux, les compositeurs, les desaccordes et les sans partitions. Parallelement a ces enquetes, douze entretiens semi directifs ont ete menes aupres d'enseignants d'EPS typiques des profils identifies (trois par profil) Les resultats issus de l'analyse des entretiens permettent non seulement d'affiner la comprehension des profils d'enseignants d'EPS mais egalement de mieux comprendre le lien entre leurs systemes de valeurs et leur niveau de bien etre au travail. Par ailleurs, les resultats revelent que le partage de valeurs avec ses pairs est un facteur mediateur du bien etre au travail des enseignants d'EPS. En conclusion, ce travail de recherche base sur une methodologie mixte permet d'amorcer une reflexion pedagogique et didactique autour de l'importance des valeurs et de leur concordance dans l'enseignement. Il souleve egalement l'importance de clarifier collectivement les valeurs au sein des equipes pedagogiques. Une reflexion et un travail sur ces deux aspects devraient permettre d'ameliorer le bien etre au travail des enseignants.", "venue": "", "year": 2019.0, "author_names": ["Clement Llena"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 226881213, "title": "ENTRE PARTAGE DE VECUS, SOUTIEN SOCIAL ET BIEN ETRE AU TRAVAIL L'EXPERIENCE D'UN GROUPE DE PAROLE MENE AUPRES D'ENSEIGNANTS", "abstract": "Le soutien social est un facteur frequemment cite pour favoriser le bien etre psychologique au travail chez les enseignants, il peut etre favorise dans le cadre d'activites collectives offertes sur le lieu de travail. Dans une perspective systemique, nous presentons les resultats d'une recherche qualitative abordant les perceptions des enseignants apres leur participation a des groupes de parole qui avaient l'objectif de developper leur bien etre au travail. Les resultats soulignent l'apport de cette activite sur leur reseau de soutien, mais aussi l'importance du cadre et du partage du vecu entre collegues.", "venue": "", "year": 2019.0, "author_names": ["Caterina Mamprin"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 226854872, "title": "L'utilisation des technologies nomades, vectrice de bien etre au travail Une approche par la methode QCA.", "abstract": "Les technologies numeriques prennent un caractere ambivalent. Selon la maniere dont elles sont percues par les individus, elles sont porteuses de bien etre pour les uns et/ou generatrices de stress pour les autres. Dans le cadre de la transformation numerique du travail, bien etre au travail et technostress sont des defis majeurs a relever pour les organisations. Cette recherche tente ainsi de reflechir plus en profondeur sur l'influence des technologies nomades sur le bien etre au travail (BET) et le technostress. L'objectif est de mettre en exergue au travers de la Methode de l'Analyse Comparee (QCA) et de l'analyse de discours, les combinaisons de conditions qui generent du BET et du technostress dans le cadre de l'utilisation de technologies nomades au travail. 
Ainsi, en mobilisant les apports theoriques de l'analyse de l'activite et de l'ergonomie constructive, il n'est pas question de proposer une tendance de variables d'influence mais bien une relation causale d'une combinaison de ces variables, regulees par les individus et engagees dans la construction d'environnements capacitants. Par ailleurs, l'approche salutogenique de cette communication propose d'ouvrir les discussions sur l'idee qu'a l'instar du technostress, les technologies numeriques pourraient s'envisager comme vectrices de techno BET et revenir a un fondamental leur vocation a aider les individus dans leur tache.", "venue": "", "year": 2019.0, "author_names": ["Pierre Loup", "Marie-Laure Weber", "Florence Nande"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213488015, "title": "La justice organisationnelle du groupe et le bien etre au travail une approche dynamique", "abstract": "La justice globale du groupe fait reference aux perceptions du traitement juste ou injuste recu de la part des membres du groupe de travail ou des collegues de travail. Les perceptions de justice organisationnelle ont longtemps ete considerees comme stables dans le temps, qui ne pouvaient changer que sur le niveau interindividuel. Cependant, il a ete demontre empiriquement que les perceptions de justice varient dans le temps sur les deux niveaux inter et intra individuels que ce soit d'une periode a une autre, d'une semaine a une autre, d'un jour a un autre et meme au cours d'une seule et meme journee. La justice organisationnelle est connue pour avoir une influence significative sur un certain nombre de comportements et attitudes au travail tels que la performance, la satisfaction, l'intention de depart, l'engagement ou encore l'implication au travail. Recemment, il a ete revele que les perceptions de justice avaient egalement une influence sur le sentiment de bien etre au travail des individus. Ainsi, la presente recherche examine l'effet des perceptions de justice globale du groupe sur le bien etre au travail en adoptant une approche dynamique appelee la methode du journal personnel ou la Diary Study. Cette recherche etudie egalement les mecanismes explicatifs de cette relation.", "venue": "", "year": 2019.0, "author_names": ["S Dounia Bensemmane"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 213965376, "title": "Bien etre au travail et performance de l'entreprise une analyse par les paradoxes", "abstract": "A l'heure ou les entreprises, confrontees a de nombreux bouleversements, sont plus que jamais en recherche de performance, et a l'heure ou les salaries, denoncant les conditions de travail et les pratiques manageriales, n'ont jamais ete aussi demandeurs de bien etre au travail, reconcilier le bien etre des salaries et la performance de l'entreprise est un sujet d'actualite et un enjeu strategique pour les entreprises.La revue de la litterature et les resultats d'une analyse qualitative exploratoire menee a l'aide d'entretiens semi directifs aupres de 55 salaries du groupe RESSIF (Reseau des Services Sociaux Interentreprises de France) nous amenent a envisager le bien etre au travail et la performance de l'entreprise en termes de meta perspective paradoxale et a proposer des voies de resolution de ce paradoxe organisationnel.Pour ce faire, nous avons mene deux etudes quantitatives. La premiere etude est basee sur 5300 observations issues de l'enquete conditions de travail du Ministere francais du travail. 
La deuxieme est basee sur les reponses de 270 entreprises a un questionnaire en ligne portant sur les pratiques de gestion des ressources humaines.Finalement, nos resultats empiriques concluent que les facteurs permettant de concilier le bien etre au travail et la performance de l'entreprise sont, parmi les conditions de travail, la lutte contre l'intensite et l'insoutenabilite du travail et, parmi les pratiques de ressources humaines, le developpement de la participation des salaries aux decisions de l'entreprise, la formation, les promotions et perspectives de carriere et, dans une moindre mesure, l'evaluation de la performance.Pour conclure ce travail, sont presentees les contributions theoriques, methodologiques et manageriales, ainsi que les voies futures de recherche.", "venue": "", "year": 2019.0, "author_names": ["Nathalie Bernard"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 57815926, "title": "Bien etre au travail et qualite de vie des enseignants quelles differences selon l'anciennete", "abstract": "Resume Objectif de l'etude L'enseignement est une profession a risque d'epuisement professionnel. Des donnees preliminaires suggerent que l'age augmenterait ce risque. L'objectif a ete d'evaluer les differences de bien etre au travail et general des enseignants selon leur anciennete et d'en deduire des preconisations. Methodes Dans le cadre de l'enquete postale nationale Qualite de vie des enseignants (Fondation MGEN/Education nationale) 2320 enseignants des 1 er et 2 nd degres ont ete interroges sur leur bien etre au travail (bilan de l'experience professionnelle, evolution de l'exercice du metier depuis cinq ans, trois dimensions du Maslach Burnout Inventory) et general (qualite de vie, sante percue et quatre scores du questionnaire WHOQOL BREF) Ces indicateurs de bien etre ont ete modelises en fonction de l'anciennete categorisee en trois classes 5 ans, 6 29 ans, 30 ans) dans des modeles de regression ajustes sur divers facteurs sociodemographiques et professionnels. Resultats Par rapport aux enseignants plus experimentes, les enseignants en debut de carriere avaient des conditions d'exercice moins favorables et un score de sante environnementale moins bon 3 points 95 %IC 5,1) 1,0) p 0,005) Les enseignants en fin de carriere etaient plus susceptibles que leurs homologues en milieu de carriere de juger l'exercice du metier de plus en plus difficile (OR 2,6 [2,0 3,4] p burnout Ils etaient moins satisfaits de leur qualite de vie (OR 0,7 [0,5 0,9] p 0,009) de leur sante (OR 0,7 [0,5 0,9] p 0,002) notamment physique 5,4 points 7,1) 3,8) p p 0,001) Discussion Cette etude plaide en faveur d'un affaiblissement du bien etre des enseignants en fin de carriere et appuie l'interet de realiser des actions de prevention et d'accompagnement cible. Une attention doit aussi etre portee aux enseignants en debut de carriere qui peuvent etre confrontes a des contextes particulierement difficiles malgre leur inexperience.", "venue": "", "year": 2017.0, "author_names": ["Laurent Zavidovique", "Fabien Gilbert", "Marie-Noel Vercambre-Jacquot"], "n_citations": 6, "n_key_citations": 0, "score": 0}, {"corpus_id": 56971280, "title": "Mesurer le bien etre au travail construction et validation factorielle du BET", "abstract": "Resume Objectif de l'etude Cet article porte sur le developpement d'une mesure de bien etre au travail. Plus precisement, notre objectif etait de mesurer le bien etre au travers des manifestations du bien etre hedonique et eudemonique. 
La mesure presentee comporte deux dimensions du bien etre hedonique les emotions positives au travail, la satisfaction au travail et une dimension large de bien etre eudemonique, nommee fonctionnement optimal. Methode Les salaries de trois entreprises francaises ont complete notre mesure de bien etre au travail (BET) une breve mesure de satisfaction envers l'environnement de travail, une mesure de soutien social percu et une mesure de stress percu. Resultats L'analyse factorielle confirmatoire confirme la presence de trois facteurs latents distincts interpretes en termes de d'emotions positives au travail, de satisfaction au travail et de fonctionnement optimal au travail. Ces trois sous echelles presentent une bonne consistance interne et une validite de critere adequate. Les scores des trois dimensions du bien etre au travail sont positivement associes aux scores de satisfaction envers l'environnement de travail et de soutien social percu. Conclusion En conclusion, notre echelle de bien etre au travail (BET) presente des qualites psychometriques acceptables. Ce questionnaire peut etre utilise par des chercheurs et des praticiens dans l'examen de la sante positive au travail.", "venue": "", "year": 2017.0, "author_names": ["Julie Collange", "R Gaucher", "Maya George", "L Saunder", "E Albert"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 225504242, "title": "Claudia Senik, Bien etre au travail. Ce qui compte", "abstract": "Comment ameliorer le bien etre au travail C'est une question qui occupe le devant de la scene sociale aujourd'hui, tant la perception d'une violence au travail, c'est a dire de comportements hostiles (mauvais traitements, remarques desobligeantes, moqueries, mepris, propos sexistes, etc. reste pregnante. Claudia Senik, specialiste de l'economie du bonheur a l'Ecole d'economie de Paris et a Sorbonne Universite et co directrice de l'Observatoire du Bien etre, s'interroge egalement. Elle ins.", "venue": "", "year": 2020.0, "author_names": ["Claire Federspiel"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 219412711, "title": "Les pratiques de bien etre au travail dans les entreprises au Maroc: enseignements d'une etude de cas", "abstract": "L'apparition de certaines maladies professionnelles et risques psychosociaux preoccupe les DRHs, les menacant de perdre leur capital humain. Les pratiques de management du bien etre au travail mises en place apportent des ameliorations remarquables en faveur des employes. En effet, Dagenais Desmarais et al. (2006) soulignent que le bien etre est un vecteur indeniable de l'efficacite organisationnelle. Apres une synthese de l'ancrage theorique du bien etre au travail, sont ensuite presentes la methodologie mobilisee ainsi que les resultats d'une etude de cas realisee aupres des cadres d'une entreprise privee du secteur de l'immobilier au Maroc.", "venue": "", "year": 2020.0, "author_names": ["M Orabi"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "answer and verify", "session_id": 1675602196831270, "user_id": 1617895971556684, "candidates": [{"corpus_id": 52041587, "title": "Read Verify: Machine Reading Comprehension with Unanswerable Questions", "abstract": "Machine reading comprehension with unanswerable questions aims to abstain from answering when no answer can be inferred. In addition to extract answers, previous works usually predict an additional \"no answer\" probability to detect unanswerable cases. 
However, they fail to validate the answerability of the question by verifying the legitimacy of the predicted answer. To address this problem, we propose a novel read then verify system, which not only utilizes a neural reader to extract candidate answers and produce no answer probabilities, but also leverages an answer verifier to decide whether the predicted answer is entailed by the input snippets. Moreover, we introduce two auxiliary losses to help the reader better handle answer extraction as well as no answer detection, and investigate three different architectures for the answer verifier. Our experiments on the SQuAD 2.0 dataset show that our system obtains a score of 74.2 F1 on test set, achieving state of the art results at the time of submission (Aug. 28th, 2018)", "venue": "AAAI", "year": 2019.0, "author_names": ["Minghao Hu", "Furu Wei", "Yuxing Peng", "Zhen Xian Huang", "Nan Yang", "Ming Zhou"], "n_citations": 95, "n_key_citations": 17, "score": 1}, {"corpus_id": 19186315, "title": "Multi Passage Machine Reading Comprehension with Cross Passage Answer Verification", "abstract": "Machine reading comprehension (MRC) on real web data usually requires the machine to answer a question by analyzing multiple passages retrieved by search engine. Compared with MRC on a single passage, multi passage MRC is more challenging, since we are likely to get multiple confusing answer candidates from different passages. To address this problem, we propose an end to end neural model that enables those answer candidates from different passages to verify each other based on their content representations. Specifically, we jointly train three modules that can predict the final answer based on three factors: the answer boundary, the answer content and the cross passage answer verification. The experimental results show that our method outperforms the baseline by a large margin and achieves the state of the art performance on the English MS MARCO dataset and the Chinese DuReader dataset, both of which are designed for MRC in real world settings.", "venue": "ACL", "year": 2018.0, "author_names": ["Yizhong Wang", "Kai Liu", "Jing Liu", "W He", "Yajuan Lyu", "Hua Wu", "Sujian Li", "Haifeng Wang"], "n_citations": 67, "n_key_citations": 11, "score": 0}, {"corpus_id": 52010325, "title": "Knowledge as A Bridge: Improving Cross domain Answer Selection with External Knowledge", "abstract": "Answer selection is an important but challenging task. Significant progresses have been made in domains where a large amount of labeled training data is available. However, obtaining rich annotated data is a time consuming and expensive process, creating a substantial barrier for applying answer selection models to a new domain which has limited labeled data. In this paper, we propose Knowledge aware Attentive Network (KAN) a transfer learning framework for cross domain answer selection, which uses the knowledge base as a bridge to enable knowledge transfer from the source domain to the target domains. Specifically, we design a knowledge module to integrate the knowledge based representational learning into answer selection models. The learned knowledge based representations are shared by source and target domains, which not only leverages large amounts of cross domain data, but also benefits from a regularization effect that leads to more general representations to help tasks in new domains. To verify the effectiveness of our model, we use SQuAD T dataset as the source domain and three other datasets (i.e. 
Yahoo QA, TREC QA and InsuranceQA) as the target domains. The experimental results demonstrate that KAN has remarkable applicability and generality, and consistently outperforms the strong competitors by a noticeable margin for cross domain answer selection.", "venue": "COLING", "year": 2018.0, "author_names": ["Yang Deng", "Ying Shen", "Min Yang", "Yaliang Li", "Nan Du", "Wei Fan", "Kai Lei"], "n_citations": 20, "n_key_citations": 5, "score": 0}, {"corpus_id": 2803049, "title": "Verify Consistency between Security Policy and Firewall Policy with Answer Set Programming", "abstract": "Firewalls are core elements in network security, the effectiveness of firewall security is dependent on configuring firewall policy correctly.Firewall policy is a lower level policy which describes how firewall actually implements security policy. Security policy is a higher level policy which defines the access that will be permitted or denied from the trusted network. Compare with software engineer, security policy is a design, firewall policy is a set of codes. It is useful to discover inconsistency between security policy and firewall policy. In this paper, we present a method of verifying consistency between security policy and firewall policy, which applies the idea of model checking. First of all,two policies and the consistency are represented with logic programs. Then the verification is applied by testing whether the logic formula of the consistency is satisfied in the semantics of the logic programs. Furthermore, We prove that the method has an unique answer which can be computed in polynomial time.", "venue": "2008 International Conference on Computer Science and Software Engineering", "year": 2008.0, "author_names": ["Yiwen Liang", "Wenjun Deng"], "n_citations": 5, "n_key_citations": 0, "score": 0}, {"corpus_id": 12999574, "title": "Scaling short answer grading by combining peer assessment with algorithmic scoring", "abstract": "Peer assessment helps students reflect and exposes them to different ideas. It scales assessment and allows large online classes to use open ended assignments. However, it requires students to spend significant time grading. How can we lower this grading burden while maintaining quality? This paper integrates peer and machine grading to preserve the robustness of peer assessment and lower grading burden. In the identify verify pattern, a grading algorithm first predicts a student grade and estimates confidence, which is used to estimate the number of peer raters required. Peers then identify key features of the answer using a rubric. Finally, other peers verify whether these feature labels were accurately applied. This pattern adjusts the number of peers that evaluate an answer based on algorithmic confidence and peer agreement. We evaluated this pattern with 1370 students in a large, online design class. With only 54% of the student grading time, the identify verify pattern yields 80 90% of the accuracy obtained by taking the median of three peer scores, and provides more detailed feedback. A second experiment found that verification dramatically improves accuracy with more raters, with a 20% gain over the peer median with four raters. However, verification also leads to lower initial trust in the grading system. 
The identify verify pattern provides an example of how peer work and machine learning can combine to improve the learning experience.", "venue": "L@S", "year": 2014.0, "author_names": ["Chinmay Kulkarni", "Richard Socher", "Michael S Bernstein", "Scott R Klemmer"], "n_citations": 78, "n_key_citations": 2, "score": 0}, {"corpus_id": 26149743, "title": "Interactive Connectivity Establishment (ICE) A Protocol for Network Address Translator (NAT) Traversal for Offer/Answer Protocols", "abstract": "This document describes a protocol for Network Address Translator (NAT) traversal for multimedia session signaling protocols based on the offer/answer model, such as the Session Initiation Protocol (SIP) This protocol is called Interactive Connectivity Establishment (ICE) ICE makes use of existing protocols, such as Simple Traversal of UDP Through NAT (STUN) and Traversal Using Relay NAT (TURN) ICE makes use of STUN in peer to peer cooperative fashion, allowing participants to discover, create and verify mutual connectivity.", "venue": "RFC", "year": 2010.0, "author_names": ["Jonathan D Rosenberg"], "n_citations": 430, "n_key_citations": 51, "score": 0}, {"corpus_id": 4249242, "title": "A Probabilistic Optimization Framework for the Empty Answer Problem", "abstract": "We propose a principled optimization based interactive query relaxation framework for queries that return no answers. Given an initial query that returns an empty answer set, our framework dynamically computes and suggests alternative queries with less conditions than those the user has initially requested, in order to help the user arrive at a query with a non empty answer, or at a query for which no matter how many additional conditions are ignored, the answer will still be empty. Our proposed approach for suggesting query relaxations is driven by a novel probabilistic framework based on optimizing a wide variety of application dependent objective functions. We describe optimal and approximate solutions of different optimization problems using the framework. We analyze these solutions, experimentally verify their efficiency and effectiveness, and illustrate their advantage over the existing approaches.", "venue": "Proc. VLDB Endow.", "year": 2013.0, "author_names": ["Davide Mottin", "Alice Marascu", "Senjuti Basu Roy", "Gautam Das", "Themis Palpanas", "Yannis Velegrakis"], "n_citations": 49, "n_key_citations": 1, "score": 0}, {"corpus_id": 7903394, "title": "XACML 3.0 in Answer Set Programming", "abstract": "We present a systematic technique for transforming XACML 3.0 policies in Answer Set Programming (ASP) We show that the resulting logic program has a unique answer set that directly corresponds to our formalisation of the standard semantics of XACML 3.0 from [9] We demonstrate how our results make it possible to use off the shelf ASP solvers to formally verify properties of access control policies represented in XACML, such as checking the completeness of a set of access control policies and verifying policy properties.", "venue": "LOPSTR", "year": 2012.0, "author_names": ["Carroline Dewi Puspa Kencana Ramli", "Hanne Riis Nielson", "Flemming Nielson"], "n_citations": 15, "n_key_citations": 2, "score": 0}, {"corpus_id": 14469303, "title": "Impact of automated short answer marking on students' learning: IndusMarker, a case study", "abstract": "IndusMarker is an automated short answer marking system based on structure editing and structure matching rather than extensive use of linguistic features analysis. 
Since IndusMarker cannot guarantee 100% human system agreement rate, the use of IndusMarker has therefore been limited to conducting practice tests. It was expected that such a use of IndusMarker will lead to improvements in student learning and instructor student interactions. The main aim of this paper is to verify these claims. The results indicate that such a use of IndusMarker leads to improvements in both student learning and instructor student interactions. In addition, IndusMarker is also shown to give reasonably high human system agreement rates even after the removal of all linguistic analysis features from the software.", "venue": "2013 5th International Conference on Information and Communication Technologies", "year": 2013.0, "author_names": ["Raheel Siddiqi"], "n_citations": 4, "n_key_citations": 0, "score": 0}, {"corpus_id": 1543877, "title": "Answer type validation in question answering systems", "abstract": "In open domain question answering systems, numerous questions wait for answers of an explicit type. For example, the question \"Which president succeeded Jacques Chirac?\" requires an instance of president as answer. The method we present in this article aims at verifying that an answer given by a system corresponds to the given type. This verification is done by combining criteria provided by different methods dedicated to verify the appropriateness between an answer and a type. The first types of criteria are statistical and compute the presence rate of both the answer and the type in documents, other criteria rely on named entity recognizers and the last criteria are based on the use of Wikipedia.", "venue": "RIAO", "year": 2010.0, "author_names": ["Arnaud Grappy", "Brigitte Grau"], "n_citations": 18, "n_key_citations": 0, "score": 0}]} -{"query": "Requisitos de usabilidade", "session_id": 5510757629673159, "user_id": 2066135832849352, "candidates": [{"corpus_id": 212841593, "title": "A criacao de uma checklist de requisitos de usabilidade em paralelo a Lei de Acesso a Informacao do Brasil como ferramenta de analise de portais de transparencia", "abstract": "O alinhamento de novas ferramentas de tecnologia digital a gestao e uso da informacao publica, somado ao contexto de regulamentacao da Lei de Acesso a Informacao Lei 12.527, promoveram a criacao dos portais de transparencia e seus e SICs (Servicos de Informacao ao Cidadao) nos varios setores da administracao publica brasileira. Essas plataformas digitais tem como proposta elevar a transparencia publica, possibilitando aos cidadaos e cidadas fiscalizar os gastos publicos e assegurar o direito de acesso as informacoes. Nesse cenario, o presente trabalho teve como ponto de partida analisar a adequacao do Portal de Transparencia da Cidade de Bananeiras, no interior da Paraiba, aos parametros da Lei de Acesso a Informacao e aos requisitos de usabilidade e para isso elaborou uma checklist que pode servir de modelo de verificacao junto a outros portais de mesma categoria. 
Metodologicamente, utilizou se a abordagem qualitativa, aplicando os metodos de observacao direta a partir da lista de quesitos elaborados para a pesquisa, tomando como base tres fontes de analise: ISO/IEC9126 1(2003) LAI (2011) e e GOV (2010) Os resultados trazem a criacao de uma lista com 20 perguntas que podem nortear analises de portais de transparencia brasileiros e apontam, ainda, a necessidade de gestao documental na instituicao e dificuldades parciais causadas devido ao nao cumprimento dessas exigencias, algumas inclusive legais.", "venue": "", "year": 2019.0, "author_names": ["Henrique Elias Cabral Franca", "Maria das Gracas Freitas dos Santos"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 213656157, "title": "Requisitos de usabilidade para softwares aplicados ao e learning: uma proposta para elaboracao de User Stories", "abstract": "Software development for e learning has become increasingly complex, just as User Stories (US) are artifacts that have been widely used in the conception phase of the software. This paper presents an approach to support developers in the writing of US that addresses usability requirements in the elearning domain. An experimental study was conducted with 19 participants that evaluated positively the use of the approach. The results also revealed that the usability guidelines for e learning are an important tool for specifying usability aspects into the US. Resumo. O desenvolvimento de software para e learning vem se tornando cada vez mais complexo, assim como User Stories (US) sao artefatos que tem sido artefatos amplamente utilizados na fase de concepcao do software. Este artigo apresenta uma abordagem para apoiar os desenvolvedores na escrita de US que contemplem requisitos de usabilidade no dominio de e learning. Foi realizado um estudo experimental com 19 participantes que avaliaram positivamente o uso da abordagem. Os resultados revelaram que as diretrizes de usabilidade para e learning auxiliaram os participantes na especificacao de aspectos de usabilidade.", "venue": "Anais do XXX Simposio Brasileiro de Informatica na Educacao (SBIE 2019)", "year": 2019.0, "author_names": ["Larissa Albano Lopes", "Eduardo Gouveia Pinheiro", "Tiago Rodrigues da Silva", "Luciana A M Zaina"], "n_citations": 1, "n_key_citations": 1, "score": 1}, {"corpus_id": 198504187, "title": "Analise da Percepcao de Importancia de Requisitos de Usabilidade no Desenvolvimento de um Sistema Web com Scrum", "abstract": "As metodologiasageis sao utilizadas nos projetos de desenvolvimento de softwares, por seu perfil dinamico e eficiente, visando atender a satisfacao dos clientes. Porem, em muitos casos, essas metodologias sao utilizadas isoladamente, sem considerar aspectos importantes na relacao usuariosistema. 
Dessa forma, o artigo descreve um caso de estudo sobre a importancia percebida da usabilidade em um time Scrum de uma empresa de softwares para o varejo e, como significante resultado deste trabalho, vale ressaltar que o time em questao nao leva em consideracao o estudo do usuario como pratica essencial para atender requisitos de usabilidade do sistema.", "venue": "", "year": 2018.0, "author_names": ["Crissia de Santana Marcelino", "Francisco Nascimento"], "n_citations": 1, "n_key_citations": 1, "score": 0}, {"corpus_id": 180471801, "title": "PRINCIPIOS E REQUISITOS DE USABILIDADE NA CONCEPCAO DE UMA FERRAMENTA DE SUPORTE A GESTAO DE DESIGN", "abstract": "", "venue": "", "year": 2015.0, "author_names": ["Carina da Silva", "Eugenio Andres Diaz Merino", "Giselle Merino", "Luiz Fernando Goncalves de Figueiredo"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 85532908, "title": "Requisitos de Qualidade de Usabilidade: Analise da Utilizacao em Sistemas de uma Instituicao Financeira", "abstract": "", "venue": "WER", "year": 2018.0, "author_names": ["Angelica Toffano Seidel Calazans", "Eloisa Toffano Seidel Masson", "Roberto Avila Paldes", "Fernando de A Guimaraes", "Kiane Mabel Rezende", "Ricardo Ajax Kosloski"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 34697163, "title": "MERUSA: architecture oriented safety and usability requirements specification methodology (MERUSA: metodologia de especificacao de requisitos de usabilidade e seguranca orientada para arquitetura)", "abstract": "", "venue": "", "year": 2005.0, "author_names": ["Valter Fernandes Avelino"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 187885861, "title": "App ENEM: Uma proposta de Aplicativo Movel com Base em Heuristicas de Usabilidade", "abstract": "Interfaces eficientes e atrativas sao essenciais para o sucesso de qualquer software. Em aplicacoes educacionais, tais requisitos sao ainda mais relevantes. Considerando a crescente utilizacao de aplicativos educativos e os diversos estilos de aprendizagem, e essencial assegurar que essas ferramentas contemplem requisitos de usabilidade, pois, a qualidade da interface e fundamental para a satisfacao de seus usuarios. Logo, este trabalho apresenta uma proposta de prototipo de um aplicativo educativo para dispositivos moveis inspirado em guias e recomendacoes de usabilidade.", "venue": "", "year": 2018.0, "author_names": ["Raiza Portilho Nunes", "Isadora Mendes dos Santos"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 209056122, "title": "Metodos, Tecnicas e Ferramentas de Processos de Usabilidade Alinhado com as Diretrizes de Acessibilidade: Uma Revisao Sistematica da Literatura", "abstract": "Nos ultimos anos, houve um aumento significativo no interesse cientifico em processos de usabilidade e acessibilidade na web. Nao obstante, ainda ha uma parcela significativa de usuarios que enfrentam barreiras durante as interacoes na internet, especificamente, usuarios cegos. Dessa forma, processos de requisitos de usabilidade alinhadas as Diretrizes de Acessibilidade do Conteudo na Web tornam se primordiais. Assim, esta revisao sistematica da literatura incidiu sobre as publicacoes dos ultimos seis anos, objetivando a identifica cao dos principais metodos, tecnicas e ferramentas aplicadas nos processos de alinhamento dos requisitos de usabilidade e acessibilidade. Por meio das analises, foram identificados 486 artigos cientificos, os quais enderecavam processos de usabilidade e acessibilidade. 
Aplicando se os criterios de inclusao e exclusao, foram selecionados 86 artigos. Os resultados demonstram a escassez de trabalhos que verificam a eficiencia das ferramentas e das principais tecnicas que sao empregadas em processos de acessibilidade e usabilidade.", "venue": "SBSI", "year": 2017.0, "author_names": ["Gabriel de Jesus Rodrigues", "Tiago do Carmo Nogueira", "Deller J Ferreira"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 116103755, "title": "DESIGN CENTRADO NO USUARIO: REQUISITOS PARA AVALIACAO DE PRODUTOS DURANTE O DESENVOLVIMENTO DE PROJETOS COM BASE NA USABILIDADE E DESIGN UNIVERSAL", "abstract": "", "venue": "", "year": 2017.0, "author_names": ["Lucas Jose Garcia", "Giselle Merino", "Eugenio Andres Diaz Merino"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 171780190, "title": "Proposta de um Conjunto de Heuristicas para Avaliacao da Usabilidade de Aplicativos Moveis Educacionais", "abstract": "A adocao de aplicativos moveis no contexto educacional vem crescendo e abre espaco para o mobile learning Diante dessa crescente utilizacao e dos diferentes perfis de usuarios, e relevante garantir que esses aplicativos contemplem requisitos de usabilidade, uma vez que eles possuem objetivos de aprendizado. Nesse contexto, esse trabalho tem como objetivo apresentar um conjunto de heuristicas especificas para avaliar a usabilidade dos aplicativos moveis educacionais. Por meio de avaliacoes com especialistas da area e usuarios, foi possivel evidenciar a eficiencia e a eficacia das heuristicas propostas que foram consideradas relevantes para apreciacao da usabilidade de aplicativos do dominio educacional.", "venue": "", "year": 2017.0, "author_names": ["Deborah D'Carlo", "Glivia Angelica Rodrigues Barbosa", "Erica R de Oliveira"], "n_citations": 2, "n_key_citations": 0, "score": 0}]} -{"query": "Health Monitoring of Tree-Trunks Using Ground Penetrating Radar", "session_id": 5056126166209354, "user_id": 5488960220333684, "candidates": [{"corpus_id": 197480832, "title": "Health Monitoring of Tree Trunks Using Ground Penetrating Radar", "abstract": "Ground penetrating radar (GPR) is traditionally applied to smooth surfaces in which the assumption of half space is an adequate approximation that does not deviate much from reality. Nonetheless, using GPR for internal structure characterization of tree trunks requires measurements on an irregularly shaped closed curve. A typical hyperbola fitting has no physical meaning in this new context since the reflection patterns are strongly associated with the shape of the tree trunk. Instead of a clinical hyperbola, the reflections give rise to complex shaped patterns that are difficult to be analyzed even in the absence of clutter. In this paper, a novel processing scheme is described which can interpret complex reflection patterns assuming a circular target subject to any arbitrary shaped surface. The proposed methodology can be applied using commercial hand held antennas in real time, avoiding computationally costly tomographic approaches that require the usage of custom made bespoke antenna arrays. 
The validity of the current approach is illustrated both with numerical and real experiments.", "venue": "IEEE Transactions on Geoscience and Remote Sensing", "year": 2019.0, "author_names": ["Iraklis Giannakis", "Fabio Tosti", "Livia Lantini", "Amir Morteza Alani"], "n_citations": 15, "n_key_citations": 1, "score": 1}, {"corpus_id": 226836530, "title": "A Tomographic Inversion Approach for the Detection of Decay and Cavities in Tree Trunks using Ground Penetrating Radar", "abstract": "Summary A variety of tree species, such as ash and oak trees, are nowadays under serious threat in the United Kingdom and European territories as a result of the action of aggressive fungal diseases. To this effect, Ground Penetrating Radar (GPR) is an effective geophysical tool capable of collecting information on the internal structure of trees. Nevertheless, traditional processing methods can provide only limited indications for health monitoring purposes. In this study, a demonstration of the GPR potential and the use of a tomographic inversion approach in detecting decay and cavities is provided. In that context, a set of finite difference time domain (FDTD) simulations of different complexity (i.e. internal trunk configurations and dimensions of the targets) were used to assess the performance of the proposed strategy. The results prove the viability of the proposed approach in identifying the position of cavities and decay in tree trunks.", "venue": "", "year": 2019.0, "author_names": ["Amir Morteza Alani", "Francesco Soldovieri", "Gianluca Gennarelli", "Iraklis Giannakis", "Ilaria Catapano", "Livia Lantini", "Giovanni Ludeno", "Fabio Tosti"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 219077131, "title": "Reverse time migration for evaluating the internal structure of tree trunks using ground penetrating radar", "abstract": "Abstract Modern socioeconomic factors such as global timber trade and international travelling have contributed to the rapid increase of Emerging Infectious Diseases (EIDs) of trees with devastating effects to the European forests and woodlands. To that extent, numerous non destructive methodologies have been suggested as diagnostic tools in order to effectively monitor and maintain potential outbreaks. Ground penetrating radar (GPR) is an appealing method for tree monitoring as it provides with a trivially deployable and efficient detection tool, suitable for large scale forestry applications. Nonetheless, traditional GPR approaches are tuned for surface measurements and they are not compatible with the unique measurement configurations associated with forestry applications. Within that context, we present a novel processing framework, which is capable of addressing features with irregular measurements on closed surfaces. A positioning method is described that exploits a wheel measuring device in order to accurately associate each A Scan with its corresponding coordinates. In addition, a processing pipeline is presented that aims at eliminating the ringing noise due to the layered nature of the trees. Lastly, a reverse time migration is applied to the processed B Scan in order to effectively map the reflectors present within the trunk. 
The suggested scheme is successfully tested in both numerical and real field experiments, indicating the validity of the current approach.", "venue": "", "year": 2020.0, "author_names": ["Amir Morteza Alani", "Iraklis Giannakis", "Lilong Zou", "Livia Lantini", "Fabio Tosti"], "n_citations": 2, "n_key_citations": 0, "score": 0}, {"corpus_id": 195485012, "title": "Evaluating the internal structure of tree trunks using ground penetrating radar", "abstract": "Evaluating and assessing the internal structure of tree trunks is of great importance both for industrial as well as environmental purposes [1] Non destructive geophysical techniques with minimum intrusion can assist on tree monitoring by providing fast and cheap tools for assessing the internal structure of tree trunks. In the current work we evaluate the capabilities of ground penetrating radar (GPR) on locating tree decays in different stages and in different tree types. GPR has been widely applied to smooth surfaces that can be sufficiently approximated as half spaces. In that context, interpretation approaches like hyperbola detection make the assumption that the targets of interest are buried inside a dielectric half space. Nonetheless, the shape of the tree trunks is rather stochastic and the only valid and safe assumption that can be made is that the shape of the tree is a closed curve with arbitrary shape. Due to that, the reflection patterns arising from decays inside the tree deviate from the traditional hyperbolic features that often occur in typical GPR surveys. Under these conditions and without the usage of tomographic approaches a reliable interpretation is difficult to be made. Tomographic approaches are time consuming with high computational demands that often applied to bespoke custom made antenna systems that the end user has no access to. Our work tries to overcome these issues by suggesting a universal \"hyperbola\" fitting scheme that can be applied in any arbitrary given shape. Prior to the \"hyperbola\" fitting, a singular value decomposition (SVD) [2] is applied in an effort to decrease the ringing noise and the unwanted clutter. The validity of our method is tested through numerical and lab experiments. The minimum computational requirements of the proposed method combined with the fact that can be coupled with any commercial antenna, makes our approach commercially appealing for large scale applications. 
Results presented in this abstract are part of a major research project that the authors have undertaken for the last two years.", "venue": "", "year": 2019.0, "author_names": ["Iraklis Giannakis", "Fabio Tosti", "Livia Lantini", "Amir Morteza Alani"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 235082075, "title": "Assessment of health status of tree trunks using ground penetrating radar tomography", "abstract": "", "venue": "", "year": 2021.0, "author_names": ["Maria Sudakova", "Evgeniya Terentyeva", "Alexey Yu Kalashnikov"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 135275872, "title": "Imaging changes in moisture within living tree trunks using ground penetrating radar", "abstract": "", "venue": "", "year": 2018.0, "author_names": ["Adam R Mangel", "John Hamilton Bradford", "Kamini Singha"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 226134265, "title": "Tree Monitoring Using Ground Penetrating Radar: Two Case Studies Using Reverse Time Migration", "abstract": "Non destructive testing (NDT) for health monitoring of trees is a suitable candidate for detecting signs of early decay [1] Recent developments [2,3,4] have highlighted that ground penetrating radar (GPR) has the potential to provide with a robust and accurate detection tool with minimum computational and operational requirements in the field. In particular, a processing framework is suggested in [2] that can effectively remove ringing noise and unwanted clutter. Subsequently, an arc length parameterisation is employed in order to utilise a wheel measurement device to accurately position the measured traces. Lastly, two migration schemes; Kirchhoff and reversetime migration, are successfully applied on numerical and laboratory data in [3] In the current paper, the detection scheme described in [2,3] using reverse time migration is tested in two case studies that involve diseased urban trees within the greater London area, UK (Kensington and Gunnersbury park) Both of the trees were cut down after the completion of the measurements and furthermore cut into several slices to get direct information with regards to their internal structure. The processing scheme described in [3,4] managed to adequately detect the internal decay present in both trees. The aforementioned case studies provide coherent evidences to support the premise that GPR is capable of detecting decay in diseased trunks and therefore has the potential to become an accurate and efficient diagnostic tool against emerging infectious diseases of trees.", "venue": "", "year": 2020.0, "author_names": ["Iraklis Giannakis", "Fabio Tosti", "Lilong Zou", "Livia Lantini", "Amir Morteza Alani"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 215960158, "title": "\"Test site operations for the health monitoring of railway ballast using Ground Penetrating Radar\"", "abstract": "Abstract Effective maintenance of railway infrastructures requires a comprehensive knowledge of the actual condition of the involved construction materials. In this regard, Ground Penetrating Radar (GPR) stands as a viable alternative to the invasive and time consuming traditional techniques for railway inspections. This work reports the experimental activities carried out on a test site area within a railway depot in Rome, Italy. Specifically, a 30 m long railway stretch was divided into 10 sub stretches reproducing different various physical and structural conditions of the track bed. 
In particular, combinations of varying scenarios of fragmentation and fouling of the ballast were reproduced. The set up was then investigated using different multi frequency GPR horn antenna systems. These were towed along the rail sections by means of a dedicated railway cart. Main electromagnetic parameters of railway ballast were estimated for each scenario using time and frequency domain signal processing techniques. Interpretation of results has shown viability of the GPR method in detecting signs of decay at the network scale, thereby proving this technique to be worthy for implementation in asset management systems.", "venue": "", "year": 2020.0, "author_names": ["Luca Bianchini Ciampoli", "Alessandro Calvi", "Emanuele Oliva"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 134001262, "title": "Health monitoring of a matured tree using ground penetrating radar investigation of the tree root system and soil interaction (Extended Abstract)", "abstract": "In this study, a demonstration of the ground penetrating radar (GPR) potential in the health monitoring of a matured tree has been given. The main objectives of the research were to provide an effective mapping of the tree roots as well as reliable simulation scenarios representing a variety of possible internal defects in terms of shape and formation. To these purposes, the soil around a 70 year old fir tree, with a trunk circumference of 3.40 m and an average radius of 0.55 m, was investigated. A ground coupled multi frequency GPR system equipped with 600 MHz and 1600 MHz central frequency antennas was used for testing purposes. In addition to the above objective, finite difference time domain (FDTD) simulations of the electromagnetic field propagation through the cross section of a trunk (consistent with the investigated fir tree) were carried out. A variety of defects representing cavities created due to decay were also simulated. The results from the simulations demonstrated significant potential for the interpretation of complex decay phenomena within the trunk.", "venue": "", "year": 2017.0, "author_names": ["Amir Morteza Alani", "Luca Bianchini Ciampoli", "Fabio Tosti", "Maria Giulia Brancadoro", "Daniele Pirrone", "Andrea Benedetto"], "n_citations": 5, "n_key_citations": 1, "score": 0}, {"corpus_id": 113793057, "title": "Health monitoring of an ancient tree using ground penetrating radar investigation of the tree root system and soil interaction", "abstract": "The sensibility towards environmental issues along with the attention on preserving natural heritage, especially ancient trees and rare plants, has greatly increased, and the management and the control of the forestall heritage and the floral system has become accordingly a high priority objective to achieve. One of the main factors of tree decay which originally gained public attention is the presence of unknown pathogens carried along by the wind, which can lead to epidemic phenomena and often to a quick death of entire forests. In such an emergency situation, two main approaches can be followed, namely, i) active measures (i.e. the avoidance of any contact between the pathogenic spores and the trees by using bio security measures) and ii) passive measures (i.e. 
the application of policies for the control and the management of the forestall heritage aimed at identifying the early stage symptoms of the disease) Since the latest approach is based on the monitoring of living trees, invasive methods of health assessment like cutting off branches or incremental coring are increasingly discouraged, and non destructive evaluation proves to be the only option to undertake. The applications of non destructive testing (NDT) techniques in forestry sciences are often self standing and not integrated with one another. This is often due to a lack of knowledge from the NDT users towards the physics and the bio chemical processes which mainly govern the life cycle of trees and plants. Such an issue is emphasized by the evident complexity of the plant and trunk systems themselves. Notwithstanding this, the ground penetrating radar (GPR) technique has proved to be one of the most effective, due to its high versatility, rapidity in collecting data and the provision of reliable results at relatively limited costs. The use of GPR can provide invaluable information about the effective tree trunk assessment and appraisals, tree roots mapping, soil interaction with tree and plants. In addition, the use of simulation can be a supporting tool for the development of a clear understanding of the decay processes in trees. In this study, a demonstration of the GPR potential in the health monitoring of an ancient tree has been given. The main objectives of the research were to provide an effective mapping of the tree roots as well as reliable simulation scenarios representing a variety of possible internal defects in terms of shape and formation. To these purposes, the soil around a 100 years old fir tree, with a trunk circumference of 3.40 m and an average radius of 0.55 m, was investigated. Nine radial scans, 0.30 m spaced each to one another, were carried out all around the tree circumference starting from 0.50 m the outer surface of the bark. A ground coupled multi frequency GPR system equipped with 600 MHz and 1600 MHz central frequency antennas was used for testing purposes. In order to reach the maximum penetration depth of the root system, only the 600 MHz frequency was considered for data processing purposes. After the application of a dedicated signal processing scheme, it was possible to produce a tomographic map of amplitudes covering a swept circle with an outer radius of 3.45 m and an inner radius of 1.05 m up to a maximum depth of 1.56 m. By using a set of specially developed algorithms it was possible to extract signal amplitude information reliably related to the position of the tree roots under the soil. In addition to the above objective, finite difference time domain (FDTD) simulations of the electromagnetic field propagation through the cross section of a trunk were carried out. To this purpose, the numerical simulator package gprMax 2D was used. The freeware tool E2GPR aided the design of the gprMax models and their distributed execution on multicore machines. The dimensions and the dielectric properties of the simulated trunk were consistent with the investigated fir tree (actual data collected) Furthermore, a variety of defects representing cavities created due to decay was simulated. 
The results from the simulations demonstrated significant potential for the interpretation of complex decay phenomena within the trunk as well as for mapping and comparison of the actual field data.", "venue": "", "year": 2017.0, "author_names": ["Amir Morteza Alani", "Luca Bianchini Ciampoli", "Fabio Tosti", "Maria Giulia Brancadoro", "Daniele Pirrone", "Andrea Benedetto"], "n_citations": 1, "n_key_citations": 0, "score": 0}]} -{"query": "Van der Waals Forces", "session_id": 7206598661467881, "user_id": 3909225994109420, "candidates": [{"corpus_id": 120922760, "title": "The Influence of Retardation on the London van der Waals Forces", "abstract": "The influence of retardation on the energy of interaction between two neutral atoms is investigated by means of quantum electrodynamics. As a preliminary step, Part I contains a discussion of the interaction between a neutral atom and a perfectly conducting plane, and it is found that the influence of retardation leads to a reduction of the interaction energy by a correction factor which decreases monotonically with increasing distance $R$. This factor is equal to unity for $R$ small compared with the wave lengths corresponding to the atomic frequencies, and is proportional to $R^{-1}$ for distances large compared with these wave lengths. In the latter case the total interaction energy is given by $-\\frac{3\\hbar c\\alpha}{8\\pi R^{4}}$ where $\\alpha$ is the static polarizability of the atom. Although the problem of the interaction of two atoms discussed in Part II is much more difficult to handle mathematically, the results are very similar. Again the influence of retardation can be described by a monotonically decreasing correction factor which is equal to unity for small distances and proportional to $R^{-1}$ for large distances. In the latter case the energy of interaction is found to be $-\\frac{23\\hbar c\\alpha_{1}\\alpha_{2}}{4\\pi R^{7}}$", "venue": "", "year": 1948.0, "author_names": ["Hendrik B G Casimir", "D Polder"], "n_citations": 1742, "n_key_citations": 32, "score": 1}, {"corpus_id": 3915290, "title": "van der Waals forces in density functional theory: a review of the vdW DF method.", "abstract": "A density functional theory (DFT) that accounts for van der Waals (vdW) interactions in condensed matter, materials physics, chemistry, and biology is reviewed. The insights that led to the construction of the Rutgers Chalmers van der Waals density functional (vdW DF) are presented with the aim of giving a historical perspective, while also emphasizing more recent efforts which have sought to improve its accuracy. In addition to technical details, we discuss a range of recent applications that illustrate the necessity of including dispersion interactions in DFT. This review highlights the value of the vdW DF method as a general purpose method, not only for dispersion bound systems, but also in densely packed systems where these types of interactions are traditionally thought to be negligible.", "venue": "Reports on progress in physics. 
Physical Society", "year": 2015.0, "author_names": ["Kristian Berland", "Valentino R Cooper", "Kyuho Lee", "Elsebeth Schroder", "Timo Thonhauser", "Per Hyldgaard", "Bengt I Lundqvist"], "n_citations": 409, "n_key_citations": 4, "score": 0}, {"corpus_id": 205448688, "title": "The role of van der Waals forces in the performance of molecular diodes.", "abstract": "One of the main goals of organic and molecular electronics is to relate the performance and electronic function of devices to the chemical structure and intermolecular interactions of the organic component inside them, which can take the form of an organic thin film, a self assembled monolayer or a single molecule. This goal is difficult to achieve because organic and molecular electronic devices are complex physical organic systems that consist of at least two electrodes, an organic component and two (different) organic/inorganic interfaces. Singling out the contribution of each of these components remains challenging. So far, strong p p interactions have mainly been considered for the rational design and optimization of the performances of organic electronic devices, and weaker intermolecular interactions have largely been ignored. Here, we show experimentally that subtle changes in the intermolecular van der Waals interactions in the active component of a molecular diode dramatically impact the performance of the device. In particular, we observe an odd even effect as the number of alkyl units is varied in a ferrocene alkanethiolate self assembled monolayer. As a result of a more favourable van der Waals interaction, junctions made from an odd number of alkyl units have a lower packing energy (by ~0.4 0.6 kcal mol( 1) rectify currents 10 times more efficiently, give a 10% higher yield in working devices, and can be made two to three times more reproducibly than junctions made from an even number of alkyl units.", "venue": "Nature nanotechnology", "year": 2013.0, "author_names": ["Nisachol Nerngchamnong", "Li Yuan", "Dongchen Qi", "Jiang Li", "Damien Thompson", "Christian A Nijhuis"], "n_citations": 213, "n_key_citations": 0, "score": 0}, {"corpus_id": 22061141, "title": "Insight into the description of van der Waals forces for benzene adsorption on transition metal (111) surfaces.", "abstract": "Exploring the role of van der Waals (vdW) forces on the adsorption of molecules on extended metal surfaces has become possible in recent years thanks to exciting developments in density functional theory (DFT) Among these newly developed vdW inclusive methods, interatomic vdW approaches that account for the nonlocal screening within the bulk [V. G. Ruiz, W. Liu, E. Zojer, M. Scheffler, and A. Tkatchenko, Phys. Rev. Lett. 108, 146103 (2012) and improved nonlocal functionals [J. Klimes, D. R. Bowler, and A. Michaelides, J. Phys. Condens. Matter 22, 022201 (2010) have emerged as promising candidates to account efficiently and accurately for the lack of long range vdW forces in most popular DFT exchange correlation functionals. Here we have used these two approaches to compute benzene adsorption on a range of close packed (111) surfaces upon which it either physisorbs (Cu, Ag, and Au) or chemisorbs (Rh, Pd, Ir, and Pt) We have thoroughly compared the performance between the two classes of vdW inclusive methods and when available compared the results obtained with experimental data. 
By examining the computed adsorption energies, equilibrium distances, and binding curves we conclude that both methods allow for an accurate treatment of adsorption at equilibrium adsorbate substrate distances. To this end, explicit inclusion of electrodynamic screening in the interatomic vdW scheme and optimized exchange functionals in the case of nonlocal vdW density functionals is mandatory. Nevertheless, some discrepancies are found between these two classes of methods at large adsorbate substrate separations.", "venue": "The Journal of chemical physics", "year": 2014.0, "author_names": ["Javier Carrasco", "Wei Liu", "Angelos Michaelides", "Alexandre Tkatchenko"], "n_citations": 143, "n_key_citations": 2, "score": 0}, {"corpus_id": 2550740, "title": "Microwaves Probe Dipole Blockade and van der Waals Forces in a Cold Rydberg Gas.", "abstract": "We show that microwave spectroscopy of a dense Rydberg gas trapped on a superconducting atom chip in the dipole blockade regime reveals directly the dipole dipole many body interaction energy spectrum. We use this method to investigate the expansion of the Rydberg cloud under the effect of repulsive van der Waals forces and the breakdown of the frozen gas approximation. This study opens a promising route for quantum simulation of many body systems and quantum information transport in chains of strongly interacting Rydberg atoms.", "venue": "Physical review letters", "year": 2015.0, "author_names": ["R Celistrino Teixeira", "Carla Hermann-Avigliano", "T L Hoai Nguyen", "Tigrane Cantat-Moltrecht", "Jean-Michel Raimond", "Serge Haroche", "S'ebastien Gleyzes", "Michel Brune"], "n_citations": 34, "n_key_citations": 0, "score": 0}, {"corpus_id": 121087605, "title": "The general theory of van der Waals forces", "abstract": "Publisher Summary The van der Waals forces refer to the attractive forces acting between any two neutral atoms or molecules that are separated by distance large compared to their own dimensions. These forces are of a long range nature, which decrease with distance according to a power law. The basic idea of the theory is that the interaction between the bodies is considered to take place through a fluctuating electromagnetic field. This field is always present in the interior of a material medium and it also extends beyond its boundaries because of the thermodynamic fluctuations. Any change in the electrical proper ties of the medium in a certain region will, by Maxwell's equations, lead to a change in the fluctuation field that extends beyond that region. Therefore, the part of the free energy that is related to electromagnetic fluctuations is not determined by the properties of the substance solely at the point considered.", "venue": "", "year": 1961.0, "author_names": ["I E Dzyaloshinskii", "Evgenii Mikhailovich Lifshitz", "Lev Petrovich Pitaevskii"], "n_citations": 1119, "n_key_citations": 14, "score": 0}, {"corpus_id": 19590419, "title": "Protein adsorption into mesopores: a combination of electrostatic interaction, counterion release, and van der Waals forces.", "abstract": "Bovine heart cytochrome c has been immobilized into the mesoporous silica host material SBA 15 in both its native folded and urea unfolded state. The comparison of the two folding states' behavior casts doubt on the commonly used explanation of cytochrome c adsorption, that is, the electrostatic interaction model. 
A detailed investigation of the protein binding as a function of pH and ionic strength of the buffer solution reveals the complex nature of the protein silica interaction. Electrostatic interaction, van der Waals forces, and entropic contributions by counterion release each contribute to adsorption on the silica pore walls.", "venue": "Langmuir the ACS journal of surfaces and colloids", "year": 2014.0, "author_names": ["Sebastian T Moerz", "Patrick Huber"], "n_citations": 45, "n_key_citations": 2, "score": 0}, {"corpus_id": 5194276, "title": "The role of van der Waals forces in water adsorption on metals.", "abstract": "The interaction of water molecules with metal surfaces is typically weak and as a result van der Waals (vdW) forces can be expected to be of importance. Here we account for the systematic poor treatment of vdW forces in most popular density functional theory exchange correlation functionals by applying accurate non local vdW density functionals. We have computed the adsorption of a variety of exemplar systems including water monomer adsorption on Al(111) Cu(111) Cu(110) Ru(0001) Rh(111) Pd(111) Ag(111) Pt(111) and unreconstructed Au(111) and small clusters (up to 6 waters) on Cu(110) We show that non local correlations contribute substantially to the water metal bond in all systems, whilst water water bonding is much less affected by non local correlations. Interestingly non local correlations contribute more to the adsorption of water on the reactive transition metal substrates than they do on the noble metals. The relative stability, adsorption sites, and adsorption geometries of competing water adstructures rarely differ when comparing results obtained with semi local functionals and the non local vdW density functionals, which explains the previous success of semi local functionals in characterizing adsorbed water structures on a number of metal surfaces.", "venue": "The Journal of chemical physics", "year": 2013.0, "author_names": ["Javier Carrasco", "Jiri Klimes", "Angelos Michaelides"], "n_citations": 142, "n_key_citations": 2, "score": 0}, {"corpus_id": 24352935, "title": "Cooperative interplay of van der Waals forces and quantum nuclear effects on adsorption: H at graphene and at coronene.", "abstract": "The energetic barriers that atoms and molecules often experience when binding to surfaces are incredibly important to a myriad of chemical and physical processes. However, these barriers are difficult to describe accurately with current computer simulation approaches. Two prominent contemporary challenges faced by simulation are the role of van der Waals forces and nuclear quantum effects. Here we examine the widely studied model systems of hydrogen on graphene and coronene using a van der Waals inclusive density functional theory approach together with path integral molecular dynamics at 50 K. We find that both van der Waals and quantum nuclear effects work together in a cooperative manner to dramatically reduce the barriers for hydrogen atoms to adsorb. 
This suggests that the low temperature hydrogenation of graphene is easier than previously thought and in more general terms that the combined roles of van der Waals and quantum tunnelling can lead to qualitative changes in adsorption.", "venue": "ACS nano", "year": 2014.0, "author_names": ["Erlend R M Davidson", "Jiri Klimes", "Dario Alfe", "Angelos Michaelides"], "n_citations": 36, "n_key_citations": 0, "score": 0}, {"corpus_id": 122058280, "title": "Buckling and stability analysis of a piezoelectric viscoelastic nanobeam subjected to van der Waals forces", "abstract": "Abstract A study on the buckling and dynamic stability of a piezoelectric viscoelastic nanobeam subjected to van der Waals forces is performed in this research. The static and dynamic governing equations of the nanobeam are established with Galerkin method and under Euler Bernoulli hypothesis. The buckling, post buckling and nonlinear dynamic stability character of the nanobeam is presented. The quasi elastic method, Leibnitz's rule, Runge Kutta method and the incremental harmonic balanced method are employed for obtaining the buckling voltage, post buckling characteristics and the boundaries of the principal instability region of the dynamic system. Effects of the electrostatic load, van der Waals force, creep quantity, inner damping, geometric nonlinearity and other factors on the post buckling and the principal region of instability are investigated.", "venue": "Commun. Nonlinear Sci. Numer. Simul.", "year": 2014.0, "author_names": ["Changping Chen", "Shoujian Li", "Liming Dai", "Changzhao Qian"], "n_citations": 27, "n_key_citations": 0, "score": 0}]} -{"query": "Thesis Ponseti method relapse rate", "session_id": 395303391317650, "user_id": 5202997023564631, "candidates": [{"corpus_id": 35312682, "title": "Role of age in management of clubfoot by ponseti method and relapse rate", "abstract": "Background: With all the stirring advances in modern medicine, it is somewhat sobering to assess the fund of knowledge concerning the treatment of clubfoot. Evolution of treatment started with manipulation, strapping etc. with not much enthusiastic results. Surgical intervention came into scene; with not much success and lasting morbidity. Over the past decade, Ponseti management has become accepted throughout the world, as the most effective and least expensive treatment of clubfoot.Does the age at beginning of treatment has influence,in Ponseti method and rate of relapse is uncertain. Aims and objectives: (1) Role of age at beginning of treatment. (2)Relapse rate. Materials and method: 58 patients were enlisted for study with 96 idiopathic club feet treated by Ponseti method at Al Ameen Medical College Hospital and its ancillary branches between 2006 2012; with minimum follow up of 30 months. Two groups were made, group I with age 6 months of age and group II with age >6 months. Results: Average number of casts necessary to achieve correction in group I was 5.28 casts (range 4 to 8 casts) while in group II was 7.31 (range 6 11 casts) Percutaneous tenotomy was needed in 85.42% of feet. Relapse rate was 7.14% (5 feet) in group I while 15.3% (4 feet) in group II. 
Conclusion: Effectiveness of Ponseti technique in achieving the correction of deformity and functional as outcome increases with early age of initiation of treatment while relapse rate increases with increase in age.", "venue": "", "year": 2014.0, "author_names": ["Nayeem Ali", "Renuka M Patil"], "n_citations": 4, "n_key_citations": 0, "score": 1}, {"corpus_id": 173993093, "title": "Congenital talipes equinovarus: a systematic review of relapse as a primary outcome of the Ponseti method.", "abstract": "AIMS The Ponseti method is the benchmark treatment for the correction of clubfoot. The primary rate of correction is very high, but outcome further down the treatment pathway is less predictable. Several methods of assessing severity at presentation have been reported. Classification later in the course of treatment is more challenging. This systematic review considers the outcome of the Ponseti method in terms of relapse and determines how clubfoot is assessed at presentation, correction, and relapse. PATIENTS AND METHODS A prospectively registered systematic review was carried out according to Preferred Reporting Items for Systematic Reviews and Meta Analyses (PRISMA) guidelines. Studies that reported idiopathic clubfoot treated by the Ponseti method between 1 January 2012 and 31 May 2017 were included. The data extracted included demographics, Ponseti methodology, assessment methods, and rates of relapse and surgery. RESULTS A total of 84 studies were included (7335 patients, 10 535 clubfeet) The relapse rate varied between 1.9% and 45% The rates of relapse and major surgery (1.4% to 53.3% and minor surgery (0.6% to 48.8% both increased with follow up time. There was high variability in the assessment methods used across timepoints; only 57% of the studies defined relapse. Pirani scoring was the method most often used. CONCLUSION Recurrence and further surgical intervention in idiopathic clubfoot increases with the duration of follow up. The corrected and the relapsed foot are poorly defined, which contributes to variability in outcome. The results suggest that a consensus for a definition of relapse is needed. Cite this article: Bone Joint J 2019;101 B:639 645.", "venue": "The bone joint journal", "year": 2019.0, "author_names": ["Yael Gelfer", "Shlomo Wientroub", "K Hughes", "Andreas Fontalis", "Deborah M Eastwood"], "n_citations": 16, "n_key_citations": 0, "score": 0}, {"corpus_id": 162170709, "title": "Relapse Rates in Patients with Clubfoot Treated Using the Ponseti Method Increase with Time: A Systematic Review.", "abstract": "BACKGROUND The Ponseti method is the preferred technique to manage idiopathic clubfoot deformity; however, there is no consensus on the expected relapse rate or the percentage of patients who will ultimately require a corrective surgical procedure. The objective of the present systematic review was to determine how reported rates of relapsed deformity and rates of a secondary surgical procedure are influenced by each study's length of follow up. METHODS A comprehensive literature search using the Preferred Reporting Items for Systematic Reviews and Meta Analyses (PRISMA) guidelines was performed to identify relevant articles. The definition of relapse, the percentage of patients who relapsed, the percentage of feet that required a surgical procedure, and the mean duration of follow up of each study were extracted. 
Pearson correlations were performed to determine associations among the following variables: mean follow up duration, percentage of patients who relapsed, percentage of feet that required a joint sparing surgical procedure, and percentage of feet that required a joint invasive surgical procedure. Logarithmic curve fit regressions were used to model the relapse rate, the rate of joint sparing surgical procedures, and the rate of joint invasive surgical procedures as a function of follow up time. RESULTS Forty six studies met the inclusion criteria. Four distinct definitions of relapse were identified. The reported relapse rates varied from 3.7% to 67.3% of patients. The mean duration of follow up was strongly correlated with the relapse rate (Pearson correlation coefficient 0.44; p 0.01) and the percentage of feet that required a joint sparing surgical procedure (Pearson correlation coefficient 0.59; p 0.01) Studies with longer follow up showed significantly larger percentages of relapse and joint sparing surgical procedures than studies with shorter follow up (p 0.05) CONCLUSIONS Relapses have been reported to occur at as late as 10 years of age; however, very few studies follow patients for at least 8 years. Notwithstanding that, the results indicated that the rate of relapse and percentage of feet requiring a joint sparing surgical procedure increased as the duration of follow up increased. Longer term follow up studies are required to accurately predict the ultimate risk of relapsed deformity. Patients and their parents should be aware of the possibility of relapse during middle and late childhood, and, thus, follow up of these patients until skeletal maturity may be warranted. LEVEL OF EVIDENCE Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.", "venue": "JBJS reviews", "year": 2019.0, "author_names": ["Hannah M Thomas", "Sophia N Sangiorgio", "Edward Ebramzadeh", "Lewis E Zionts"], "n_citations": 14, "n_key_citations": 0, "score": 0}, {"corpus_id": 202740085, "title": "Relapse of Clubfoot after Treatment with the Ponseti Method", "abstract": "Background: During the last decade, Ponseti clubfoot treatment has become more effective and popular because of its high initial correction rate. But the problem affecting the long term successful outcome is relapse of the deformity. The major problem is Non compliance with Ponseti brace protocol associated with relapse. Although more comfortable braces have been reported to develop the compliance, they all have the same design and no significant changes have been made to the protocols. After refinement in the Ponseti method and emphasizing the significance of brace to parents, the relapse rate has been noticeably decreased. Objective: To evaluate relapse of Clubfoot after treatment with the Ponseti Method Methodology: A Cross sectional study including patients' information during treatment period was done at outpatient basis in Prime Medical College, Rangpur, Bangladesh and the sample was 200 patients under Ponseti clubfoot treatment over a period of three years from 1 st October 2014 to 30 th September 2017. The 200 patients with idiopathic legs with clubfoot 240 treated initially with the Ponseti technique who had relapse of their clubfoot were identified. Relapse was defined as a return to casting or surgery due to recurrent deformity. Data collected included demographics, treatment and brace adherence. 
Patients who sustained initial relapse before the age of two years were compared with those who sustained initial relapse after the age of two years. Results: After initial relapse prior to age two, bracing adherence does not affect likelihood of subsequent recurrence. Among the 200 patients with 240 legs with clubfoot after treatment with the Ponseti Method only 15 legs were relapse. Therefore from the study findings we can say that though there is some complications in Ponseti method but the treatment outcome is better than other methods. Conclusion: Patients with idiopathic clubfoot who experienced recurrence prior to age two years are significantly more likely to be non adherent with bracing than those who sustain recurrence after age two.", "venue": "", "year": 2018.0, "author_names": ["Dr Md Shariful Haque", "Dr Mohammad Mushfiqur Rahman", "Dr Khaleda Perveen", "Dr Mahmuda Sharmin"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 203624554, "title": "The Ponseti Method Decreased the Surgical Incidence in Children with Congenital Clubfoot: A Population Based, 8 Birth Year Cohort Study.", "abstract": "BACKGROUND With the introduction of the Ponseti method for congenital clubfoot, the relapse rate and the surgical rate have been remarkably reduced. However, data from population studies for patients up to 10 years of age are still lacking. This study aimed to survey the relapse and surgery rates in the first 10 years of life in children with congenital clubfoot before and after introduction of the Ponseti method in Taiwan using the National Health Insurance Research Database (NHIRD) METHODS We retrieved clubfoot cases and related surgical procedures determined by International Classification of Diseases, Ninth Revision (ICD 9) 754.51 from the 1999 2016 NHIRD. Foot and ankle surgical procedures coded as ICD 9 754.51 for patients who were older than 6 months of age were regarded as surgical procedures for relapsed or residual deformities. The rate of clubfoot release when the patients were 0.5 to 1 year of age and extensive surgical procedures in the first 10 years of life were assessed among 8 birth year cohorts (1999 to 2006) with a 10 year follow up. RESULTS Among 622 children with idiopathic congenital clubfoot diagnosis, 301 underwent a total of 367 surgical procedures for clubfoot between 6 months and 10 years of age. Disease incidence of 0.32 per 1,000 live births remained stable in the 8 birth year cohorts. After the Ponseti method was introduced in 2002, there was a decrease in the clubfoot release rate in the 0.5 to 1 year age group (25.8% in the 1999 to 2002 birth year cohorts compared with 17.6% in the 2003 to 2006 birth year cohorts) and the rate of extensive surgical procedures (41.5% in the 1999 to 2002 birth year cohorts compared with 31.3% in the 2003 to 2006 birth year cohorts) both determined to be significant at p 0.05 using the chi square test. A significant decreasing trend (p 0.05) was revealed in the rate of clubfoot release in patients who were 0.5 to 1 year of age by polynomial correlation, with an increasing negative slope after a turning point around 2002. The Ponseti method increased the ratio of minor to extensive surgical procedures when a surgical procedure was required. CONCLUSIONS The Ponseti method decreased subsequent extensive surgical procedures for clubfoot, especially in the group that was 0.5 to 1 year of age. LEVEL OF EVIDENCE Therapeutic Level III. 
See Instructions for Authors for a complete description of levels of evidence.", "venue": "The Journal of bone and joint surgery. American volume", "year": 2019.0, "author_names": ["Chia-Hsieh Chang", "Shu Mei Wang", "Ken N Kuo"], "n_citations": 3, "n_key_citations": 0, "score": 0}, {"corpus_id": 216029284, "title": "The Influence of Achilles Tenotomy and Compliance with Foot Abduction Orthosis on the Relapse Rate of Ponseti Treatment for Idiopathic Clubfoot: A Regional Study.", "abstract": "The Ponseti method for treating idiopathic clubfoot is based on gradual manipulations and corrective plaster castings followed by a years long period of use of a foot orthosis. The role of surgery is limited. The factors that may affect outcome and their influence are subject of controversy. The aim of the study is to systematically and objectively evaluate the results of Ponseti treatment in our region of Southern Israel and focus on the role of the Achilles tenotomy and compliance to foot orthosis as factors that may influence outcome. The use of Ponseti method was retrospectively studied (level of evidence IV) by searching computerized medical files and clinical photos. The severity of deformity was evaluated by Dimeglio score (D score) at baseline and at last examination. During 2006 2014, 57 children with idiopathic clubfoot (total 90 feet) were enrolled. An Achilles tenotomy was performed in 55/90 (61.1% of the feet. If the D score was 15 or higher there was a 20% increase in the incidence of Achilles tenotomy. The parental compliance had a weak protective effect against relapse. The treatment of idiopathic clubfoot by the Ponseti method was successful and reliable, proving efficiency and universality of the method. A dominant predictor for relapse was not seen. An incidental observation was that extended time in cast may buffer the adverse effects of low compliance rate. Although the initial severity, or compliance to braces are important, there may be other factors that affect the outcome such as, accuracy of the casting technique, time in the cast, access to a dedicated clubfoot clinic, cooperation with nurses and pediatricians, economic status that allows purchase of new generation of braces, cultural perception, and education level of the patient population are some examples.", "venue": "The Journal of foot and ankle surgery official publication of the American College of Foot and Ankle Surgeons", "year": 2020.0, "author_names": ["Eugen Cohen", "Tiberiu Katz", "Uri Rozen", "Tai Friesem", "Eugene Leibovitz"], "n_citations": 1, "n_key_citations": 0, "score": 0}, {"corpus_id": 23824751, "title": "Sixty Years On: Ponseti Method for Clubfoot Treatment Produces High Satisfaction Despite Inherent Tendency to Relapse", "abstract": "Background: Developed at the University of Iowa in 1950, the Ponseti method to manage idiopathic clubfoot deformity was slow to gain wide acceptance until the mid 1990s. There is a paucity of intermediate and long term outcome studies involving this technique, with nearly all such studies coming from a single institution. The purpose of this study is to report the contemporary outcome of patients with clubfoot deformity whose feet were managed with the Ponseti method and who were followed to =5 years old, to provide outcome expectations for parents and for clinicians managing patients with idiopathic clubfoot. 
Methods: Families of infants seen in our clinic diagnosed with idiopathic clubfoot since July 2006 were prospectively invited to participate in our institutional review board approved study. Patients who received no prior outside treatment and had a minimum follow up to the age of 5 years were included. Demographic, treatment, and outcome data were collected. To provide an array of outcome measures, both the Dallas outcome criteria and the Roye disease specific instrument (DSI) were used. Results: One hundred and one patients met the inclusion criteria. The mean length of follow up (and standard deviation) was 81.1 17.1 months. Initial correction was achieved in all feet. Thirty seven percent of families reported that they were adherent with the bracing protocol; 68% of patients had =1 relapse, and 38% underwent a tendon transfer. With the Dallas criteria, 62% had outcomes rated as good, 38% had outcomes rated as fair, and no patient had an outcome rated as poor. With the Roye DSI, most families were generally very satisfied with the function and appearance of the feet. Conclusions: Satisfactory results at intermediate follow up were achieved using the Ponseti method. However, despite a better understanding of the Ponseti method and the importance of longer post corrective brace use, the need for anterior tibial tendon transfer remains an important adjunct to the Ponseti method. Brace adherence also continues to be a critical clinical issue. Level of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.", "venue": "The Journal of bone and joint surgery. American volume", "year": 2018.0, "author_names": ["Lewis E Zionts", "Edward Ebramzadeh", "Rebecca Morgan", "Sophia N Sangiorgio"], "n_citations": 19, "n_key_citations": 1, "score": 0}, {"corpus_id": 57375974, "title": "Relapse following use of Ponseti method in idiopathic clubfoot", "abstract": "Abstract Purpose We assessed the pattern of relapse as well as the correlation between the number of casts required for correction and Pirani and Dimeglio scores at presentation, and age at presentation. We hypothesized that the Ponseti method would be effective in treatment of relapsed clubfoot as well. Methods We evaluated 115 idiopathic clubfeet in 79 children presenting with relapse following treatment by the Ponseti method. The mean age was 33.8 months with mean follow up of 24 months. All patients were assessed for various patterns of relapsed deformities. Quantification of deformities was done using the Pirani and Dimeglio scores. All relapsed feet were treated by a repeat Ponseti protocol. Results Non compliance to a foot abduction brace was observed to be the main contributing factor in relapse, in 99 clubfeet (86% Combination of three static deformities (equinus, varus and adduction) together was observed most commonly (38.3% feet) Overall, relapse of equinus deformity was noted most commonly followed by adduction. A painless plantigrade foot was obtained in all 115 feet with a mean of five casts. In all, 71 feet (61.7% underwent percutaneous tenotomy. A total of 15 feet (13% required tibialis anterior tendon transfer. Re relapse rate in group 1 was 21% compared with 12.6% in group 2 and overall 16.5% Conclusion We conclude that the Ponseti method is effective and the preferred initial treatment modality for relapsed clubfeet. Surgical intervention should be reserved for residual deformity only after a fair trial of Ponseti cast treatment. 
Regular follow up and strict adherence to brace protocol may reduce future relapse rates. Further research is required to identify high risk feet and develop individualized bracing protocol. Level of evidence: IV", "venue": "Journal of children's orthopaedics", "year": 2018.0, "author_names": ["Sameeksha Chand", "Anil Mehtani", "Alok Sud", "Jatin Prakash", "A Sinha", "Akhil Agnihotri"], "n_citations": 15, "n_key_citations": 0, "score": 0}, {"corpus_id": 26123610, "title": "Prognosticating Factors of Relapse in Clubfoot Management by Ponseti Method", "abstract": "Background: It is challenging that some Ponseti method corrected clubfeet have a tendency to relapse. Controversies remain as to the implication of initial severity, representing the deformity degree, as well as number of casts needed, representing the treatment process, in predicting relapse. However, no study has been reported to take these 2 parameters into comprehensive consideration for outcome measurement. The purpose of this study is to investigate the correlation between the initial Pirani score and the number of casts required to correct the deformity in our series; to evaluate noncompliance as a risk factor of the deformity recurrence in Ponseti treatment; to test the validity and predictive value of a new proposed parameter, ratio of correction improvement (RCI) which is indicated by the initial Pirani scores divided by the number of casts. Methods: A total of 116 consecutive patients with 172 idiopathic clubfeet managed by Ponseti method were followed prospectively for a minimum of 2 years from the start of brace wearing. RCI value and the other clinical parameters were studied in relation to the risk of relapse by using multivariate logistic regression analysis modeling. Results: A positive correlation between the initial Pirani score and the number of casts required to correct the deformity was found in our series (r=0.67, P<0.01) There were 45 patients (39% with brace noncompliance. The relapse rate was 49% (22/45) The odds ratio of relapse in noncompliant patients was 10 times more that in compliant patients (odds ratio=10.30 and 95% confidence interval, 2.69 39.42; P<0.01) The multivariate logistic regression analysis showed that there was significant association between relapse and RCI value. There were 42 patients (36% with RCI value <1, among them, the relapse rate was 57% in 24 patients. The odds ratio of relapse in patients with RCI value <1 was 27 times more likely to relapse than those >1 (odds ratio=26.77 and 95% confidence interval, 5.70 125.72; P<0.01) Conclusions: On the basis of the findings from our study, we propose the RCI to be a new parameter in predicting the risk of relapse in Ponseti method of clubfoot management. Early intervention is recommended to optimize the brace compliance particularly in case with lower RCI value. Level of Evidence: Level II prognostic.", "venue": "Journal of pediatric orthopedics", "year": 2018.0, "author_names": ["Dahang Zhao", "Hai Li", "Li Zhao", "Ken N Kuo", "Xuan Yang", "Zhen-kai Wu", "Jianlin Liu", "Jie-Ping Zhu"], "n_citations": 13, "n_key_citations": 0, "score": 0}, {"corpus_id": 81100914, "title": "Ponseti's clubfoot treatment a method in need of correction?", "abstract": "A clubfoot is a common congenital deformity of the foot. Worldwide the Ponseti method is the accepted treatment method for clubfoot. 
In this method, the treating physician manipulates the foot into a slightly better position and fixates this position with a plaster cast that stays on for a week. After five or six weekly cast changes the position of the foot is corrected. An abduction brace is worn for several years to prevent relapse. The research in this thesis describes how aspects of the Ponseti method are quantified and gives pointers for improvements. Both a literature study and a study with force sensors on the foot suggest that the weekly cast change interval is unnecessarily long. A surprising observation was a temperature drop due to water evaporating from the cast, creating an uncomfortably long cold period for the children. Concepts for a dynamic clubfoot brace were developed as an alternative treatment method. Such a brace would apply a constant force on the foot rather than a constant position, making the correction process more efficient and possibly more comfortable. Parents would be able to temporarily remove the brace to allow bathing of their child, and the soft materials of the brace make cuddling a pleasant experience again.", "venue": "", "year": 2018.0, "author_names": ["Robert Bram Giesberts"], "n_citations": 0, "n_key_citations": 0, "score": 0}]} -{"query": "Tracking from one side: multi-person passive tracking with WiFi magnitude measurements", "session_id": 2607924729454677, "user_id": 794556925718894, "candidates": [{"corpus_id": 102349409, "title": "Tracking from One Side Multi Person Passive Tracking with WiFi Magnitude Measurements", "abstract": "In this paper, we are interested in passively tracking multiple people walking in an area, using only the magnitude of WiFi signals from one WiFi transmitter and a small number of receivers (configured as an array) located on one side of the area. Past works on RF based tracking either track only a single moving person, use a large number of transceivers surrounding the area to track multiple people, or use additional resources like ultra wideband signals. Furthermore, magnitude based tracking provides an attractive feature that additional receiver antennas can easily be added to the antenna array as needed, without the need for phase synchronization, since the magnitude can be measured independently on the different antennas. In this paper, we then propose a new framework that uses only the magnitude of WiFi signals and expresses it in terms of the angles of arrival of signal paths at the receivers as well as the motion parameters of the virtual arrays emulated by the moving people. We then use a two dimensional MUltiple SIgnal Classification (MUSIC) algorithm to estimate the aforementioned parameters, and further utilize a Particle Filter with a Joint Probabilistic Data Association Filter to track multiple people walking in the area. 
We extensively validate our proposed framework in both indoor and outdoor areas, through 40 experiments of tracking 1 to 3 people, using only one transmit antenna and three laptops as receivers (a total of four off the shelf Intel 5300 WiFi Network Interface Cards (NICs) Our results show highly accurate tracking (mean error of 38 cm in outdoor areas/closed parking lots, and 55 cm in indoor areas) using minimal WiFi resources on only one side of the area.", "venue": "2019 18th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN)", "year": 2019.0, "author_names": ["Chitra R Karanam", "Belal Korany", "Yasamin Mostofi"], "n_citations": 17, "n_key_citations": 3, "score": 1}, {"corpus_id": 19038276, "title": "Pedestrian tracking with shoe mounted inertial sensors", "abstract": "A navigation system that tracks the location of a person on foot is useful for finding and rescuing firefighters or other emergency first responders, or for location aware computing, personal navigation assistance, mobile 3D audio, and mixed or augmented reality applications. One of the main obstacles to the real world deployment of location sensitive wearable computing, including mixed reality (MR) is that current position tracking technologies require an instrumented, marked, or premapped environment. At InterSense, we've developed a system called NavShoe, which uses a new approach to position tracking based on inertial sensing. Our wireless inertial sensor is small enough to easily tuck into the shoelaces, and sufficiently low power to run all day on a small battery. Although it can't be used alone for precise registration of close range objects, in outdoor applications augmenting distant objects, a user would barely notice the NavShoe's meter level error combined with any error in the head's assumed location relative to the foot. NavShoe can greatly reduce the database search space for computer vision, making it much simpler and more robust. The NavShoe device provides not only robust approximate position, but also an extremely accurate orientation tracker on the foot.", "venue": "IEEE Computer Graphics and Applications", "year": 2005.0, "author_names": ["Eric Foxlin"], "n_citations": 1195, "n_key_citations": 117, "score": 0}, {"corpus_id": 8714546, "title": "Tracking Groups of People", "abstract": "A computer vision system for tracking multiple people in relatively unconstrained environments is described. Tracking is performed at three levels of abstraction: regions, people, and groups. A novel, adaptive background subtraction method that combines color and gradient information is used to cope with shadows and unreliable color cues. People are tracked through mutual occlusions as they form groups and separate from one another. Strong use is made of color information to disambiguate occlusion and to provide qualitative estimates of depth ordering and position during occlusion. Simple interactions with objects can also be detected. The system is tested using both indoor and outdoor sequences. It is robust and should provide a useful mechanism for bootstrapping and reinitialization of tracking using more specific but less robust human models.", "venue": "Comput. Vis. 
Image Underst.", "year": 2000.0, "author_names": ["Stephen J McKenna", "Sumer Jabri", "Zoran Duric", "Azriel Rosenfeld", "Harry Wechsler"], "n_citations": 775, "n_key_citations": 39, "score": 0}, {"corpus_id": 206986664, "title": "Parallel Tracking and Mapping for Small AR Workspaces", "abstract": "This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual core computer: one thread deals with the task of robustly tracking erratic hand held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame rate, with an accuracy and robustness rivalling that of state of the art model based systems.", "venue": "2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality", "year": 2007.0, "author_names": ["Georg S W Klein", "David William Murray"], "n_citations": 3632, "n_key_citations": 456, "score": 0}, {"corpus_id": 1336659, "title": "DTAM: Dense tracking and mapping in real time", "abstract": "DTAM is a system for real time camera tracking and reconstruction which relies not on feature extraction but dense, every pixel methods. As a single hand held RGB camera flies over a static scene, we estimate detailed textured depth maps at selected keyframes to produce a surface patchwork with millions of vertices. We use the hundreds of images available in a video stream to improve the quality of a simple photometric data term, and minimise a global spatially regularised energy functional in a novel non convex optimisation framework. Interleaved, we track the camera's 6DOF motion precisely by frame rate whole image alignment against the entire dense model. Our algorithms are highly parallelisable throughout and DTAM achieves real time performance using current commodity GPU hardware. We demonstrate that a dense model permits superior tracking performance under rapid motion compared to a state of the art method using features; and also show the additional usefulness of the dense model for real time scene interaction in a physics enhanced augmented reality application.", "venue": "2011 International Conference on Computer Vision", "year": 2011.0, "author_names": ["Richard A Newcombe", "S Lovegrove", "Andrew J Davison"], "n_citations": 1596, "n_key_citations": 161, "score": 0}, {"corpus_id": 122794686, "title": "Survey of maneuvering target tracking. Part I. Dynamic models", "abstract": "This is the first part of a comprehensive and up to date survey of the techniques for tracking maneuvering targets without addressing the so called measurement origin uncertainty. It surveys various mathematical models of target motion/dynamics proposed for maneuvering target tracking, including 2D and 3D maneuver models as well as coordinate uncoupled generic models for target motion. This survey emphasizes the underlying ideas and assumptions of the models. Interrelationships among models and insight to the pros and cons of models are provided. 
Some material presented here has not appeared elsewhere.", "venue": "", "year": 2003.0, "author_names": ["X Rong Li", "Vesselin P Jilkov"], "n_citations": 1225, "n_key_citations": 71, "score": 0}, {"corpus_id": 433048, "title": "TaintDroid: An Information Flow Tracking System for Realtime Privacy Monitoring on Smartphones", "abstract": "Today's smartphone operating systems frequently fail to provide users with visibility into how third party applications collect and share their private data. We address these shortcomings with TaintDroid, an efficient, system wide dynamic taint tracking and analysis system capable of simultaneously tracking multiple sources of sensitive data. TaintDroid enables realtime analysis by leveraging Android's virtualized execution environment. TaintDroid incurs only 32% performance overhead on a CPU bound microbenchmark and imposes negligible overhead on interactive third party applications. Using TaintDroid to monitor the behavior of 30 popular third party Android applications, in our 2010 study we found 20 applications potentially misused users' private information; so did a similar fraction of the tested applications in our 2012 study. Monitoring the flow of privacy sensitive data with TaintDroid provides valuable input for smartphone users and security service firms seeking to identify misbehaving applications.", "venue": "OSDI", "year": 2010.0, "author_names": ["William Enck", "Peter Gilbert", "Byung-Gon Chun", "Landon P Cox", "Jaeyeon Jung", "Patrick Mcdaniel", "Anmol Sheth"], "n_citations": 2976, "n_key_citations": 300, "score": 0}, {"corpus_id": 7965352, "title": "RADAR: an in building RF based user location and tracking system", "abstract": "The proliferation of mobile computing devices and local area wireless networks has fostered a growing interest in location aware systems and services. In this paper we present RADAR, a radio frequency (RF) based system for locating and tracking users inside buildings. RADAR operates by recording and processing signal strength information at multiple base stations positioned to provide overlapping coverage in the area of interest. It combines empirical measurements with signal propagation modeling to determine user location and thereby enable location aware services and applications. We present experimental results that demonstrate the ability of RADAR to estimate user location with a high degree of accuracy.", "venue": "Proceedings IEEE INFOCOM 2000. Conference on Computer Communications. Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies (Cat. No.00CH37064)", "year": 2000.0, "author_names": ["Paramvir Bahl", "Venkata N Padmanabhan"], "n_citations": 8390, "n_key_citations": 935, "score": 0}, {"corpus_id": 8192877, "title": "Marker tracking and HMD calibration for a video based augmented reality conferencing system", "abstract": "We describe an augmented reality conferencing system which uses the overlay of virtual images on the real world. Remote collaborators are represented on virtual monitors which can be freely positioned about a user in space. Users can collaboratively view and interact with virtual objects using a shared virtual whiteboard. This is possible through precise virtual image registration using fast and accurate computer vision techniques and head mounted display (HMD) calibration. 
We propose a method for tracking fiducial markers and a calibration method for optical see through HMD based on the marker tracking.", "venue": "Proceedings 2nd IEEE and ACM International Workshop on Augmented Reality (IWAR'99)", "year": 1999.0, "author_names": ["Hirokazu Kato", "Mark Billinghurst"], "n_citations": 2420, "n_key_citations": 184, "score": 0}, {"corpus_id": 122803681, "title": "A tutorial on particle filters for online nonlinear/non Gaussian Bayesian tracking", "abstract": "Increasingly, for many application areas, it is becoming important to include elements of nonlinearity and non Gaussianity in order to model accurately the underlying dynamics of a physical system. Moreover, it is typically crucial to process data on line as it arrives, both from the point of view of storage costs as well as for rapid adaptation to changing signal characteristics. In this paper, we review both optimal and suboptimal Bayesian algorithms for nonlinear/non Gaussian tracking problems, with a focus on particle filters. Particle filters are sequential Monte Carlo methods based on point mass (or \"particle\" representations of probability densities, which can be applied to any state space model and which generalize the traditional Kalman filtering methods. Several variants of the particle filter such as SIR, ASIR, and RPF are introduced within a generic framework of the sequential importance sampling (SIS) algorithm. These are discussed and compared with the standard EKF through an illustrative example.", "venue": "IEEE Trans. Signal Process.", "year": 2002.0, "author_names": ["M Sanjeev Arulampalam", "Simon Maskell", "Neil J Gordon", "Tim Clapp"], "n_citations": 10952, "n_key_citations": 1029, "score": 0}]} -{"query": "Pengaruh Reciprocal teaching terhadap kemampuan berpikir kreatif", "session_id": 2760948414280099, "user_id": 2216320506367268, "candidates": [{"corpus_id": 127298249, "title": "Pengaruh Metode Reciprocal Teaching terhadap Kemampuan Berpikir Kreatif Siswa SMA Kelas X di SMA Kae Woha Tahun Pelajaran 2017/2018", "abstract": "Penelitian ini bertujuan untuk mengetahui pengaruh Reciprocal Teaching terhadap kemampuan berpikir kreatif siswa kelas X di SMA KAE Woha tahun pelajaran 2017/2018. Jenis penelitian ini adalah quasi eksperimen (Eksperimen Semu) Adapun sampel dalam penelitian ini adalah siswa kelas X SMA KAE Woha yang terdiri dari dua kelas yaitu kelas X MIA/IPA I sebagai kelas eksperimen dan kelas X MIA/IPA 2 sebagai kelas kontrol. Instrumen penelitian yang digunakan untuk mengukur kemampuan berpikir kreatif adalah soal esay. Kriteria pengujiannya adalah H 0 ditolak jika t hitung t tabel atau nilai signifikansi lebih kecil 0,05 pada taraf sinikansi a =5% Hasil Uji hipotesis menggunakan bantuan program SPSS 16 for window menunjukkan nilai t 4,128 atau nilai signifikansi sebesar 0,000. Jika dikaitkan dengan nilai signifikansi 0,05 dan nilai t hitung maka H a diterima. Sehingga dapat disimpulkan bahwa terdapat pengaruh penggunaan metode reciprocal teaching terhadap kemampuan berpikir kreatif siswa.", "venue": "", "year": 2018.0, "author_names": ["Nurhayati Sarib", "Mariamah Mariamah", "Muslim Muslim", "Fatmah Fatmah"], "n_citations": 0, "n_key_citations": 0, "score": 1}, {"corpus_id": 213043581, "title": "PENGARUH MODEL PEMBELAJARAN RECIPROCAL TEACHING (RT) DAN KEMAMPUAN BERPIKIR KREATIF TERHADAP PEMAHAMAN SEJARAH PESERTA DIDIK KELAS XI IPS DI SMA MUHAMMADIYAH 1 TAMAN SIDOARJO", "abstract": "RINGKASAN Saputri, Reni. 2019. 
Pengaruh Model Pembelajaran Reciprocal Teaching (RT) dan Kemampuan Berpikir Kreatif Terhadap Pemahaman Sejarah Peserta Didik Kelas XI IPS SMA Muhammadiyah 1 Taman Sidoarjo. Tesis. Pascasarjana Pendidikan Sejarah Universitas Negeri Malang. Pembimbing: (1) Dr. Joko Sayono M.Pd.M.Hum. (2) Dr. Hj. Endang Sri handayani, S.E. M.Si.Ak Kata Kunci Reciprocal Teaching, Kemampuan Berpikir Kreatif, Pemahaman Sejarah. Model pembelajaran reciprocal teaching (pengajaran timbal balik) merupakan pembelajaran yang dirancang melalui empat strategi pembelajaran yaitu menyusun pertanyaan (questioning) memprediksi (prediction) mengklarifikasi atau menjelaskan (clarifying) dan merangkum (summarizing) Model pembelajaran reciprocal teaching sejalan dengan teori konstruktivis yaitu peserta didik mencari pengetahuan sendiri sehingga bisa menjalankan langkah langkah model pembelajaran reciprocal teaching. Kemampuan berpikir kreatif merupakan kemampuan berpikir kelancaran (fluency) kemampuan berpikir fleksibel (fleksibilitas) kemampuan berpikir orisinal (originality) dan kemampuan berpikir elaborasi (elaboration. Pemahaman sejarah Pemahaman sejarah adalah pemahaman yang bisa menguasai dan memahami peristiwa sejarah yang terdiri dari fakta sejarah, konsep sejarah, narasi sejarah. Penelitian ini tujuan untuk (1) mendeskripsikan adanya pengaruh model pembelajaran reciprocal teaching terhadap pemahaman sejarah peserta didik; (2) mendeskripsikan adanya pengaruh kemampuan berpikir kreatif terhadap pemahaman sejarah peserta didik; (3) mendeskripsikan adanya pengaruh interaksi model pembelajaran reciprocal teaching dan kemampuan berpikir kreatif terhadap pemahaman sejarah peserta didik. Penelitian ini merupakan eksperimen semu (quasi eksperimen) menggunakan rancangan factorial pretest dan post test non equvalent control group desing. Subjek peneliti adalah peserta didik kelas XI IPS SMA Muhammadiyah 1 Taman Sidoarjo, tahun ajaran 2018/2019. Jumlah subjek penelitian adalah 55 peserta didik, 27 peserta didik kelas eksperimen dan 28 peserta didik kelas kontrol. Pengumpulan data berdasarkan tes berupa pilihan ganda dan essay untuk mengukur pemahaman sejarah dan kemampuan berpikir kreatif. Teknik analisis data menggunakan Anova Two Way Hasil penelitian menunjukkan: (1) ada pengaruh model pembelajaran reciprocal teaching terhadap pemahaman sejarah peserta didik dengan nilai signifikansi 0,000; (2) ada pengaruh kemampuan berpikir kreatif terhadap pemahaman sejarah peserta didik dengan nilai signifikansi 0,000; dan (3) ada interaksi model pembelajaran reciprocal teaching dan kemampuan berpikir kreatif terhadap pemahaman sejarah peserta didik dengan nilai signifikansi 0,037.", "venue": "", "year": 2019.0, "author_names": ["saputri reni"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 149430851, "title": "PENGARUH PEMBELAJARAN MATEMATIKA MENGGUNAKAN MODEL PEMBELAJARAN RECIPROCAL TEACHING TERHADAP KEMAMPUAN BERPIKIR KREATIF MATEMATIK SISWA SMK", "abstract": "Dalam kegiatan belajar mengajar, pengajar harus memberikan kemudahan agar peserta didik mendapatkan pengalaman belajar sesuai dengan kebutuhan dan kemampuannya, sehingga dapat terwujud intera ksi yang lebih komunikatif Maka dari itu cara yang cocok untuk mengaktifkan siswa adalah dengan menggunakan model Reciprocal Teaching Adapun latar belakang masalah yang menjadi titik tolak penelitian ini adalah mencari suasana baru dalam pembelajaran. 
Dalam penerapannya Reciprocal Teaching lebih mengutamakan partisipasi dan keaktifan siswa dalam proses pembelajaran, karena dalam sistem pengaja ran dengan pendekatan keterampilan proses siswa harus lebih aktif daripada guru. Karena guru hanya bertindak sebagai pembimbing dan fasilitator sehingga, siswa diberi kesempatan untuk ber p ikir lebih aktif dan kreatif. T ujuan penelitian ini adalah 1) u ntuk menge tahui kemampuan berpikir kreatif matematik siswa yang memperoleh pembelajaran dengan model Reciprocal Teaching lebih baik daripada dengan siswa yang memperoleh pembelajaran secara Problem Based Learning 2) u ntuk mengetahui sikap siswa terhadap pembel ajaran matematika dengan model Reciprocal Teaching 3) u ntuk mengetahui korelasi antara kemampuan berpikir kreatif matematik dan sikap siswa. Penelitian ini menggunakan metode eksperimen. Populasi penelitian ini adalah semua siswa SMK Puragabaya Bandung ta hun ajaran 201 6 /201 7 Dan s ampel diambil sebanyak dua kelas yang dipilih secara acak menurut kelas. Instrumen penelitian yang digunakan berupa tes tipe uraian soal soal kemampuan berpikir kreatif matemati k dan angket skala sikap. Analisis data dilakukan dengan menggunakan uji normalitas, uji homogenitas, dan uji t Berdasarkan analisis data hasil penelitian, diperoleh kesimpulan 1) k emampuan berpikir kreatif matematik siswa yang mendapatkan model pembelajaran Reciprocal Teaching lebih baik daripada siswa yang mendapatkan model pembelajaran Problem Based Learning 2) s iswa bersikap positif terhadap pembelajaran matematika dengan menggunakan model pembelajaran Reciprocal Teaching 3) t erdapat korelasi antara kemampuan berpikir kreatif matematik dengan sikap siswa terhadap pembelajaran matematika yang menggunakan model pembelajaran Reciprocal Teaching Kata K unci Reciprocal Teaching Problem Based Learning Berpikir Kreatif Matemati k", "venue": "", "year": 2016.0, "author_names": [""], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 202258956, "title": "Pengaruh pembelajaran matematika menggunakan model reciprocal teaching terhadap kemampuan berpikir kreatif matematika siswa yayasan mahasiswa islamiyah Medan TP. 2017/2018", "abstract": "", "venue": "", "year": 2017.0, "author_names": ["Indah Permatasari"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 148616077, "title": "PENGARUH MODEL PEMBELAJARAN RECIPROCAL TEACHING TERHADAP KEMAMPUAN BERPIKIR KREATIF PADA MATERI LINGKARAN", "abstract": "ABSTRAK ABDUL RAHMAN SAMAITU. 2015, Skripsi: Pengaruh Modelpembelajaran Reciprocal Teaching Terhadap Kemampuan Berpikir Kreatif Pada Materi Lingkaran (Suatu Penelitian pada Siswa Kelas VIII di SMP N 1 Bolaang Uki) Penelitian ini bertujuan Untuk mengetahui perbedaan penggunaan model pembelajaran reciprocal teaching terhadap kemampuan berpikir kreatif matematik siswa dengan penggunaan model pembelajaran konvensional pada materi lingkaran. Metode penelitian yang digunakan adalah metode eksperimen. Populasi dalam penelitian adalah seluruh siswa kelas VIII yang ada di SMP Negeri 1 Bolaang Uki berjumlah 123 orang dan terdistribusi pada 5 kelas. Teknik pengambilan sampel menggunakan Simple random sampling, sampel penelitian yang terpilih adalah kelas VIIIc dengan jumlah siswa 24 orang dikenakan model pembelajaran Reciprocal Teaching dan kelas VIIId dengan jumlah siswa 24 orang dikenakan pembelajaran konvensional. Data penelitian dikumpulkan melalui instrumen tes kemampuan berfikir dan dianalisis secara deskriptif dan inferensial. 
Analisis deskriptif dilakukan melalui tabel distribusi frekuensi dengan mempersentasikan rata rata dan analisis inferensial dilakukan melalui uji t untuk menguji hipotesis penelitian. Hasil analisis data menunjukan bahwa hasil belajar siswa yang diajarkan dengan menggunakan model pembelajaran Reciprocal Teaching lebih tinggi dibandingkan dengan hasil belajar siswa yang menggunakan pembelajaran konvesional. Temuan ini memperlihatkan bahwa model pembelajaran Reciprocal Teaching lebih unggul dalam membelajarkan siswa pada materi lingkaran dibandingkan dengan pembelajaran konvensional. Karena itu disarankan kepada guru agar menggunakan model pembelajaran Reciprocal Teaching dalam membelajarkan siswa pada materi matematika yang memiliki karakteristik seperti materi lingkaran. Kata Kunci Kemampuan Berpikir Kreatif dan Model Pembelajaran Reciprocal Teaching.", "venue": "", "year": 2015.0, "author_names": ["Abdul Rahman Samaitu", "Kartin Usman", "Perry Zakaria"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 132454275, "title": "PENGARUH MODEL RECIPROCAL TEACHING TERHADAP KEMAMPUAN BERPIKIR KREATIF PADA MATERI PERBANDINGANRNSISWA KELAS VII SMP NEGERI 8 BANDA ACEH", "abstract": "Kata Kunci: Berpikir Kreatif, Model pembelajaran Reciprocal Teaching Penelitian yang berjudul \"Pengaruh Model Reciprocal Teaching Terhadap Kemampuan Berpikir Kreatif pada Materi Perbandingan Siswa Kelas VII Smp Negeri 8 Banda Aceh\" ini mengangkat masalah keberhasilan hasil belajar siswa setelah menerapkan model reciprocal teaching pada materi perbandingan di kelas VII SMP Negeri 8 Banda Aceh, dan kemampuan berpikir kreatif siswa berdasarkan jawaban pretes dan postes. Populasi dalam penelitian ini adalah siswa kelas VII SMP Negeri 8 Banda Aceh. Sedangkan sampel diambil satu kelas yaitu kelas VII3 dengan jumlah siswa 23 orang. Pengumpulan data dilakukan dengan tehnik tes dan observasi. Pengolahan data terhadap hasil tes diolah secara kuantitatif, menggunakan uji t pihak kanan. Untuk hasil kemampuan berpikir kreatif siswa menggunakan indikator berpikir kreatif dan perhitungan persentase. sedangkan hasil observasi diolah dengan menggunakan analisis deskriptif. Berdasarkan hasil analisis data ditemukan bahwa (1) hasil belajar siswa diperoleh", "venue": "", "year": 2015.0, "author_names": ["Nur Irsyadiyati"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 182900907, "title": "PENGARUH MODEL PEMBELAJARAN RECIPROCAL TEACHING BERBANTUAN MODUL PEMBELAJARAN RME (REALISTIC MATHEMATICS EDUCATION) TERHADAP KEMAMPUAN BERPIKIR KREATIF DAN PEMAHAMAN KONSEP MATEMATIS SISWA", "abstract": "This research aimed to describe: (1) the effect of Reciprocal Teaching learning model assisted by RME (Realistic Mathematics Education) learning modules, (2) the students' creative thinking ability in the Reciprocal Teaching learning model assisted by RME (Realistic Mathematics Education) learning modules, (3) the students' mathematical concept understanding in the Reciprocal Teaching learning model assisted by the RME (Realistic Mathematics Education) learning module. This research used a quasi experimental research with a quantitative approach. The population of this research the students of 8th grade in the academic year of 2017/2018 that consisted of 8 classes. The data were taken from the students of 8th grade in Junior High School of Negeri 24 Malang (SMP Negeri 24 Malang) The samples were taken randomly from the population of 2 classes, namely as an experimental class and control class. 
The sampling technique was done by Cluster Random Sampling. The data were taken from validation questionnaires and written tests in the form of descriptions to know the ability of creative thinking and understanding of mathematical concepts. The results showed that: (1) the validity of the RME (Realistic Mathematics Education) learning module obtained valid and very valid results from material expert validators 1 and 2, while it obtained valid results from media expert validators 1 and 2; (2) there was an influence of the Reciprocal Teaching learning model assisted by the RME (Realistic Mathematics Education) learning module on the students' creative thinking ability; (3) there was an influence of the Reciprocal Teaching learning model assisted by the RME (Realistic Mathematics Education) learning module on the students' mathematical concept understanding.", "venue": "", "year": 2018.0, "author_names": ["Dini Pratiwi Ningsih"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 149539127, "title": "PENGARUH PENDEKATAN RECIPROCAL TEACHING TERHADAP KEMAMPUAN BERPIKIR KREATIF SISWA PADA POKOK BAHASAN TRIGONOMETRI DI KELAS X SMA NEGERI 1 SIBOLGA T.P 2011/2012", "abstract": "Penelitian ini bertujuan untuk mengetahui apakah terdapat pengaruh yang signifikan antara pendekatan pembelajaran Reciprocal Teaching dan pembelajaran dengan menggunakan metode konvensional terhadap kemampuan berpikir kreatif siswa pada pokok bahasan trigonometri di kelas X SMA Negeri 1 Sibolga. Populasi dalam penelitian ini adalah seluruh siswa kelas X di SMA Negeri 1 Sibolga semester genap tahun ajaran 2011/2012 yang terdiri dari 8 kelas paralel dengan jumlah siswa sebanyak 350 orang. Sedangkan yang menjadi sampel dalam penelitian ini terdiri dari 2 kelas yaitu kelas X-2 sebanyak 40 orang sebagai kelas kontrol dan kelas X-3 sebanyak 40 orang sebagai kelas eksperimen yang ditentukan secara random dengan sistem undi. Kelas kontrol menggunakan pembelajaran metode konvensional dan kelas eksperimen menggunakan pendekatan pembelajaran Reciprocal Teaching. Jenis penelitian ini adalah eksperimen semu dengan memberikan perlakuan pada kelompok sampel penelitian kemudian diberikan pretes dan postes. Sebagai alat pengumpul data digunakan tes kemampuan berpikir kreatif dalam bentuk uraian pada materi pokok trigonometri sebanyak 10 soal. Sebelum pengujian hipotesis terlebih dahulu diuji normalitas tes dengan menggunakan teknik Liliefors dan homogenitas tes dengan menggunakan uji F. Dari pengujian yang dilakukan diperoleh bahwa kedua sampel berdistribusi normal dan homogen. Hipotesis dalam penelitian ini diuji dengan menggunakan analisis inferensial regresi anakova. Hasil penelitian diperoleh persamaan regresi untuk kelas kontrol yaitu Y = 46,32 + 1,0277X dan kelas eksperimen yaitu Y = 49,10 + 1,40X. Berdasarkan uji keberartian model regresi diperoleh kesimpulan bahwa model regresi kelas kontrol dan kelas eksperimen berarti. Karena syarat homogenitas dipenuhi, maka analisis kovarians dapat dilakukan. Berdasarkan perhitungan uji analisis kovarians diperoleh F hitung (4,64) > F tabel (3,966) pada taraf α = 0,05. Ini berarti terdapat pengaruh yang signifikan antara pendekatan pembelajaran Reciprocal Teaching dengan pembelajaran menggunakan metode konvensional terhadap kemampuan berpikir kreatif siswa. 
Besarnya pengaruh pendekatan pembelajaran Reciprocal Teaching dan pembelajaran dengan menggunakan metode konvensional terhadap kemampuan berpikir kreatif siswa berdasarkan indeks determinasi (r²) masing-masing sebesar 0,5484 atau 54,84% dan 0,1881 atau 18,81%. Sehingga besarnya perbedaan pengaruh dua model tersebut adalah 36,03%", "venue": "", "year": 2012.0, "author_names": ["Jonatan Pasaribu"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 145472275, "title": "PENGARUH PENDEKATAN RECIPROCAL TEACHING TERHADAP KEMAMPUAN BERPIKIR KREATIF SISWA PADA POKOK BAHASAN TRIGONOMETRI DI KELAS X SMA NEGERI 1 SIBOLGA T.P 2011/2012", "abstract": "", "venue": "", "year": 2012.0, "author_names": ["M Pd Drs M Panjaitan"], "n_citations": 0, "n_key_citations": 0, "score": 0}, {"corpus_id": 148720704, "title": "PENGARUH MODEL PEMBELAJARAN RECIPROCAL TEACHING TERHADAP KEMAMPUAN BERPIKIR KREATIF MATEMATIS SISWA SMA", "abstract": "Nasyar Mubaroq (2016) Pengaruh Model Pembelajaran Reciprocal Teaching terhadap Kemampuan Berpikir Kreatif Matematis Siswa SMA. Matematika adalah salah satu ilmu pengetahuan yang sangat berguna dalam segala aspek kehidupan. Kemampuan berpikir kreatif matematis sangat diperlukan siswa dalam memahami matematika. Namun kemampuan berpikir kreatif matematis siswa ternyata masih rendah. Hal tersebut disebabkan karena guru lebih sering memberikan siswa hafalan rumus yang rumit, dan juga guru jarang melatih kemampuan berpikir kreatif matematis siswa saat proses pembelajaran. Salah satu alternatif pembelajaran yang dapat meningkatkan kemampuan berpikir kreatif matematis adalah model pembelajaran Reciprocal Teaching. Penelitian ini bertujuan untuk (1) mengetahui apakah kemampuan berpikir kreatif matematis siswa yang memperoleh pembelajaran matematika dengan model pembelajaran Reciprocal Teaching lebih baik daripada siswa yang memperoleh model pembelajaran Problem Based Learning, (2) mengetahui apakah sikap siswa positif terhadap pembelajaran matematika dengan model pembelajaran Reciprocal Teaching. Metode penelitian ini adalah eksperimen. Populasi penelitian ini adalah kelas X SMA Negeri 18 Bandung tahun ajaran 2015/2016, dan sampelnya adalah dua kelas yang dipilih secara acak. Instrumen yang digunakan dalam penelitian berupa tes tipe uraian soal-soal kemampuan berpikir kreatif matematis dan skala sikap yang menggunakan model skala Likert yang berisikan pernyataan-pernyataan siswa mengenai kegiatan pembelajaran yang dilakukan. Dari analisis data hasil penelitian, diperoleh kesimpulan (1) kemampuan berpikir kreatif matematis siswa yang memperoleh pembelajaran matematika dengan model pembelajaran Reciprocal Teaching lebih baik daripada siswa yang memperoleh model pembelajaran Problem Based Learning, (2) siswa bersikap positif terhadap pembelajaran matematika dengan model pembelajaran Reciprocal Teaching. Sehingga model pembelajaran Reciprocal Teaching dapat dijadikan alternatif bagi guru dalam melaksanakan pembelajaran untuk menciptakan suasana belajar yang aktif, efektif dan menyenangkan. Kata Kunci: Kemampuan Berpikir Kreatif Matematis, Reciprocal Teaching, Sikap", "venue": "", "year": 2016.0, "author_names": ["Nasyar Mubaroq"], "n_citations": 0, "n_key_citations": 0, "score": 0}]}