PMC3647576
diabetes mellitus is a heterogeneous group of metabolic diseases that is characterised by hyperglycaemia . it can result in blindness , kidney and heart disease , stroke , loss of limbs , and reduced life expectancy if left untreated and is recognized as a primary threat to human health in the 21st century . type 2 diabetes mellitus is the most common form of diabetes and accounts for approximately 90% of cases ; its development is controlled by interactions between multiple genetic and environmental factors [ 2–4 ] . the genetics and pathophysiology of type 2 diabetes remain poorly understood because detailed investigations , such as genetic dissection , have been restricted in humans for practical and ethical reasons . animal models of type 2 diabetes mellitus have provided important information , and many rat strains with spontaneous diabetes have been reported [ 5–11 ] . among these strains , goto - kakizaki ( gk ) , otsuka long - evans tokushima fatty ( oletf ) , and spontaneously diabetic torii ( sdt ) rats have been widely used for elucidating the genes responsible for the development of diabetes , the physiological course of the disease , and the complications related to diabetes [ 12–14 ] . the different diabetes model strains exhibit different aspects of the disease , and thus additional animal models are needed to elucidate the complete pathogenesis of diabetes . we have developed a novel , nonobese rat strain with spontaneous diabetes , the long - evans agouti ( lea ) rat , which was established from a long - evans closed colony together with the long - evans cinnamon ( lec ) rat . lea rats do not exhibit any signs of obesity throughout their lives but experience late onset of the disease and exhibit histological changes ( macrophage infiltration and fibrosis ) of the pancreatic islets . thus , lea rats may serve as a new model of nonobese type 2 diabetes mellitus caused by impaired insulin secretion . in the present study , we examined the pathophysiological characteristics of the strain and demonstrated that the lea rat is a useful model of human nonobese diabetes . lea rats , also known as lea / sendai or sendai rats , were maintained at the institute for animal experimentation , tohoku university graduate school of medicine , japan . wistar rats were purchased from japan slc ( hamamatsu , japan ) and were used as controls . all rats were housed in air - conditioned animal rooms at an ambient temperature of 23 ± 3°c and relative humidity of 50 ± 10% , under specific pathogen - free conditions with a 12 h light / dark cycle . food , consisting of a labo mr standard diet ( nosan , yokohama , japan ) , and water were available ad libitum . all animal care procedures were approved by the animal care and use committee of tohoku university graduate school of medicine and complied with the procedures of the guide for the care and use of laboratory animals of tohoku university . urinary glucose and protein were monitored at 1-month intervals using uro - paper ag2 ( eiken , tokyo , japan ) . the detection limits of the uro - paper were 10–20 mg / dl for urine protein and 40–60 mg / dl for urine glucose . the body weight ( bw ) and body length ( bl ) , that is , the distance from the nose to the anus , of five male rats ( 6 months old ) were measured , and the body mass index ( bmi ) was calculated as bw / bl2 . for the oral glucose tolerance test ( ogtt ) , after a 16 h fast , a dose of 2 g / kg bw of glucose was given orally , and blood samples were collected from the tail vein at 0 , 30 , 60 , 90 , and 120 min after loading .
the blood glucose levels were measured with a glutest eii blood glucose monitoring meter with a monitoring range of 40–500 mg / dl ( sanwa kagaku , nagoya , japan ) . the rats were classified according to the three - grade system of diabetes mellitus ( dm ) , impaired glucose tolerance ( igt ) , and normal . dm was defined as a 120 min blood glucose level of ≥ 200 mg / dl . igt was defined as a 120 min blood glucose level between 140 and 199 mg / dl . the plasma insulin concentration was determined as immunoreactive insulin ( iri ) by elisa with a rat insulin assay kit ( morinaga milk industry , yokohama , japan ) using separated plasma from the blood collected from the tail vein at 0 , 30 , 60 , 90 , and 120 min after glucose loading . to study early - phase insulin secretion , an ogtt was performed on male rats ( 2 and 12 months of age ) as described previously , and blood glucose levels ( bg ) and insulin concentrations ( iri ) at 0 and 30 min after glucose loading were measured . the insulinogenic index was calculated as i.i . = Δiri / Δbg , where Δiri and Δbg are the differences between the respective values at 0 and 30 min . for the insulin tolerance test , the animals were intraperitoneally challenged with a dose of 0.75 u / kg bw of human insulin ( novolin r ; novo nordisk , denmark ) . blood samples were drawn from the tail vein at different time points , and glucose levels were determined as described previously . the tissues from rats were fixed overnight at 4°c in phosphate - buffered saline ( pbs ) that contained 4% paraformaldehyde . they were rinsed with pbs , dehydrated , embedded in paraffin , cut into 5-µm - thick sections , and stained with haematoxylin and eosin ( h&e ) . for insulin detection , the pancreatic sections were processed for immunostaining by an indirect method using guinea pig anti - insulin polyclonal antibody ( 1 : 100 , dako , carpinteria , ca ) as the primary antibody and peroxidase - labelled goat anti - guinea pig igg antibody ( 1 : 200 , chemicon international , temecula , ca ) as the secondary antibody . the specific reactions were visualised with a dab substrate kit ( vector , burlingame , ca ) . to distinguish the inflammatory cells infiltrating the islets , additional deparaffinised sections of pancreas were processed for immunostaining using mouse monoclonal antibodies against rat cd4 clone w3/25 ( mca 55r ; abd serotec , oxford , uk ) , cd8 clone ox-8 ( mca48r ; abd serotec ) , cd45ra clone ox-33 ( mca340 g , abd serotec ) , and macrophage antigen clone ed1 ( mca341 , abd serotec ) . specific reactions were visualised with a sab - po kit ( nichirei , tokyo , japan ) . the volume of β-cells relative to the pancreas volume was calculated as the proportion of the total area of β-cells to the total area of pancreatic tissue , according to the method of bouwens et al . three serial paraffin sections ( 5-µm - thick ) of pancreatic tissue from three animals of each strain were obtained at intervals of 100 µm . the sections were immunostained with guinea pig anti - insulin antibodies ( 1 : 100 ) and analysed under an olympus bx51 microscope ( olympus , tokyo , japan ) connected to a computer running the winroof software ( mitani corp . , tokyo , japan ) . the image analysis quantified the total pancreatic tissue area and the insulin - positive area , permitting the calculation of the ratio of islet β-cell area to total pancreatic area .
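the ogtt grading thresholds and the insulinogenic index described above are simple arithmetic ; the following python sketch restates them together with the relative β-cell volume measure from the image analysis . the function names and the example values are illustrative assumptions , not data or code from the study .

```python
def ogtt_grade(bg_120):
    """Three-grade classification used above (120 min blood glucose, mg/dl):
    DM >= 200, IGT 140-199, otherwise normal."""
    if bg_120 >= 200:
        return "DM"
    if bg_120 >= 140:
        return "IGT"
    return "normal"

def insulinogenic_index(bg_0, bg_30, iri_0, iri_30):
    """Early-phase insulin secretion: I.I. = delta-IRI / delta-BG,
    where the deltas are the 0 -> 30 min changes after glucose loading."""
    return (iri_30 - iri_0) / (bg_30 - bg_0)

def relative_beta_cell_volume(insulin_positive_areas, pancreas_areas):
    """Relative beta-cell volume (%) = total insulin-positive area divided
    by total pancreatic tissue area, summed over the analysed sections."""
    return 100.0 * sum(insulin_positive_areas) / sum(pancreas_areas)

# illustrative values only, not measurements from the study
print(ogtt_grade(bg_120=215))
print(insulinogenic_index(bg_0=95, bg_30=180, iri_0=0.4, iri_30=1.2))
print(relative_beta_cell_volume([0.8, 1.1, 0.9], [110.0, 140.0, 120.0]))
```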
to ensure that any change in the relative β-cell volume was not attributable to a change in the size of individual β-cells , the density of nuclei per insulin - positive area was also measured in 20 islets of the five rats of each strain . the incidence of diabetes in lea rats as determined by ogtt is shown in figure 1(a ) . diabetes mellitus was observed only in male rats , and its incidence increased with age : 10% , 61% , and 86% at 6 , 12 , and 14 months of age , respectively . the onset of diabetes was not observed in females , although 33% of the females showed only igt at 12 months of age . as the onset of diabetes differed according to sex , the urinary findings are described separately for males and females . in male rats , glucosuria appeared at 5 months of age , before the onset of diabetes , and was present in 100% of the males at 8 months of age . in female rats , glucosuria appeared at 7 months of age and was present in 100% of the females at 9 months ( table 1 ) . proteinuria appeared at 6 months of age , concomitant with the onset of diabetes , in male rats and was present in 57% of the males at 9 months of age , whereas proteinuria appeared in 20% of female rats at 9 months of age and did not exceed 30% of the females thereafter . the average bws of male and female lea rats increased gradually throughout the experimental period and were 506 ± 37.8 g ( n = 5 ) and 312 ± 27.7 g ( n = 5 ) at 12 months of age , respectively ( figure 1(b ) ) ; a significant decrease in bw was not observed after the onset of diabetes . the bmi of the lea rats at 6 months of age ( 0.57 ± 0.02 g / cm2 , n = 5 ) was not significantly different from that of the control wistar rats ( 0.59 ± 0.02 g / cm2 , n = 5 ) , confirming that the lea rats were nonobese . the survival rate of lea rats was examined ( supplementary figure 1 , available online at http://dx.doi.org/10.1155/2013/986462 ) . we found that 95% of the male rats survived to 12 months of age , and 50% survived to 22 months of age . the survival rate of male lea rats was not significantly different from that of normal control wistar rats , which indicates that diabetes does not influence the survival of lea rats . the results of the ogtt in male rats at different ages are shown in figure 2 . two - month - old male lea rats showed impaired glucose tolerance compared with age - matched male wistar rats ( figure 2(a ) ) . at 12 and 14 months of age , the lea rats presented with typical diabetic glucose levels of ≥ 200 mg / dl at 120 min after glucose loading ( figures 2(c ) and 2(d ) ) . the wistar rats did not show any change in blood glucose level in relation to age . the plasma insulin concentrations in male rats at 2 months of age showed that the pre - ogtt values did not differ among the rats , whereas the values at 30 min after glucose loading were significantly lower in lea rats ( figure 2(e ) ) . the plasma insulin level was also significantly lower in lea rats at 12 months of age after glucose loading ( figure 2(f ) ) . these results indicate that lea rats have impaired insulin secretion in response to glucose stimulation . the low insulin levels measured at 30 min after glucose loading suggest that lea rats have a decreased ability to secrete insulin in the early phase ( figures 2(e ) and 2(f ) ) . the insulinogenic index ( i.i . ) of male lea rats was significantly lower at both 2 and 12 months of age compared with that of wistar rats ( table 2 ) , which indicates that early - phase insulin secretion is significantly impaired from an early age .
we performed an insulin tolerance test on nonfasting , 12-month - old male rats by intraperitoneal injection of insulin , to examine the insulin response ( figure 3 ) . before insulin injection , the blood glucose levels were significantly higher in lea rats than in wistar rats . after insulin injection , the blood glucose levels in lea rats significantly decreased , by a maximum of 51.1% over 120 min , to the same levels observed in the wistar rats . these results indicate that lea rats have a normal glycaemic response to exogenous insulin and are not insulin - resistant . the age - dependent histological changes in the pancreas were examined ( figure 4 ) . male lea rats at 2 months of age had an inflammatory reaction in a fraction of the pancreatic islets ( figure 4(a ) ) , although most of the islets were intact . immunostaining for insulin revealed that insulin - positive cells were irregularly distributed within inflammatory foci ( figure 4(b ) ) . the inflamed islets were infiltrated by cells positive for anti - rat macrophage antibody ( figure 4(c ) ) but not for antibodies against cd4 + , cd8 + , or cd45ra ( data not shown ) . however , the inflammatory reactions had disappeared by 6 months of age , and the islets had been replaced by fibrotic remnants in rats over 12 months of age ( figure 4(d ) ) . the number of β-cells was reduced , although some β-cells were still present at 12 months of age ( figure 4(e ) ) . a few tiny islets without fibrosis , which appeared to be regenerative islets , were intermingled in the affected pancreas in rats from 12 months of age ( figures 4(f ) and 4(g ) ) . the pancreatic islets from female lea rats had no pathological changes such as fibrosis or inflammatory reaction . the volume of β-cells relative to the gross volume of the pancreas was determined using the winroof software to analyse sections that were immunostained for insulin ( figure 4(h ) ) . in male lea rats , the volume of β-cells significantly decreased from 0.74 ± 0.23% at 2 months to 0.52 ± 0.08% at 6 months . the control wistar rat strain showed a rising , albeit not statistically significant , trend from 0.79 ± 0.05% at 2 months to 0.96 ± 0.24% at 6 months . these results reveal that a significant age - related decrease in the insulin - positive area occurs in lea rats owing to severe fibrosis of the islets . to identify whether the diminished β-cell mass in lea rats was caused by a reduction in cell number or atrophy of the cells , we counted the number of nuclei in the insulin - positive areas of 20 islets from five 6-month - old rats of each strain ( lea and wistar ) . the insulin - positive area per nucleus did not differ significantly between the two strains ( 117.97 ± 4.13 µm2 per nucleus versus 128.82 ± 13.35 µm2 per nucleus for lea versus wistar rats , resp . ) . these results indicate that a reduction in cell number ( not atrophy of β-cells ) is responsible for the reduction of β-cell volume in lea rats . in lea rats , glucosuria was present in 100% of males at 8 months of age and proteinuria in 66% of males at 12 months of age . histopathological analysis was performed to examine the renal lesions in 12-month - old male lea rats ( figure 5 ) . marked dilatation of the tubular lumina was present , mostly in the superficial cortex ( figure 5(a ) ) . atrophy of the tubular epithelium and flattened / detached renal tubules were also observed ( figure 5(b ) ) .
intracytoplasmic hyaline droplet accumulation and disappearance of the tubular epithelial cell layer , associated with thickening of the basement membrane , were evident in the proximal tubules ( figure 5(c ) ) . there were no obvious pathological changes in the glomeruli at 12 months of age ( figure 5(d ) ) . two inbred strains , long - evans agouti ( lea ) and long - evans cinnamon ( lec ) , which were selected for coat color , were established from a closed colony of long - evans rats at the center for experimental plants and animals , hokkaido university ( japan ) . the lea rat has been known as the control strain for the lec rat , an animal model of wilson disease . however , a large amount of urine with a strong odor , a possible indicator of diabetes , was often observed in lea rats during long - term breeding . in 1996 , we found three male rats that were positive for glucosuria and hyperglycaemia among littermates from an inbred colony of lea / hkm . the lea rats exhibit several distinctive diabetes - related characteristics : ( 1 ) onset of diabetes is observed only in male rats , not in females , at over 6 months of age ; ( 2 ) early - phase insulin secretion is impaired at 2 months of age ; ( 3 ) islet fibrosis progresses in an age - dependent manner ; ( 4 ) the glycaemic response to exogenous insulin is normal ; and ( 5 ) the rats are nonobese . from these results , we conclude that the lea rat is a new rat model for nonobese type 2 diabetes mellitus . several rat model strains of spontaneous type 2 diabetes mellitus have been identified to date , and they are classified into two types , obesity and nonobesity models . the obesity models of type 2 diabetes , such as sand rats , wistar fatty rats , and oletf rats , are characterised by hyperglycaemia , hyperinsulinemia , and insulin resistance . in contrast , the nonobesity models , such as gk , wbn / kob , and spontaneously diabetic torii ( sdt ) rats [ 11 , 19 ] , are characterised by hyperglycaemia , hypoinsulinemia , and the absence of insulin resistance . we classify the lea rat as a nonobesity model because the bmi of lea rats is not different from that of control rats . however , there are several differences between the gk and lea rats . in gk rats , there is no sex difference with respect to the occurrence of diabetes and no age - dependent deterioration of impaired glucose tolerance ; in addition , hyperglycaemia occurs 8 days after birth [ 8 , 20 ] . the sdt rat is a new model of nonobese , severe type 2 diabetes mellitus with hyperglycaemia , hemorrhage in and around the islets , and hyposecretion of insulin ( hypoinsulinemia ) resulting from a significantly decreased number and size of islets . although lea rats displayed no hemorrhage in and around the islets , macrophage infiltration was present around the islets , leading to progressive fibrosis of the islets ( figure 4 ) . glucose intolerance corresponding to the impairment of insulin secretion was observed in male lea rats ( figure 2 and table 2 ) , suggesting that the main cause of diabetes in lea rats is hypoinsulinemia attributable to a decreased number of β-cells in the islets ( figure 4 ) . it is also likely that the significantly decreased capability for early - phase insulin secretion is caused by hypofunction of the β-cells .
although there was an inflammatory reaction in a fraction of the pancreatic islets in male lea rats at 2 months of age , the volume of β-cells in lea rats at that age was comparable to that of control rats ( figure 4(h ) ) , suggesting that lea rats have a congenital defect of insulin secretion in addition to the progressive , age - dependent reduction of β-cell mass . the ability to secrete insulin in the early phase reflects the first phase of insulin secretion from pancreatic β-cells and contributes to the suppression of gluconeogenesis in the liver . therefore , we suggest that the lea rat is unable to suppress the increasing blood glucose concentration that occurs after feeding because of a decreased ability to secrete insulin in the early phase , which eventually leads to the deterioration of β-cell function by glucose toxicity and causes the rats to experience chronic hyperglycaemia . the reduction in the number of β-cells is thought to be the main cause of type 2 diabetes in lea rats , and this is supported by previous observations in human type 2 diabetes patients and rat models of type 2 diabetes mellitus [ 22–26 ] . both congenital and acquired factors are involved in the mechanism of β-cell reduction . in regard to congenital factors , mutations in transcription factor genes , such as insulin promoter factor-1 ( ipf-1 ) and hepatocyte nuclear factor-1 ( hnf-1 ) , have been verified . other studies have revealed that apoptosis reduces β-cells in human diabetes and that it is promoted by amyloid deposition and hyperglycaemia . zhu et al . have reported that impairment of β-cell proliferation causes a decrease in β-cells under hyperglycaemic conditions in oletf rats , and movassat et al . have made similar observations in gk rats . free - fatty - acid- ( ffa- ) induced β-cell apoptosis has been proposed by shimabukuro et al . as the underlying cause in zucker diabetic fatty ( zdf ) rats . although it is speculated that the impairment of β-cell proliferation , the progression of apoptosis by hyperglycaemia , and damage by cytokines produced by macrophages cause the reduction in β-cells in lea rats , further analyses are required to clarify the pathogenesis of diabetes in lea rats . abnormalities of the renal tubules were observed at 12 months of age in lea rats with glucosuria and proteinuria ( figure 5 and table 1 ) . onset of diabetes as determined by ogtt was observed only in male lea rats ( figure 1(a ) ) , and its incidence increased with age : 10% and 61% at 6 and 12 months of age , respectively . however , glucosuria appeared in male rats at 5 months of age , before the onset of diabetes , and was present in 100% of the females , which did not develop diabetes , at 9 months ( table 1 and figure 1(a ) ) . based on these findings , it is unlikely that the glucosuria and proteinuria are simply consequences of the onset of diabetes and impaired glucose tolerance . although the lea rat is used as the control strain for the lec rat because it does not harbor the atp7b mutation , several phenotypes , such as hypersensitivity to x - rays and the lack of d - amino acid oxidase ( dao ) activity , which is involved in the degradation of d - serine , a key coagonist of the n - methyl - d - aspartate ( nmda ) receptor , have been reported in this strain . we are now performing quantitative trait locus ( qtl ) analyses of impaired glucose tolerance and urinary glucose , which should lead to the identification of genes for glucose intolerance , renal glucose excretion , and the development of diabetes in the lea rat .
in conclusion , the lea rat has distinctive characteristics that differ from those of previously described model rats . lea rats develop late - onset diabetes associated with impaired insulin secretion , which is caused by progressive , age - dependent fibrosis of the pancreatic islets . in japan , the prevalence of type 2 diabetes mellitus is increasing rapidly , and more than 10% of individuals over 40 years of age are affected . relatively few diabetic individuals in japan are obese , and impairment of insulin secretion often develops before the onset of diabetes . the unique characteristics of the lea rat are therefore a great advantage for analysing the progression of diabetes mellitus with age . additional studies are expected to disclose the genes involved in type 2 diabetes mellitus .
animal models have provided important information on the genetics and pathophysiology of diabetes . here we have established a novel , nonobese rat strain with spontaneous diabetes , the long - evans agouti ( lea ) rat , derived from the long - evans ( le ) strain . the incidence of diabetes in the males was 10% at 6 months of age and 86% at 14 months , while none of the females developed diabetes . the blood glucose level in lea male rats was between 200 and 300 mg / dl at 120 min in the ogtt . glucose intolerance corresponding to impaired insulin secretion was observed in male rats , and this was the main cause of diabetes in lea rats . histological examination revealed that the reduction of β-cell mass was caused by progressive , age - dependent fibrosis of the pancreatic islets . intracytoplasmic hyaline droplet accumulation and disappearance of the tubular epithelial cell layer , associated with thickening of the basement membrane , were evident in the renal proximal tubules . the body mass index and the glycaemic response to exogenous insulin were comparable to those of control rats . the unique characteristics of the lea rat are a great advantage not only for analysing the progression of diabetes but also for disclosing the genes involved in type 2 diabetes mellitus .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion
PMC4458351
hepatitis g virus ( hgv ) belongs to the flaviviridae , a family that includes three genera and more than 70 members . members of this family are widely variable and biologically different ( 1 , 2 ) . despite similarities in gene structure and replication , there is no antibody cross - reactivity among the proteins of its members ( 3 , 4 ) . hepatitis g virus is an enveloped , spherical virus of 40 - 60 nm diameter . the e2 protein , the most important protein of hgv , is necessary for adhesion and fusion of the virus ( 5 ) ; therefore , determination of anti - e2 antibodies is important . the hgv genome is composed of a single - stranded rna with a length of 11 kb , capped at the 5 ' end and without a poly - a tail at the 3 ' end ( 6 ) . hepatitis g virus cannot be cultivated , and a sensitive and suitable cell line for its culture has not yet been developed . diagnosis of hgv is based on reverse transcription polymerase chain reaction ( rt - pcr ) and enzyme linked immunosorbent assay ( elisa ) in biological samples ; however , the rt - pcr technique is valuable for detecting current infections ( 8 ) . the two techniques , rt - pcr and elisa , target different markers to diagnose hgv ; rt - pcr only detects hgv rna molecules in patient samples , whereas elisa measures antibodies against the e2 protein . therefore , a patient may have antibody titers against the e2 protein while the rt - pcr result is negative because of an active immune response . the prevalence of hgv in blood donors varies from 0.9% to 10% . besides transfusion of blood products , other routes of transmission include transplacental transmission and needle sticks , especially among drug users ( 8 - 16 ) . hepatitis g virus infection is mostly concomitant with hepatitis b and c virus ( hbv and hcv ) infections . nevertheless , hgv has no definite impact on patient status ( 17 , 18 ) . however , there are reports on the pathogenesis of hgv that make prevalence studies essential , especially for healthcare providers and authorities . according to some reports , hgv may cause fulminant hepatitis , cases of which occur sporadically ( 19 - 28 ) . since patients with renal failure who undergo dialysis receive blood products and transfusions , the current study measured the prevalence of hgv in patients undergoing hemodialysis and kidney transplantation in khuzestan province , iran . the current study aimed to investigate the prevalence of hgv by determining antibodies against e2 , the viral envelope antigen , and the viral rna using elisa and rt - pcr techniques . to evaluate the prevalence of hgv antibody and rna , 516 serum samples were collected from patients undergoing hemodialysis and kidney transplant recipients and stored at -70°c until the day of testing . other data , including gender , inpatient or outpatient status , and place of residence , were recorded for each patient ; 86 cases were from the ahvaz kidney transplantation center and the rest were from other cities of khuzestan province ; 60 sera belonged to patients undergoing hemodialysis in ahvaz . all patients were informed of the study purpose and signed a written consent letter . furthermore , this study was reviewed , accepted , and approved by the ethical committee of ahvaz jundishapur university of medical sciences .
to diagnose seropositive patients , elisa tests were performed using a diagnostic kit ( diaplus inc . , usa ) . the test was performed according to the manufacturer 's instructions ; briefly , sera were diluted with sample buffer ( 1/10 ratio ) , the diluted samples were added to the wells and incubated at 37°c for 30 minutes , and the wells were then washed ; antihuman antibody conjugated with horseradish peroxidase ( hrp ) was added , followed by another incubation at 37°c . after rewashing the wells , substrate was added and the chromogenic reaction was stopped with stopping solution . the optical density of each sample was measured at 450 nm , with 630 nm as the reference filter . to evaluate hbv and hcv involvement in the patients , serological determination was done using hepatitis b surface antigen ( hbs - ag ) and hcv - ab elisa kits ( diaplus , usa ) according to the kit manufacturers ' instructions . to determine whether patients were positive or negative for hgv rna , rt - pcr was performed on the sera . to extract viral rna , the tripure rna extraction kit ( roche , germany ) was used , and to synthesize cdna , the sensiscript kit ( qiagen , usa ) was used . briefly , after viral rna extraction , the microtubes were kept at 65°c for five minutes and transferred onto ice to prepare the rna as template . cdna synthesis was then performed as follows : rt buffer 2 µl , dntps ( 5 mm ) 2 µl , random primer ( 10 pmol ) 2.5 µl , rnasin ( 10 u/µl ) 1 µl , reverse transcriptase 5 µl , and distilled water 6.5 µl were added , mixed , and incubated at 37°c for one hour . the primers used for the first step of pcr were 58 ( 58f-5 ' cag ggt tgg tag gtc gta aat cc-3 ' ) and 75 ( 75r-5 ' cct att ggt caa gag aga cat-3 ' ) . the first step was performed as follows : pcr buffer 5 µl , dntps ( 5 mm ) 1 µl , forward primer ( 58f ) 1 µl , reverse primer ( 75r ) 1 µl , taq polymerase 0.3 µl , cdna 5 µl , and distilled water 36.7 µl were combined ; the thermocycler apparatus ( techne , uk ) was then programmed as follows : 94°c for 30 seconds , 60°c for 30 seconds , and 72°c for 30 seconds , for 30 cycles . in the second ( nested ) step , 5 µl of the first - step product was used with primers 131 ( 131f-5 ' aag aga gac att gaa ggg cga cgt-3 ' ) and 134 ( 134r-5 ' ggt cat ctt ggt agc cac tat agg-3 ' ) . ultimately , 5 µl of the amplified product was loaded into the wells of a 2% agarose gel , and electrophoresis was run at 100 v for one hour . the gel containing the final pcr product was immersed in a dish containing 20 µl ethidium bromide in 200 ml distilled water . a transilluminator apparatus ( vilber lourmat , france ) was used to visualize and image the bands . statistical analysis was conducted using the chi - square test and determination of the prevalence of each investigated variable ; a confidence interval of 95% ( ci = 0.95 ) was used to assess the significance of the results . elisa and rt - pcr tests were performed on 516 sera gathered from different cities of khuzestan province to evaluate the prevalence of hgv . table 1 shows the results obtained with the two techniques . according to the elisa results for the different cities , 4 ( 1.95% ) of the 126 samples from ahvaz hospitals were positive for hgv antibody . of the 26 samples from sosa and shushtar cities , each city had one positive sample . positive samples were also detected among the 36 samples from mahshahr and the 17 samples from andimeshk , whereas all samples from masjid soleiman , shadegan , dezfoul , khoramshahr , and baghmalek were negative for e2 antibody . the distribution of hgv was significantly different among the different cities ( ci = 0.95 , p = 0.004 ) . of the 516 sera , 285 ( 55.23% ) samples were from males and 231 ( 44.77% ) from females . positive and negative cases of e2 antibodies in males and females are presented in table 2 . there was no significant difference between the genders of the patients with respect to hgv antibody ( ci = 0.95 , p = 0.313 ) . regarding the association between the frequency of blood transfusions and hgv antibodies , the data showed no considerable difference ( mean ± sd = 23.68 ± 15.9 ; ci = 0.95 , p = 0.99 ; detailed data not shown ) . concurrent infection of hgv with hcv or hbv was also investigated using elisa tests for hcv antibody and hbs antigen . of the 516 samples , 438 sera were negative for both hgv and hcv ; of the rest , 40 cases were hcv positive and hgv negative , and 37 sera were hcv negative and hgv positive . only one sample was positive for both hgv and hcv , which was considered a co - infection .
from the 516 sera , 470 samples were negative for both hbv and hgv ; eight were hbv positive and hgv negative ; 36 sera were hbv negative and hgv positive ; and two samples were positive for both hbv and hgv . patients with transplantation were more often positive for hgv immunoreactive antibodies in comparison with those with renal failure ( p < 0.001 ; ci = 0.95 ) . furthermore , the likelihood ratio ( lr ) was 25.116 , implying a conclusively increased likelihood of hgv positivity in patients with transplantation . the high value of the lr also implies the importance of performing hgv immunoassays for kidney donors . the calculated odds ratio was 0.178 ( 95% ci = 0.089 - 0.353 ) , which implied that the chance of hgv antibody positivity was higher among patients with transplantation and lower in patients with renal failure . in fact , 16% of patients with transplantation were positive for hgv antibody , whereas only 3.3% of patients with renal failure were positive ( table 3 ) . for hcv , patients with renal failure were more often positive for hcv immunoreactive antibodies compared to those with transplantation ( p = 0.001 ; ci = 0.95 ) . the lr was 13.475 . the high value of the lr also implies the importance of performing hcv immunoassays for kidney donors . the calculated odds ratio was 7.381 ( 95% ci = 1.76 - 30.952 ) , which implied that the chance of hcv antibody positivity was lower in patients with transplantation and higher in those with renal failure . in fact , only 1.3% of patients with transplantation were positive for hcv antibody , compared with 9.1% of patients with renal failure ( table 4 ) . the prevalence of hbv seropositivity was not significantly different between patients with transplantation and those with renal failure ( p = 0.763 ; ci = 0.95 ) . the likelihood ratio for the hbv elisa test was 0.088 , implying a conclusive decrease in disease likelihood in patients with renal failure . the low value of the lr also suggests that performing the hbv elisa test is not essential for patients with renal failure , possibly because of vaccination or the higher rate of blood transfusion episodes in such patients . furthermore , the odds ratio for the hbv elisa test comparing patients with transplantation and those with renal failure was 0.811 ( 95% ci = 0.207 - 3.177 ) ( table 5 ) . rt - pcr gave negative results for all 38 serum samples that were seropositive for hgv antibody ; of the remaining 478 seronegative samples , 16 ( 3.14% ) were positive by the rt - pcr assay . considering the low frequency of rt - pcr positive cases , the frequency was not reported for each city . since hepatitis g virus was first identified in 1995 , a series of studies have considered the epidemiology and diagnosis of this virus and its pathogenesis , especially in hepatitis . different social groups have been tested for hgv , including healthy blood donors , patients undergoing dialysis , kidney transplantation recipients , patients with acute , chronic , or idiopathic hepatitis , and those with cirrhosis or cancer . in the present study , all patients were evaluated with two techniques : elisa , a serological test to measure hgv antibody in serum samples , and rt - pcr , a molecular test to detect viral rna in serum samples .
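each of the group comparisons above reduces to a 2 × 2 contingency table ( patient group × test result ) . a minimal python sketch of such an analysis is given below , assuming scipy is available ; the counts are placeholders for illustration and are not the study 's data , and the article 's " likelihood ratio " ( presumably the likelihood - ratio chi - square reported by the statistics package ) is not reproduced here .

```python
import math
import numpy as np
from scipy.stats import chi2_contingency

# rows: transplantation, renal failure; columns: antibody positive, negative
# placeholder counts for illustration only
table = np.array([[14, 72],
                  [24, 406]])

chi2, p, dof, expected = chi2_contingency(table)

# odds ratio with a 95% confidence interval computed on the log scale
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"chi-square = {chi2:.3f}, p = {p:.4f}")
print(f"odds ratio = {odds_ratio:.3f} (95% CI {ci_low:.3f}-{ci_high:.3f})")
```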
all sixteen sera that were positive by rt - pcr belonged to patients undergoing hemodialysis ( 3.14% of all the patients undergoing hemodialysis ) . the prevalence of hgv in patients undergoing kidney transplantation was reported to be 24% in italy ( 18 ) . the frequency of hgv among patients undergoing hemodialysis has been reported as 50% in germany ( 19 ) , 12.8% in brazil ( 20 ) , 4.5% in japan ( 21 ) , and 17% in taiwan ( 22 ) . reported data showed a 24.3% frequency of hgv among patients undergoing hemodialysis in south africa ( 15 ) . the present study showed a 3.7% frequency of hgv among patients undergoing hemodialysis in khuzestan province , which is close to the data reported from japan ( 21 ) . however , the overall prevalence of hgv among patients undergoing hemodialysis can vary from 1.3% to 55% . on the other hand , according to the reported data , rates of concomitant infection of hgv with hcv or hbv are not consistent . in the present study , there was only one case of concomitant hgv and hcv infection , while hgv and hbv co - infection was found in two cases . however , a previous study showed the frequency of chronic hbv and hcv concomitant infection with hgv to be 55% and 18% , respectively ( 25 ) . other epidemiological studies in iran reported a prevalence of 12.6% among patients undergoing hemodialysis in tehran ( 29 ) . another study reported 4.8% hgv rna positive cases among blood donors who were negative for hiv - ag / ab , hcv - ab , and hbs - ag ( 30 ) . the current study found a prevalence of 3.14% hgv rna among the patients undergoing hemodialysis , which differed from the results obtained in other countries but agreed with the previous studies in iran . it is noteworthy that no definitive pathogenic role has so far been established for hgv . however , it is still necessary to monitor hgv in the population since , despite its questionable role in hepatitis and its apparently non - pathogenic nature , it may evolve into a dangerous pathogen in the future , and its role in healthy carriers may yet be revealed . the current study emphasized that transplantation may be an important route of hgv transmission in iranian kidney transplantation recipients ; therefore , hgv screening should be considered routinely for kidney donors . the higher rate and chance of hcv positivity in patients with renal failure than in transplantation recipients may be due to more episodes of blood transfusion and greater exposure to blood - transmitted hcv . since studies of hgv have only recently begun in iran , it is necessary to continue such work , especially hgv sequencing in patients undergoing hemodialysis and in those with hepatitis b and c infections . investigating the association of hgv with the clinical manifestations of the aforementioned patient groups could also be valuable for a clearer picture of disease severity and treatment , particularly regarding hgv transmission through blood or blood products .
background : hepatitis g virus ( hgv ) is a member of the flaviviridae . the prevalence of hgv in healthy people is very low , but the virus is more prevalent in patients with hepatitis . besides , the relative frequency of hgv in patients undergoing hemodialysis and in kidney recipients is very high . the role of hgv in pathogenesis is not clear . since this virus cannot be cultivated , molecular techniques such as reverse transcription polymerase chain reaction ( rt - pcr ) are applied to detect hgv . objectives : the current study aimed to investigate the prevalence of hgv by determining antibodies against e2 , the viral envelope antigen , and the viral rna by enzyme linked immunosorbent assay ( elisa ) and rt - pcr techniques . the rationale of the study was to determine the prevalence of hgv in patients undergoing hemodialysis and kidney transplantation in khuzestan province , iran . patients and methods : five hundred and sixteen serum samples from patients undergoing hemodialysis and kidney transplantation in various cities of khuzestan province were collected . anti - hepatitis g e2 antibodies were investigated by the elisa method . rnas were extracted from the sera and hepatitis g rna was detected by rt - pcr . results : of the 516 samples , 38 ( 7.36% ) specimens were positive for anti - hgv by elisa . all of these elisa positive samples were negative for the hgv genome by rt - pcr . of the remaining 478 elisa negative samples , 16 ( 3.14% ) samples were positive by rt - pcr . conclusions : hepatitis g virus was not prevalent in the patients undergoing hemodialysis and kidney transplantation in khuzestan province . although reports have indicated a high frequency of co - infection of hgv with hepatitis b and c viruses , in the current research co - infection of hgv with b and c was not considerable . since different groups and subtypes of hgv are reported , periodic epidemiologic evaluation of hgv and its co - infection with other hepatitis viruses is suggested in other populations such as patients with thalassemia ; periodic epidemiologic monitoring of hgv may also be helpful to control future potential variations of the virus .
1. Background 2. Objectives 3. Patients and Methods 3.1. Enzyme Linked Immunosorbent Assay 3.2. Reverse Transcription-Polymerase Chain Reaction 3.3. Statistical Analysis 4. Results 5. Discussion
PMC1160138
a multiplex pcr solution specifies a forward and reverse primer for each single nucleotide polymorphism ( snp ) and assigns each primer pair to one of a finite set of tubes . in partitioning snp primers into individual tubes , care must be taken to ensure that all primers within a tube are mutually compatible , i.e. that they do not form primer - dimers through cross - hybridization , which would otherwise reduce target product yield . the multiplex pcr problem is equivalent to partitioning a graph g(v , e ) into a set of disjoint cliques , where nodes represent snps , edges connect two snps whose associated primers are tube - compatible and resulting cliques constitute valid multiplex pcr tubes . the problem of partitioning a graph into k ≥ 3 disjoint cliques is np - complete ( 1 ) . the muplex system is unique in that it provides multiple design alternatives that reveal inherent tradeoffs with respect to multiple competing objectives , such as average tube size , tube size uniformity and overall snp coverage . multiplex pcr is a core enabling technology for high - throughput snp genotyping , serving as a foundation for applications in forensic analysis , including human identification and paternity testing ( 2 ) , the diagnosis of infectious diseases ( 3,4 ) , whole - genome sequencing ( 5 ) , and pharmacogenomic studies aimed at understanding the connection between individual genetic traits , drug response and disease susceptibility ( 6 ) . for example , in the hme assay ( 7 ) , genomic sequences containing the snps of interest are first amplified by pcr . after shrimp alkaline phosphatase digestion of excess dntps , a primer extension reaction is carried out to interrogate the snps . the primer extension products ( often oligonucleotides 18 to 25 bases long ) are then detected by matrix - assisted laser desorption ionization time - of - flight ( maldi - tof ) mass spectrometry . given the large molecular weight window ( 4500–9000 da ) and the high resolution of the mass spectrometry , 20 or more snps can be easily and simultaneously genotyped . thus , the throughput - limiting step is often the pcr multiplexing ( plex ) level . in a 384-well format with 20-plex pcr , the per - snp cost can be reduced to just a few cents , while a single maldi - tof mass spectrometer can be used by a single operator to genotype 76 800 snps in 1 day . given a set of dna sequences and a snp location at each , the system aims at designing ( i ) a pair of forward and reverse primers for each sequence and ( ii ) a placement of these primer pairs into maximal - size tubes such that the coverage ( number of sequences ) included in the pcr assay is maximized . the user provides a set of snps and associated flanking sequences in the standard fasta format . these sequences may be entered manually or uploaded from a file . to improve primer specificity , users may instruct the muplex server to filter resulting primer candidates by aligning them against the human genome using blat ( 8 ) . in addition to the snp sequences , the user specifies primer selection criteria , including length , gc content , positional constraints , and melting temperature tm constraints for individual primer oligos , as well as interaction parameters ( maximum local alignment score , 3′ tail Δg ) and a maximum tm range for all primer pairs within a single multiplex assay .
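the compatibility graph described above can be sketched in a few lines of python . the dimer screen below is a deliberately crude stand - in ( a fixed - window count of 3′ - end complementarity with an arbitrary threshold ) for the local alignment score and 3′ tail Δg tests that muplex actually applies , so the function names , window and threshold are assumptions for illustration only .

```python
import itertools

COMPLEMENT = {"a": "t", "t": "a", "g": "c", "c": "g"}

def three_prime_complementarity(p1, p2, window=8):
    """Crude primer-dimer screen: count how many of the last `window` bases
    of p1 pair, antiparallel, with the 3' end of p2.  A stand-in for the
    alignment-score / delta-G checks used by the real system."""
    tail1 = p1[-window:]
    tail2 = p2[-window:][::-1]  # read the other primer's 3' end inward
    return sum(1 for x, y in zip(tail1, tail2) if COMPLEMENT.get(x) == y)

def compatible(pair1, pair2, max_matches=4):
    """Two SNPs are tube-compatible if no forward/reverse primer of one
    forms a strong 3' duplex with any primer of the other."""
    return all(three_prime_complementarity(x, y) <= max_matches
               for x in pair1 for y in pair2)

def build_compatibility_graph(primer_pairs):
    """Adjacency sets: nodes are SNP ids, edges join SNPs whose primer
    pairs pass the pairwise compatibility screen."""
    graph = {snp: set() for snp in primer_pairs}
    for s1, s2 in itertools.combinations(primer_pairs, 2):
        if compatible(primer_pairs[s1], primer_pairs[s2]):
            graph[s1].add(s2)
            graph[s2].add(s1)
    return graph
```

the resulting adjacency sets feed directly into the tube - assignment step sketched after the architecture description below .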
muplex then solves the dual problem of selecting primer pairs for amplifying the flanking sequence of each snp and partitioning these primer pairs into multiplex - compatible sets , each corresponding to a single multiplex pcr tube reaction . as noted above , muplex generates multiple solution alternatives , each corresponding to a set of multiplex pcr tubes . each solution is evaluated with respect to the following objectives : ( i ) the total number of tubes required ; ( ii ) the minimum , average and maximum tube size ( multiplexing level ) ; ( iii ) the number of unique tube sizes ; and ( iv ) the total snp coverage , measured both as the percentage of snps ( associated primer pairs ) assigned to maximum - sized tubes and as the percentage of snps assigned to tubes of any size . for example , some solutions may achieve higher overall multiplexing levels but at the expense of lower coverage , i.e. by excluding some snps from the solution . in addition , muplex tries to minimize the number of unique tube sizes in order to facilitate automation in a high - throughput genomics environment . results are returned to the user by email ; the email contains a solution summary allowing quick comparison of each alternative , and details for each solution including the selected primers , their individual properties and assigned tube . muplex employs a number of heuristic algorithms and allows new algorithms to be added over time . solution time depends on the number of solution alternatives requested , the number of snps ( typically fewer than 100 ) and the target multiplexing level . muplex is written entirely in java ( j2sdk1.4.2_05 ) and employs the apache jakarta tomcat server connected to a backend mysql database . individual solvers operate asynchronously on a network of workstations running a customized distribution of the linux operating system based on fedora core 3 . these solvers run independently , and new problems are assigned to the first available solver . as depicted in figure 2 , agents encapsulating specific algorithms either create new solutions from scratch , improve or modify existing solutions , or remove unpromising solutions from further consideration . for example , one creator algorithm is based on a best - fit methodology that iteratively assigns snps to the largest open compatible tube . when the tube size reaches the target multiplexing level specified by the user , it is closed , and no further additions or modifications to that tube are made . one improver algorithm eliminates partial tubes in order to reduce the number of unique tube sizes , at the cost of reduced coverage , while another attempts to reformulate partial tubes in an effort to identify additional full tubes . efficiency is enhanced during the optimization process by periodically culling unpromising solutions from the population of candidates . the architecture is scalable in the sense that new algorithms can be readily plugged in over time , and it is robust in that it does not depend on a single algorithm to generate every viable alternative and because system load is balanced across a distributed collection of solvers . within a given solution , there is no guarantee that a snp will be assigned , and the results depend on the random order in which snps and primers are processed . resulting coverage critically depends on the number of snps and the target level of multiplexing desired ( j. rachlin , c. m. ding , c. cantor and s. kasif , manuscript submitted ) . the muplex server allows scientists to design multiplex pcr assays while explicitly considering intrinsic design tradeoffs . the consideration of competing alternatives has played a key role in the development of optimization and decision - support technologies in complex domains such as manufacturing and transportation logistics ( 9,10 ) . here , we have demonstrated the viability of such approaches to the optimization of multiplex pcr assays .
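the best - fit creator agent described above can be approximated by a short greedy routine over the compatibility graph from the earlier sketch . the ordering of snps ( fewest compatible neighbours first ) and the handling of leftover tubes are illustrative assumptions , not details documented for muplex .

```python
def best_fit_tubes(graph, target_plex):
    """Greedy sketch of the 'best-fit' creator: place each SNP in the largest
    open tube whose current members are all neighbours of the SNP in the
    compatibility graph; a tube is closed once it reaches the target
    multiplexing level.  SNPs that fit nowhere open a new tube."""
    open_tubes, closed_tubes = [], []
    for snp in sorted(graph, key=lambda s: len(graph[s])):  # scarcest first
        candidates = [t for t in open_tubes if all(m in graph[snp] for m in t)]
        if candidates:
            tube = max(candidates, key=len)  # best fit: largest open tube
            tube.append(snp)
        else:
            tube = [snp]
            open_tubes.append(tube)
        if len(tube) == target_plex:         # reached the target plex level
            open_tubes.remove(tube)
            closed_tubes.append(tube)
    return closed_tubes, open_tubes
```

closed tubes correspond to full multiplex reactions ; the remaining open ( partial ) tubes are what the improver agents then merge , reformulate , or eliminate .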
future efforts will focus on the development of new algorithms and on allowing users to impose dynamic feedback constraints in an effort to further guide the design optimization process towards solutions that more closely meet the scientist 's particular design objectives . we also plan to develop a distributed version that will run on our 128-processor linux cluster . figure 1 : the muplex homepage . users specify primer selection criteria and provide a collection of snps in the fasta format . the system emails to the user one or more solution alternatives revealing key design tradeoffs . figure 2 : once a problem is submitted and validated , it is assigned to one of several solvers distributed across the network . each solver instantiates one or more agents ( algorithms ) that either create new solutions from scratch , attempt to improve an existing solution candidate or eliminate unpromising solutions from further consideration . the collaboration of algorithms in this manner enables the system to produce multiple multiplex pcr solutions that reveal intrinsic design tradeoffs .
we have developed a web - enabled system called muplex that aids researchers in the design of multiplex pcr assays . multiplex pcr is a key technology for a wide range of applications , including detecting infectious microorganisms , whole - genome sequencing and closure , forensic analysis , and flexible yet low - cost genotyping . however , the design of multiplex pcr assays is computationally challenging because it involves tradeoffs among competing objectives , and extensive computational analysis is required in order to screen out primer - pair cross interactions . with muplex , users specify a set of dna sequences along with primer selection criteria , interaction parameters and the target multiplexing level . muplex then produces a set of multiplex pcr assays that covers as many of the input sequences as possible . muplex provides multiple solution alternatives that reveal tradeoffs among competing objectives . muplex is uniquely designed for large - scale multiplex pcr assay design in an automated high - throughput environment , where high coverage of potentially thousands of single nucleotide polymorphisms is required . the server is available at .
INTRODUCTION MuPlex features ARCHITECTURE CONCLUSIONS AND FUTURE WORK Figures and Tables
PMC4897371
the incidence of laryngeal cancer has decreased in the usa in recent years as rates of smoking have declined . however , larynx cancer continues to be a serious problem for individuals suffering from this disease , with treatment frequently affecting the patient 's ability to phonate and swallow . this year , 12630 people in the usa are estimated to be diagnosed with laryngeal cancer and 3610 will die from this disease . the veterans affairs laryngeal cancer study and the radiation therapy oncology group trial 91 - 11 are the basis for the organ - preservation treatment approaches currently employed for advanced laryngeal cancer . definitive radiation treatment with chemotherapy is utilized as the initial treatment strategy for many advanced laryngeal cancers , except those with cartilage involvement . however , when there is persistence of disease or recurrence of cancer after chemoradiation , salvage total laryngectomy is often necessary to achieve cure . pharyngocutaneous or salivary fistula is a common complication after salvage total laryngectomy and can lead to serious consequences . fistulas can lead to infection and skin breakdown , prolonging the patient 's hospital stay and at times necessitating operative repair . in very serious cases , the persistent bathing of saliva around major vessels can lead to arterial erosion and subsequent carotid blowout . the reported incidence of pharyngocutaneous fistula for primary total laryngectomy varies from 10 to 35% [ 4–13 ] . for salvage total laryngectomy , the reported fistula rate is generally higher , varying in the literature from 25 to 50% [ 14–19 ] . many groups have attempted to identify the risk factors for salivary fistula with salvage total laryngectomy , and associations have been reported for poor preoperative nutrition , low hemoglobin , prior tracheostomy , liver disease , and diabetes [ 4 , 12 , 16 ] . with some exceptions , most groups have found that a prior history of radiation and/or chemotherapy predisposes patients to a higher risk of salivary fistula after total laryngectomy [ 9–11 , 20–22 ] . in an attempt to reduce the incidence of salivary fistula after salvage total laryngectomy , the use of pedicled or free vascularized tissue transfer to reinforce the pharyngeal closure has been advocated . while there is some data that vascularized flaps may provide benefit , the conclusions have thus far been conflicting [ 6 , 23–26 ] . this study aims to identify what factors play a role in the development of pharyngocutaneous or salivary fistulas in patients undergoing salvage total laryngectomy . a secondary aim was to separately analyze the predictive factors for the development of minor fistulas that are managed conservatively and major fistulas that are severe enough to require surgical intervention . a retrospective chart review was performed for all patients who underwent salvage total laryngectomy for laryngeal squamous cell carcinoma at the university of california , san francisco ( ucsf ) . we included all patients who had laryngeal squamous cell carcinoma treated primarily with radiation therapy or chemoradiation who subsequently were found to have recurrent or persistent disease , requiring a salvage total laryngectomy between january 1 , 2002 , and january 1 , 2012 . oncologic resection was completed within the department of otolaryngology - head and neck surgery at ucsf . during this time period , eight different attending surgeons performed these laryngectomies .
data collection was performed using all electronic medical record systems in place at ucsf , and patient records were screened for inclusion in this study using procedure codes for total laryngectomy . patients who had salvage total laryngectomy for reasons other than cancer , such as chronic aspiration or a dysfunctional larynx , were excluded . demographic data were then collected , including information about sex , race , ethnicity , and tobacco or alcohol use . oncologic data regarding the primary tumor , including the american joint committee on cancer ( ajcc ) stage , tnm stage , and detailed histopathologic data , were collected . salvage laryngectomy surgical details regarding extent of pharyngeal resection or concurrent neck dissection were reviewed . the type of neopharyngeal closure was also evaluated with regard to whether a single- , double- , or triple - layer closure was performed . data regarding use of pedicled pectoralis muscle flaps or free tissue transfer were also collected . the inlay method refers to using the skin paddle of the pectoralis myocutaneous flap to reconstruct a portion of the neopharyngeal wall . the onlay method refers to placing a pectoralis myofascial flap on top of the neopharyngeal closure without actually augmenting the neopharynx wall . the occurrence of a salivary fistula for each of these patients was determined by reviewing the discharge summary for the hospital stay following the salvage total laryngectomy as well as the clinic note for the first postoperative visit . a fistula was defined as any documented clinical suspicion or clear evidence of salivary leak , on a continuum from erythema of the neck to saliva within the surgical drain to frank wound breakdown and leakage of saliva . major salivary fistulas were defined as those that needed revision surgery for closure of the leak . data analyses were performed with sas , version 9.3 ( sas institute , cary , north carolina ) . normally distributed data were analyzed using independent sample t - testing and nonparametric data using mann - whitney testing . of the patients identified by procedure codes , 133 were excluded because they did not receive prior radiation or chemoradiation or because their laryngectomy was not done for cancer . in total , there were 48 patients who met inclusion criteria for the study . radiation metric data were available for 27 patients ; 40/48 patients received radiation treatment at an outside hospital . total radiation dose varied from 6000 cgy to 7920 cgy ( mean 6900 cgy , median 7000 cgy ) . for 40 patients , it was possible to assess whether there was persistence versus recurrence of tumor . there was persistence of tumor in 13 patients , occurring 1.9 months to 4.8 months after radiation treatment . there was recurrence of tumor in 27 patients , occurring 6.2 months to 24 years after radiation treatment ( median 11.4 months ) . once recurrence or persistence was diagnosed , salvage surgery was scheduled , with a mean interval from date of diagnosis to surgery of 38 days . reconstruction methods were varied : 9 had single - layered primary closure , 14 had double - layered primary closure , 7 had triple - layered primary closure , 3 had a pedicled pectoralis myocutaneous inlay flap without free tissue transfer , 9 had a pedicled pectoralis myofascial onlay flap without free tissue transfer , and 4 had free tissue transfer . one patient had a temporary esophagostoma and pharyngostoma created in anticipation of future reconstruction , but , for unclear reasons , this patient never had definitive reconstruction .
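the group comparisons described in the methods above were run in sas 9.3 ; the sketch below reproduces the same two - test approach ( independent - samples t test for normally distributed variables , mann - whitney u for nonparametric ones ) in python on hypothetical , made - up values , purely for illustration .

# Illustrative only: the study's analyses were run in SAS 9.3; this Python sketch
# mirrors the two-test approach on hypothetical, made-up values.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# hypothetical ages (years) for the 23 patients with and 25 patients without a fistula
age_fistula = rng.normal(64, 8, size=23)
age_no_fistula = rng.normal(65, 8, size=25)

# normally distributed variable -> independent-samples t test
t_stat, t_p = stats.ttest_ind(age_fistula, age_no_fistula)
print(f"t test: t = {t_stat:.2f}, p = {t_p:.3f}")

# hypothetical skewed variable (e.g., days from diagnosis to surgery) -> Mann-Whitney U
days_fistula = rng.lognormal(mean=3.6, sigma=0.4, size=23)
days_no_fistula = rng.lognormal(mean=3.5, sigma=0.4, size=25)
u_stat, u_p = stats.mannwhitneyu(days_fistula, days_no_fistula, alternative="two-sided")
print(f"Mann-Whitney: U = {u_stat:.0f}, p = {u_p:.3f}")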
one patient had a stapler - assisted closure of the neopharynx . among patients who had primary closure , it could not be determined through chart review how the second or third layers were closed , if applicable . twenty - three of the 48 patients had clinical evidence of a fistula , for an overall fistula incidence of 47.9% . nine patients had a major fistula requiring operative repair , for a major fistula rate of 18.8% . we analyzed various preoperative patient and tumor characteristics to determine whether any were associated with fistulas ( table 1 ) . there were no statistically significant associations between overall fistula rate and sex , ajcc tumor stage , or t or n status at the time of initial diagnosis . age was similar between those who developed fistulas compared to those who did not ( 64 versus 65 years , p = 0.72 ) . there was not a significant difference in fistula rate whether patients received prior radiation at an academic medical center or a community medical center . the use of chemotherapy was not associated with a significant difference in the overall fistula rate . performing a concurrent neck dissection with the total laryngectomy was likewise not associated with a significant difference in the overall fistula rate . next , comparisons of several closure techniques were made to determine whether differences in fistula rate could be identified ( table 2 ) . there was no significant difference in fistula rate for patients who had a complete pharyngectomy versus those that had either a partial or limited pharyngectomy . the rate of fistula was observed to decrease with an increasing number of layers of primary closure , from 66.7% for single - layer closure to 28.6% for triple - layer closures . the fistula rate for pectoralis muscle flap onlay ( 22.2% ) was lower than any of the primary closure techniques . we found that those who developed major fistulas were older ( 71 versus 63 , p = 0.03 ) . sex , ajcc tumor stage , and t or n status at initial diagnosis were not associated with major fistulas . no significant difference in major fistula incidence was detected whether patients were radiated at an academic or community hospital , whether they received chemotherapy , or whether a concurrent neck dissection was performed . the various neopharyngeal closure techniques were also examined for major fistulas ( table 4 ) . comparing patients who had a complete pharyngectomy to those who had a partial or limited pharyngectomy , no significant difference in major fistula rate was found . compared to multilayer primary closures , there was a trend toward a higher major fistula rate with single - layer closures ( 44.4% versus 12.82% , p = 0.09 ) . the rate of major fistula was observed to decrease with an increasing number of layers of primary closure , from 44.4% for single - layer closure to 0% for triple - layer closures . no patients ( 0 of 7 ) with triple - layer closure had a major fistula , and only 1 of 9 patients ( 11.1% ) with pectoralis muscle flap onlay had a major fistula . major salivary fistulas requiring reoperation occurred in 9 patients ( 18.8% ) , which is within the broadly documented range of salivary fistula rates ( 10 - 50% ) reported in the literature [ 4 - 19 ] . it is notable that only a few of these studies delineate how a salivary leak is defined . fistulas can vary widely in their presentation , from a small leak having minimal impact on the postoperative course to large - volume salivary drainage leading to prolonged hospital stay and potentially catastrophic consequences requiring surgery . for example , grau et al . defined a fistula as those salivary leaks lasting more than 2 weeks .
in our study , we counted any clinically evident fistula regardless of severity or length of time that the leak was present . we also sought to categorize fistulas into minor and major fistulas to determine whether more specific risk factors for salivary leaks could be identified . we did additional analysis on those fistulas requiring operative intervention , which we defined as major fistulas . the way in which the salivary fistula rate is calculated is important , since it affects both the magnitude of the reported fistula rate and the risk factors that are identified . among the preoperative factors studied , our study did not reveal a difference in salivary leak rates with regard to sex , ajcc stage , t or n status at initial diagnosis , or whether radiation treatment was performed at our institution versus a community hospital . other possible risk factors such as the addition of adjuvant chemotherapy , extent of pharyngeal resection , or concurrent neck dissection were not associated with an increased salivary fistula rate in our study . other groups have reported various preoperative factors that are associated with salivary fistula after total laryngectomy . the majority of the literature shows a trend towards higher fistula rates in patients with a history of radiation to the larynx [ 9 - 11 , 20 - 22 ] . the magnitude of the radiation dose seems to be important . in a study by vendelbo johansen et al . , the fistula rate for salvage total laryngectomy was 25% if patients received 57 gray ( gy ) compared with 92% for those receiving 72 gy . other studies have corroborated this finding that higher dose and larger radiation field contribute to fistula formation . in a number of studies , the addition of adjuvant chemotherapy increases the risk of fistula formation by up to twofold when compared to radiation alone [ 28 , 29 ] . one study found a significant increase in fistula rate if salvage total laryngectomy was done within 4 months of radiation . other studies have similarly found a higher wound complication rate for surgeries done soon after radiation [ 8 , 11 , 13 , 31 ] . besides a history of radiation or chemoradiotherapy , nonglottic tumors or advanced t3 or t4 tumors tend to have elevated rates of pharyngocutaneous fistula after salvage total laryngectomy . patients with nutritional deficiencies , hypothyroidism , or hypoalbuminemia are at higher risk as well [ 4 , 19 , 32 ] . the link between postlaryngectomy fistula formation and previous radiation can be explained through radiation therapy 's cellular mechanism of action . radiation induces cell death through dna - damaging mechanisms . though preferentially affecting rapidly dividing cells such as malignant tumors , radiation also damages normal cells such as connective tissue and muscle . on a microscopic level , radiation leads to progressive fibrosis and obliterative endarteritis of the blood vessels , which in turn inhibits future wound healing . chemotherapy has been thought to be an effective radiosensitizer , inducing more cellular damage , more fibrosis , and obliteration of the microcirculation . it has been hypothesized that placing nonradiated vascularized tissue into the compromised recipient wound bed can improve wound healing and reduce the incidence of salivary fistulas in those patients who have undergone prior radiation and/or chemotherapy . we examined whether the neopharyngeal closure technique correlated with the overall ( major and minor ) fistula rate .
we observed a decrease in overall fistulas and major fistulas with an increasing number of primary closure layers . increasing the number of layers of closure may minimize the risk of the suture line dehiscing . to our knowledge , this is the first study to examine the impact of varying the number of layers of primary closure on fistula incidence for salvage total laryngectomy . however , possibly due to small sample size , none of the differences we observed in fistula rates for the various closure techniques reached statistical significance . providing vascularized tissue from outside the previous radiation field as an onlay over the suture line is another strategy . some studies have shown that prophylactically placing a pectoralis myofascial flap over the suture line can reduce the incidence of salivary leak , while other studies have failed to find this difference [ 5 - 7 , 34 ] . similar conflicting results have been found with the utilization of free tissue transfer to augment the neopharyngeal suture line [ 15 , 35 ] . one study showed that placing a pectoralis flap in an inlay fashion reduced the salivary leak rate when compared to an onlay fashion , citing that skin holds sutures better than fascia or muscle [ 6 , 24 ] . all these studies suffer from low statistical power and lack a standardized method of defining salivary leaks . placing vascularized tissue in the wound bed may not reduce the overall incidence of salivary fistula , but it may mitigate the severity of the leak . perhaps prophylactic placement of vascularized tissue converts cases that would have resulted in a severe fistula into a mild fistula that can be managed conservatively . in our study , the use of a pedicled pectoralis muscle onlay flap was noted to reduce the overall and major fistula rates compared to single - layer closures ; however , the difference was not significant , possibly due to small sample size . nonetheless , it seems likely that certain subgroups of patients more prone to poor wound healing would benefit from vascularized tissue overlying the neopharyngeal suture line . further studies are needed to explore and define appropriate recommendations for closure techniques in salvage total laryngectomy . it is also interesting to speculate whether a multilayer closure for total laryngectomy in patients without a history of prior radiation treatment has an impact on the occurrence of salivary leaks . a future study could help determine whether there is any added benefit to performing more than a single - layer closure for previously untreated patients undergoing total laryngectomy . as this was a retrospective review , the utilization of various closure techniques may have been predicated on certain preoperative or intraoperative findings that raised the surgeon 's concern about fistula formation . our study had a small sample size and was thus underpowered to detect statistical differences that may truly exist . we observed a trend toward decreased rates of pharyngocutaneous fistula with a greater number of neopharyngeal closure layers or with onlay pectoralis flaps , findings that may have reached statistical significance with a larger sample size . these limitations are not unique to this study , and the majority of single - institution reports on this topic are afflicted with these same drawbacks .
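the power limitation discussed above can be made concrete with a back - of - the - envelope calculation . the 2x2 counts in the sketch below are back - calculated from the reported percentages ( 44.4% of 9 single - layer closures versus 12.82% of the remaining 39 closures ) and are an assumption for illustration only ; the paper 's own test statistic is not reproduced here , and a python fisher exact test stands in for the sas analysis .

# The 2x2 table below is back-calculated from the reported rates (44.4% of 9
# single-layer closures vs 12.82% of the remaining 39 closures) and is an
# assumption for illustration; it is not raw data from the study.
from scipy import stats

major_single_layer = 4        # major fistulas among 9 single-layer closures
major_other = 5               # major fistulas among 39 other closure techniques

table = [
    [major_single_layer, 9 - major_single_layer],
    [major_other, 39 - major_other],
]

odds_ratio, p_value = stats.fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, Fisher exact p = {p_value:.3f}")
# Even a several-fold difference in rates can fail to reach significance at this
# sample size, which is the power limitation discussed above.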
although there is a paucity of data from the literature to guide the surgeon as to how to prevent pharyngocutaneous fistulas after salvage total laryngectomy , we believe that the optimal management of these challenging cases should begin with identifying those patients at highest risk for developing fistulas . the literature suggests that this high - risk group includes patients who have been treated previously with chemoradiation or those with poor nutritional status . in these patients , the surgeon should carefully select the method of neopharyngeal reconstruction to decrease the risk of fistula . our study suggests that it may be beneficial to increase the number of pharyngeal closure layers or to use a pedicled pectoralis muscle onlay flap . in this study , salivary fistulas were a common complication after salvage total laryngectomy , occurring with varying severity in 47.9% of cases . reoperation due to salivary fistulas was performed for nearly 1 in 5 salvage total laryngectomies . in contrast to previous studies , we did not find any clinicopathologic variables other than age associated with fistulas , which may be related to the small sample size . the overall fistula and major fistula incidence decreased with an increasing number of layers of primary closure and with pectoralis muscle onlay flaps .
background . salivary fistula is a common complication after salvage total laryngectomy . previous studies have not considered the number of layers of pharyngeal closure and have not classified fistulas according to severity . our objective was to analyze our institutional experience with salvage total laryngectomy , categorize salivary fistulas based on severity , and study the effect of various pharyngeal closure techniques on fistula incidence . methods . retrospective analysis of 48 patients who underwent salvage total laryngectomy , comparing pharyngeal closure technique and use of a pectoralis major flap with regard to salivary fistula rate . fistulas were categorized into major and minor fistulas based on whether operative intervention was required . results . the major fistula rate was 18.8% ( 9/48 ) and the minor fistula rate was 29.2% ( 14/48 ) . the overall ( major plus minor ) fistula rate was 47.9% . the overall fistula and major fistula rates decreased with an increasing number of closure layers and with use of a pectoralis major flap ; however , these correlations did not reach statistical significance . other than age , there were no clinicopathologic variables associated with salivary fistulas . conclusion . for salvage total laryngectomies , increasing the number of closure layers or use of a pectoralis major flap may reduce the risk of salivary fistula .
1. Introduction 2. Methods 3. Results 4. Discussion 5. Conclusions
PMC3524329
the terms stunned myocardium and hibernating myocardium refer to abnormalities in the systolic and diastolic function of the heart following reperfusion.1 in both abnormalities , myocardial contractility and relaxation are deteriorated while the myocardium remains viable.2 in the hibernating myocardium , however , a programmed cell death ( apoptosis ) pattern has been described . myocardial ischemia results in the utilization of adenosine triphosphate ( atp ) stores secondary to the paralysis of aerobic metabolism and oxidative phosphorylation.3 stunning was defined by braunwald as post - ischemic cardiac dysfunction of viable myocardium.4 clinical myocardial stunning was first reported by bolli and kloner , who separately characterized its experimental models . stunning has been documented after percutaneous coronary intervention and thrombolytic therapy for coronary artery stenosis5 , 6 and also in the wake of cardiopulmonary bypass ( cpb).6 , 7 one of the technical challenges in off - pump coronary artery bypass grafting surgery ( opcab ) is myocardial ischemia caused by the proximal and distal snaring of the coronary artery , which gives rise to post - ischemic ventricular dysfunction.8 nonetheless , the occurrence of myocardial stunning in this setting has yet to be fully investigated . we herein report the case of a patient who developed temporary left ventricular dysfunction after an opcab procedure . a 53-year - old man presented with unstable angina due to severe stenosis of the left anterior descending coronary artery and the obtuse marginal branch , although the right coronary artery was normal . laboratory findings , including a complete blood count , erythrocyte sedimentation rate , and c - reactive protein , were normal . chest x - ray revealed no abnormal findings , and there was no valvular abnormality on preoperative echocardiography . the patient had no co - morbid disorders , but his left ventricular ejection fraction was reduced ( 40 - 45% ) . on examination , there was no respiratory distress , blood pressure was 130/80 mm hg , heart rate was 80 beats per minute , respiratory rate was 23 per minute , the neck veins were not distended , and there was no ankle edema . cardiovascular system examination showed regular first and second heart sounds with no gallop or murmur . for temporary coronary artery occlusion , 4/0 viline sutures and a bulldog clamp were used , and warm blood was employed to de - air the graft . temporary coronary artery occlusion was not prolonged , and the electrocardiogram , hemodynamic variables , and other objective data showed no signs of ischemia or contractile dysfunction . six hours later , however , he developed low cardiac output . at exploration , cardiac tamponade was excluded . there were no findings as regards pericarditis , and the patient 's postoperative erythrocyte sedimentation rate , c - reactive protein , and cardiac enzymes were normal . high - dose adrenalin and dobutamine were administered , and an intra - aortic balloon pump was used . intraoperative transesophageal echocardiography demonstrated a depression in the left ventricular function due to an akinetic lateral left ventricular wall in the region of the obtuse marginal branch . after hemodynamic stabilization , the patient left the intensive care unit without an intra - aortic balloon pump and inotropic support . on the fifth postoperative day , a coronary angiogram demonstrated patent grafts and correct anastomotic sites ( figures 1 and 2 ) .
on the seventh postoperative day , the akinetic lateral wall of the left ventricle changed to dyskinesia . finally , after hospital discharge on the thirtieth postoperative day , an echocardiogram showed a normal left ventricular function without regional wall motion abnormalities . cpb may contribute to the mortality and cost associated with cabg.9 recently , opcab has emerged as an alternative technique allowing coronary revascularization without the need for cpb . we have performed 300 consecutive opcab procedures in our hospital over the years . in the case reported in this paper , the anesthesia protocol comprised a combination of fentanyl and pancuronium bromide , supplemented with isoflurane and nitrous oxide , to permit early extubation . an arterial line and a central venous line were utilized , as is standard in this modality . conduits for cabg , including the left internal mammary artery and saphenous vein , were harvested in the standard fashion . deep pericardial traction sutures were placed to facilitate elevation of the apex of the heart and exposure of the lateral wall of the myocardium . the right pleural space was opened routinely to allow displacement of the heart to facilitate the exposure of the circumflex artery . revascularization of the left anterior descending artery with the left internal mammary artery was typically performed first , followed by revascularization of the left circumflex artery and the right coronary artery distribution . to assist further in providing good exposure of the target arteries , especially on the posterior and inferior walls , an optimal combination of pharmacological and mechanical methods was drawn upon to reduce coronary artery movement . stabilization of the target arteries was accomplished with an octopus stabilizer ( medtronic , ts 300 ) . intravenous heparin ( 1 mg / kg ) was given to maintain an activated clotting time ( act ) between 200 and 300 seconds . the target coronary artery was occluded proximal and distal to the proposed arteriotomy site by widely placing double - looped 4 - 0 viline sutures . proximal anastomosis to the aorta was made on a punch aortotomy after applying a side clamp to the ascending aorta . visualization of the anastomosis was enhanced with the use of a humidified carbon dioxide blower . before the application of the octopus stabilizer , amiodarone and esmolol were administered to the patient , and communication with the anesthesia team was maintained to monitor changes in the patient 's hemodynamics and to treat cardiac arrhythmias . after the distal anastomoses , proximal anastomoses were carried out on the ascending aorta with a partially occluding clamp . serial electrocardiograms and estimation of serum creatine phosphokinase and its mb fraction were done to detect perioperative ischemia . in our series , ventricular dysfunction developed postoperatively in 2 patients , and 2 patients developed severe left ventricular dysfunction due to the poor quality of the anastomotic site of the left internal mammary artery to the left anterior descending coronary artery graft . in the present case , intraoperative flowmetry demonstrated normal graft flow , cardiac enzymes were not significantly elevated , and postoperative angiography showed patent bypass grafts and good quality of the anastomotic sites ( figures 1 and 2 ) . because the bypass grafts were patent , the only ischemic event that could have caused left ventricular dysfunction was the temporary occlusion of the coronary arteries .
we think that the post - ischemic contractile dysfunction of the left ventricle in this patient has its pathophysiological background in myocardial stunning . the best approach in the postoperative period is to support the acutely failing heart with inotropic drugs and an intra - aortic balloon pump . alkholaifie reported that the ultimate objective must be to prevent ventricular dysfunction by ischemic preconditioning , which could be achieved by repetitive short - time occlusion and reperfusion of the coronary vessel.9 grubitzsch did not observe st - segment depression or elevation after coronary artery occlusion in his patients , which usually indicates the necessity of preconditioning.8 in contrast , mulkowski , in a clinical setting of opcab , showed that transient ischemia did not limit subsequent ischemic regional dysfunction.10 this controversy in the management of postoperative left ventricular dysfunction with a patent bypass graft led to the recommendation by rivetti that the use of an intra - coronary shunt must be considered if the duration of temporary coronary artery occlusion exceeds fifteen minutes . recently , opcab has emerged as an alternative technique allowing coronary revascularization without the use of cpb . because opcab is associated with temporary myocardial ischemia , we think that the most important issue in performing opcab is keeping the ischemia time short .
the term stunned myocardium refers to abnormalities in the myocardial function following reperfusion ; it is common in on - pump coronary artery bypass grafting ( cabg ) and is exceedingly rare in off - pump cabg . a 53-year - old man presented with unstable angina due to the severe stenosis of the left anterior descending coronary artery ( lad ) and the obtuse marginal . laboratory findings and chest x - ray revealed nothing abnormal . the intraoperative course was uneventful . the patient left the operating room without any inotropic support . six hours later , however , he developed low cardiac output . at exploration , cardiac tamponade was excluded and flowmetry showed that the graft had adequate function . cardiac enzymes were normal . high - dose adrenalin and dobutamine were administered and an intra - aortic balloon pump was used . after hemodynamic stabilization , the patient left the intensive care unit without an intra - aortic balloon pump and inotropic support . on the fifth postoperative day , coronary angiography showed patent grafts and correct anastomotic sites . on the seventh postoperative day , the akinetic lateral wall of the left ventricle changed to dyskinesia . finally , after hospital discharge on the thirtieth postoperative day , an echocardiogram showed normal left ventricular function without regional wall motion abnormalities .
Introduction Case Report Discussion Conclusion
PMC2948749
the basic mechanism of bonding to enamel and dentin is essentially an exchange process involving replacement of minerals from the hard dental tissues with resin monomers , which , upon setting , become micro - mechanically interlocked in the created porosities.1 contemporary adhesives can be classified on the basis of the underlying adhesion strategy into etch - and - rinse , self - etch , and resin - modified glass - ionomer adhesives.2 the success of the etch - and - rinse adhesives for bonding resin - based restorative materials to enamel and dentin is well supported by numerous studies and many years of clinical experience.3 the concept of self - etching adhesives is based on the use of polymerizable acidic monomers that simultaneously condition and prime both dentin and enamel.4 these adhesives are subdivided into three categories based upon their ph value : strong systems have a ph of 1 or below , intermediary strong systems have a ph of approximately 1.5 , and mild systems have a ph of 2 or more.5 the bond strength of self - etching adhesive systems to enamel is controversially discussed in the literature ; some studies have reported data comparable to that observed with etch - and - rinse systems,6-8 while other studies considered them less reliable when bonding to enamel.9-12 there is still some concern that manufacturers are sacrificing enamel bond strength in their struggle for simplified and strengthened bonding to the more complex substrate , dentin , despite the fact that enamel is the front - gate determinant of a restoration 's longevity and durability . many conditioning agents have been used for surface pretreatment of enamel and dentin , including phosphoric , maleic , nitric , citric , and ethylenediaminetetraacetic ( edta ) acids . these acids are used to remove the smear layer and to demineralize the underlying enamel and dentin.13 one might consider pre - etching the enamel with phosphoric acid prior to application of a self - etching adhesive system . the effect of such additional etching on enamel bond strength is also controversially discussed in the literature . its use might be beneficial with some self - etching adhesives , but this depends largely on the properties of the adhesive itself.14 it has been previously reported that etching of enamel surfaces with edta is not recommended because of its negligible , non - uniform effect.15,16 however , the effect of edta pretreatment on the bond strength of self - etching adhesives to enamel has not , to our knowledge , yet been addressed in the literature . the interaction of such a mild conditioning agent as a pretreatment agent with the different ph categories of self - etching adhesive systems is a matter of speculation . thus , this study was designed to determine the effect of two surface pretreatment agents on the enamel bond strength of self - etching adhesive systems with different ph values . two null hypotheses were tested : first , that pretreatment of the ground enamel surfaces with phosphoric acid or edta has no effect on the shear bond strength of self - etch adhesives to enamel ; and second , that there is no difference in bond strength between self - etching adhesive systems with different ph values when bonding to ground enamel . a total of 90 sound extracted maxillary human premolars were used for shear bond strength testing .
the teeth , excluding the buccal surface , were embedded in self - curing acrylic resin ( rapid repair , degudent gmbh , hanau , germany ) by using a specially fabricated cuboidal teflon mould ( 321.3 cm ) . the buccal enamel surface of the embedded premolars was ground on a water - cooled mechanical grinder ( tf250 , jeanwirtz , düsseldorf , germany ) by using 180-grit abrasive paper to obtain flat enamel surfaces with a clinically relevant smear layer . the acrylic resin blocks were placed in the mould . to standardize the bonding area , a piece of vinyl tape with a 3-mm diameter punctured hole was placed over the mid - coronal ground enamel surface . the teeth were assigned into three groups according to bonding procedure . in the first subgroup ( control ) , no pretreatment agent was applied . in the second subgroup , etching was performed using 37% phosphoric acid ( pa ) for 15 seconds ( total etch ; ivoclar vivadent ag , schaan , liechtenstein ) . in the third subgroup , surfaces were pretreated with 18.85% edta for 60 seconds ( edta odahcam ; dentsply , latin america , rio de janeiro , brazil ) . adper prompt l - pop ( aplp ) ( 3 m espe ag dental products , seefeld , germany ) , adhese ( se ) ( ivoclar vivadent ag , schaan , liechtenstein ) , or frog ( fg ) ( sdi limited , bayswater , victoria , australia ) self - etch adhesive systems ( n=10 ) were then applied to the demarcated bonding area following manufacturers instructions ( table 1 ) . all adhesives were cured using a bluephase c5 ( ivoclar vivadent ag , schaan , liechtenstein ) light emitting diode curing unit for 10 seconds at a light intensity of 500 mw / cm2 . the light intensity was periodically checked with the light meter integrated in the handpiece holder of the curing unit . the 2-mm - thick roof composed of two equal halves with a circular 3-mm diameter hole was used for the packing of the tetric evoceram ( shade a3 ) ( ivoclar vivadent ag , schaan , liechtenstein ) resin composite . the composite was then light cured for 20 seconds using the same light curing unit according to manufacturer instructions . after 24-hour storage in distilled water , the samples were subjected to compression testing using a mono - bevelled , chisel - shaped metallic rod in a computerized universal testing machine ( model lrx - plus ; lloyd instruments ltd . , fareham , uk ) . the specimens were stressed in shear at a cross - head speed of 0.5 mm / min . the shear force at failure was recorded and converted to shear stress in mpa units using computer software ( nexygen-4.1 ; lloyd instruments ltd . , fareham , uk ) . the fracture sites of the debonded surfaces were examined using a binocular stereomicroscope ( smz-10 , nikon , melville , ny , usa ) at 15x magnification . representative samples were chosen for examination under scanning electron microscopy ( sem ) ( xl 30 , philips , eindhoven , netherlands ) . samples were mounted on sem stubs and sputter - coated ( ladd sputter coater , ladd research industries , williston , vermont , usa ) with a thin layer of gold under vacuum . examination was done at 30 kv of accelerating voltage at different magnifications , and characteristic photomicrographs were obtained at 1000x magnification . two - way analysis of variance ( anova ) was used to test the significance of the effect of both studied variables ( adhesive system and surface pretreatment agent ) on the mean shear bond strength . post - hoc tukey 's test was used for pair - wise comparison between the means when the anova test was significant .
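the conversion from failure load to shear bond strength follows directly from the 3-mm - diameter bonding area defined by the punched vinyl tape : stress ( mpa ) = force ( n ) / area ( mm2 ) . the sketch below is a minimal illustration of that conversion ; the example loads are hypothetical , and in the study itself this step was performed by the nexygen software .

# Minimal sketch of the force-to-stress conversion for shear bond strength.
# Assumes the load cell reports the failure load in newtons; the example loads
# are hypothetical, not values from the study.
import math

BOND_DIAMETER_MM = 3.0                                   # punched hole in the vinyl tape
BOND_AREA_MM2 = math.pi * (BOND_DIAMETER_MM / 2) ** 2    # about 7.07 mm^2

def shear_bond_strength_mpa(failure_load_n: float) -> float:
    """Shear stress at failure in MPa (1 MPa = 1 N/mm^2)."""
    return failure_load_n / BOND_AREA_MM2

for load_n in (100.0, 150.0, 200.0):
    print(f"{load_n:.0f} N -> {shear_bond_strength_mpa(load_n):.1f} MPa")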
representative samples ( n=2 ) were prepared and treated with the corresponding surface pretreatment agents and adhesive systems for each subgroup as described earlier . however , the bonding component application and light curing steps were skipped for the two - step self - etching adhesive systems , while only the light curing step was skipped for the single - step self - etching adhesive system . all specimens were then treated with a 60-second acetone rinse under ultrasonic movement ( ultrasonic steri - cleaner uc-150 , sturdy industrial co. , taipei , taiwan ) to remove any crystals and other residues from the primers and were left to air dry . additional representative samples ( n=2 ) were prepared using the same surface pretreatment agents , adhesive systems , and restorative resin composite used for shear bond strength testing . after 24-hour storage in distilled water , the bonded specimens were cut perpendicular to the resin / enamel interface using a slow - speed diamond disc ( k6974 , komet , lemgo , germany ) under water lubrication . the cross - sectioned specimens were finished and polished under running water with optidisc ( kerr corporation , orange , california , usa ) resin composite finishing and polishing discs from coarse / medium to fine to extra - fine grits . polished interfaces were demineralized with 50% pa for 15 seconds , rinsed thoroughly with distilled water , and air dried . all specimens were gold sputtered under vacuum and examined using sem at 30 kv accelerating voltage . images of the enamel surface topography were viewed at 2000x magnification , while those for resin - enamel interface analysis were examined at 3500x magnification . table 2 shows the results of the statistical analysis using the two - way anova test to describe the effect of both studied variables ( adhesive system and surface pretreatment agent ) . both the adhesive system and the surface pretreatment agent had statistically significant effects on mean shear bond strength ( p<.001 and p=0.041 , respectively ) . the interaction between adhesive system and surface pretreatment agent also had a statistically significant effect on mean shear bond strength ( p=0.049 ) . the results of tukey 's test for the comparison between the different combinations of adhesive system and surface pretreatment are shown in table 3 . comparing the 3 adhesive systems when applied according to manufacturer instructions , the intermediary strong self - etch adhesive system ( se ) showed the statistically highest shear bond strength values , followed by the strong self - etching adhesive system ( aplp ) , while the mild self - etch adhesive system ( fg ) showed the statistically lowest shear bond strength values . with regard to the effect of the different surface pretreatments , none of the surface pretreatments statistically affected the mean shear bond strength values of the intermediary strong self - etching adhesive system ( se ) . pa pretreatment did not affect the bond strength values of the aplp system ; on the other hand , edta significantly reduced them . however , pa pretreatment significantly increased the mean shear bond strength values of the mild self - etching adhesive system , which was not affected by edta pretreatment .
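the analysis structure behind table 2 and table 3 ( a two - way anova for adhesive and pretreatment plus tukey post - hoc comparisons ) can be sketched as follows . the python / statsmodels code below runs on simulated bond - strength values ; only the 3 x 3 design with n = 10 specimens per subgroup mirrors the study , the group means are invented , and the output will not reproduce the published p values .

# Sketch of the two-way ANOVA (adhesive x pretreatment) with Tukey post-hoc tests.
# The bond-strength values are simulated; only the 3 x 3 design with n = 10 specimens
# per subgroup mirrors the study.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
rows = []
for adhesive in ("aplp", "se", "fg"):
    for pretreatment in ("none", "pa", "edta"):
        for _ in range(10):                                # n = 10 per subgroup
            rows.append({
                "adhesive": adhesive,
                "pretreatment": pretreatment,
                "sbs_mpa": rng.normal(20, 4),              # hypothetical shear bond strength
            })
df = pd.DataFrame(rows)

model = smf.ols("sbs_mpa ~ C(adhesive) * C(pretreatment)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))                     # main effects and interaction

# Tukey pairwise comparisons across the nine adhesive/pretreatment combinations
df["group"] = df["adhesive"] + "_" + df["pretreatment"]
print(pairwise_tukeyhsd(df["sbs_mpa"], df["group"]))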
each fractured surface was allocated to one of five types : type 1 , adhesive failure between the bonding resin and enamel ; type 2 , partial adhesive failure between the bonding resin and enamel and partial cohesive failure of the bonding resin ; type 3 , partial adhesive failure between the bonding resin and enamel and partial cohesive failure of the enamel ; type 4 , 100% cohesive failure of the bonding resin ; or type 5 , 100% cohesive failure of the enamel . figure 1 shows a bar chart of the percentage distribution of failure modes , while figure 2 represents sem photomicrographs of the different types of failure modes . fractographic analysis of the fractured sites revealed that adhesive failure ( type 1 ) was the predominating failure type . without additional surface pretreatment , only the intermediary strong self - etching adhesive system showed cohesive failure of the enamel ( type 5 ) . the highest percentages of type 1 failure were found in the strong and mild self - etching adhesive system groups , with 100% in the no pretreatment and edta pretreatment subgroups and 90% in the pa pretreatment subgroups . in the intermediary strong self - etching adhesive system , type 1 failure was seen 60% of the time with no pretreatment and with pa pretreatment and 50% of the time with edta pretreatment . type 2 failure ( 20% and 30% ) was seen only in the intermediary strong self - etching adhesive system with the pa and edta pretreatment subgroups , respectively . type 3 failure was seen only in the same two subgroups ( 10% in both ) . type 5 failure was seen in the intermediary strong self - etching adhesive system group with the different surface pretreatments and in the strong self - etching adhesive system and mild self - etching adhesive system groups with pa pretreatment . following cehreli and altay,15 the enamel surface topography photomicrographs were interpreted in terms of five etching pattern types : type i , preferential dissolution of the prism cores resulting in a honeycomb - like appearance ; type ii , preferential dissolution of the prism peripheries creating a cobblestone - like appearance ; type iii , a mixture of type i and type ii patterns ; type iv , pitted enamel surfaces as well as structures that look like unfinished puzzles , maps , or networks ; and type v , flat , smooth surfaces . sem photomicrographs of the surface topography and resin / enamel interface of the strong self - etching adhesive system are shown in figure 3 . with no surface pretreatment , topographical ultra - morphological characterization ( figure 3a ) showed predominantly homogeneous , deep preferential dissolution of the enamel prism peripheries with areas of prism core dissolution ( a type iii etching pattern ) . meanwhile , interfacial ultra - morphological characterization ( figure 3b ) depicted resin infiltration in the form of a very thin hybrid - like layer with sparse , broad , and shallow tag - like structures . a thin continuous adhesive layer was evident between the tags and the over - laying resin composite . with pa pretreatment , the enamel surface topography ( figure 3c ) showed a progressively more homogeneous and deeper type iii etching pattern . the interface ( figure 3d ) revealed resin infiltration in the form of a broader hybrid - like layer with numerous tag - like structures penetrating deeper into the etched enamel . with edta pretreatment , the topography ( figure 3e ) revealed a milder , homogeneous type i etching pattern .
the enamel prisms were hollowed out to deep pits or craters placed side by side , separated by thick interprismatic enamel persisting in the form of rings . the interface ( figure 3f ) closely resembled that of no surface pretreatment ( figure 3b ) . sem photomicrographs of the surface topography and resin / enamel interface of the intermediary strong self - etching adhesive system are shown in figure 4 . with no surface pretreatment , the interface ( figure 4b ) depicted resin infiltration in the form of a very thin hybrid - like layer with thick and shallow penetrating tag - like structures . with pa pretreatment , the topography ( figure 4c ) had a deeper mixed etching pattern ( type iii ) with areas that were less intensely etched . the interface ( figure 4d ) revealed resin infiltration in the form of a broader hybrid - like layer with numerous deeply penetrating tag - like structures . cracks may be attributed to the high vacuum employed for sem examination . with edta pretreatment , the topography ( figure 4e ) showed that certain areas were markedly etched while others were only very mildly involved , with mere delineation of the prismatic morphology . the interface ( figure 4f ) showed resin infiltration in the form of a very thin hybrid - like layer with very shallow , sparse tag - like structures . sem photomicrographs of the surface topography and resin / enamel interface of the mild self - etching adhesive system are shown in figure 5 . with no surface pretreatment , the topography ( figure 5a ) had a very mild , irregular etch pattern not related to prism morphology ( type iv etch pattern ) in the form of shallow craters , with areas remaining unetched . the interface ( figure 5b ) had resin infiltration in the form of a lamina - like , thin hybrid - like layer with no resin tag formation . a thin continuous adhesive layer was evident . with pa pretreatment , the topography ( figure 5c ) showed a homogeneous and regular type ii etch pattern with deep interprismatic dissolution . the interface ( figure 5d ) revealed resin infiltration in the form of a very thin hybrid - like layer . with edta pretreatment , the topography ( figure 5e ) revealed preferential dissolution of interprismatic enamel ( type ii ) with areas remaining unetched . the interface ( figure 5f ) showed a close resemblance to that of no surface pretreatment ( figure 5b ) . this study evaluated the effect of surface pretreatments ( pa or edta ) on the bond strength of three self - etching adhesive systems to ground enamel surfaces . the self - etching adhesive systems were selected based on their ph values ; one was chosen to represent each ph category . all of the selected adhesives had the same solvent ( water - based ) and contained 2-hydroxyethylmethacrylate . the adhesives were also devoid of functional monomers that are claimed to chemically interact with tooth substrates .
the buccal surface was ground parallel to the tooth long axis to flatten the enamel surface for shear testing and to standardize the orientation of enamel prisms.9 this process removes the outer hypermineralized and acid - resistant enamel and it is also consistent with clinical practice when the outer 0.5 mm of labial enamel is removed during bevelling or for veneering.17 results of the present study revealed that pa pretreatment of the enamel surface led to a significant increase in bond strength values with the mild self - etching adhesive only , while edta pretreatment did not enhance the bond strength values of any of the tested self - etching adhesive systems . both the strong and intermediary strong self - etching adhesive systems revealed definite etching patterns as depicted in figures 3a and 4a . pretreating enamel surfaces with pa led to further deepening of the same etching pattern created by both adhesive systems ( figures 3c and 4c ) . this deepening was consistent with the increase in length of the tag - like structures at the interface ( figures 3d and 4d ) . this observation is in agreement with that reported in shinchi et al,18 who showed that both the depth of etching and the length of the resin tags contribute little to bond strength in pa - etched enamel . in addition , brackett et al19 found that the depth of etching and the subsequent depth of resin permeation induced by self - etching adhesive systems do not correlate with the attained bond strength . this may be due to the fact that increasing the depth of the resin tag does not contribute substantially to the increase in cumulative surface area created by acid etching of cut enamel.20 a marked increase in surface area is achieved via the creation of regular microporosities among the apatite crystallites ; resins can infiltrate these microporosities and result in the formation of an enamel resin composite consisting of inter- and intra - crystallite resin encapsulation as well as resin infiltration into the interprismatic boundaries.21 moreover , it was recently reported that resin - to - enamel bonding with self - etching systems is based on a similar mechanism of inter- and intra - crystallite hybridization of the enamel surface rather than resin tag formation.22 this is in partial agreement with perdigao et al,23 who found that pa pretreatment did not enhance the sealing ability of the strong self - etching adhesive system , aplp , in non - thermocycled specimens . however , this result is in disagreement with lhrs et al,5 who reported a significant increase in enamel shear bond strength of the strong single - step system xeno iii and the intermediary strong two- and one - step systems se and futurabond nr after additional pa etching . in addition , erhardt et al13 reported significant increases in enamel shear bond strength of the single - step intermediary strong self - etching adhesive one up bond f after pa pretreatment . this supports the fact that the bonding performance of adhesives is still material - dependant . on the other hand , pa pretreatment enhanced the bond strength of the mild self - etching adhesive system in the current study , which is in agreement with other studies.13,2429 the data from these studies collectively suggest that the mild self - etching adhesive systems used were unable to provide an adequate level of demineralization to achieve optimum bonding to enamel . pretreatment with pa created adequate microporosities , which enhanced resin permeation . 
in the topographical sem photomicrographs ( figures 5a and 5c ) , pa pretreatment converted the etching pattern of the mild self - etching adhesive from an indefinite form ( type iv ) to the more definite , retentive form ( type ii ) . this observation was also consistent with the appearance of resin tags at the resin / enamel interface of pa - treated enamel ( figures 5b and 5d ) . meanwhile , erhardt et al13 explained this by the fact that pa might remove the smear layer , lowering its buffering capacity and leaving the enamel surface more receptive to self - etching primer diffusion . this result is , however , in contrast with weerasinghe et al,17 who reported no statistically significant difference in enamel bond strength with pa pretreatment in conjunction with clearfil se bond . however , clearfil se bond contains the functional monomer 10-methacryloxydecyl dihydrogen phosphate ( 10-mdp ) , which is thought to chemically interact with tooth tissues . the effect of edta pretreatment on enamel bond strength in conjunction with self - etching adhesive systems has not yet been addressed in the literature . however , studies that tested enamel etching with edta did not recommend its use because of its negligible , non - uniform effect , falling into the type iv etching pattern category on ground enamel16 or the type v etching pattern category on unground enamel.15 these phenomena may be due to the neutral ph of edta ( 6.4 - 7.4 ) . in addition , the concentration and application time might not have been sufficient to obtain a desirable effect on enamel.16 this result conforms to the sem findings of topography and interface ( figures 3e , 3f , 4e , 4f , 5e , and 5f ) , as it was evident that edta pretreatment had a negligible and unpronounced effect when compared to the action of each of the three adhesive systems applied according to the manufacturers ' instructions ( figures 3a , 3b , 4a , 4b , 5a , and 5b ) without further pretreatment , especially with the mild self - etching adhesive system ( figures 5e and 5f ) . when the three self - etch adhesives were applied according to manufacturer instructions , the intermediary strong system ( se ) showed the best performance , followed by the strong self - etching adhesive system ( aplp ) . this result is in agreement with de munck et al,3 who showed that the strong self - etching aplp adhesive system scored the lowest in microtensile bond strength of all of the experimental and control adhesives , including the intermediary strong systems se and optibond solo plus se . the authors speculated that etching aggressiveness does not entirely correlate with bonding effectiveness , as the individual features of the adhesive resin itself play a role .
variation in adhesive viscosity , surface tension , chemical interaction of acidic monomers with enamel , water concentration , and cohesive strength of the adhesive are all examples of such features.30 although water is a major component of all self - etching adhesives that allows the ionization of the acidic monomers to perform a demineralizing reaction , strong self - etching adhesives have high solvent contents to promote the complete ionization of the acidic monomer.31 the high water content in aplp ( 80% ) could be difficult to remove by air blowing.30,32 this in turn could decrease the polymerization efficacy and degree of conversion , thus altering the mechanical properties of the adhesive.33 in addition , excess water may also dilute the primer and reduce its effectiveness.34 it has also been speculated that the high acidity of unpolymerized monomers remaining at the oxygen inhibited layer after light curing may attack the polymerization initiation system of the resin composite , resulting in lower bond strength.32 aplp and fg contain pa ester as the acidic polymerizable monomer unlike se , which contains phosphonic acid acrylates . the latter is reported to have improved hydrolytic stability and reactivity in free radical polymerization . moreover , se also contains the hydrolytically stable cross - linking monomer bis - acrylamide.4,10 the strong system aplp is a single - step system ( all - in - one ) while the intermediary strong system se is a two - step system . van landuyt et al24 reported that the amounts of ingredients applied on the tooth surface differ considerably between one- and two - step adhesives . two - step adhesives consist of pure priming solution containing only functional etching monomers dissolved in organic solvent and water , and a solvent - free bonding containing hydrophobic cross - linking monomers that allow for a thicker and more hydrolytically stable adhesive layer . this layer can probably act as a shock absorber between tooth tissues and composites . on the other hand , one - step adhesives are complex mixtures of both hydrophobic and hydrophilic ingredients.9 on the other hand , this result is in contrast with findings of goracci et al,10 atash and van den abbeele,32 and perdigo et al.35 the first two studies reported no significant difference in bond strength between aplp compared to se . this contradiction may be attributed to differences in testing methodologies and the substrate examined . in the current study , the mild self - etching adhesive system ( fg ) showed the lowest bond strength compared to the strong and intermediary strong adhesive systems . this could be partially attributed to the relatively higher numbers of retentive etching patterns created by the strong and intermediary strong adhesive systems ( figures 3a and 4a ) compared to the indefinite non - retentive pattern created by the mild self - etching adhesive system ( figure 5a ) . the ph of the mild self - etching adhesives might be optimal for dentin but may not be sufficiently aggressive for enamel.19 the intermediary strong , self - etching adhesive system ( adhese ) might have higher potential for bonding to enamel than the strong and mild , self - etching adhesive systems ( adper prompt l - pop and fg ) . phosphoric acid pre - treatment could be beneficial for bonding to enamel using mild self - etching adhesive systems . edta pre - treatment is not a viable alternative for enamel bonding to self - etching adhesive systems . 
the uniformity rather than the depth of the etching pattern affected the bonding of self - etching adhesives to enamel .
objectives : this in vitro study determined the effect of enamel pretreatment with phosphoric acid and ethylenediaminetetraacetic acid ( edta ) on the bond strength of strong , intermediary strong , and mild self - etching adhesive systems . methods : ninety sound human premolars were used . resin composite cylinders were bonded to flat ground enamel surfaces using three self - etching adhesive systems : strong adper prompt l - pop ( ph = 0.9 - 1.0 ) , intermediary strong adhese ( ph = 1.6 - 1.7 ) , and mild frog ( ph = 2 ) . adhesive systems were applied either according to manufacturer instructions ( control ) or after pretreatment with either phosphoric acid or edta ( n = 10 ) . after 24 hours , shear bond strength was tested using a universal testing machine at a cross - head speed of 0.5 mm / minute . ultra - morphological characterization of the surface topography and resin / enamel interfaces as well as representative fractured enamel specimens were examined using scanning electron microscopy ( sem ) . results : neither surface pretreatment statistically increased the mean shear bond strength values of either the strong or the intermediary strong self - etching adhesive systems . however , phosphoric acid pretreatment significantly increased the mean shear bond strength values of the mild self - etching adhesive system . sem examination of enamel surface topography showed that phosphoric acid pretreatment deepened the same etching pattern of the strong and intermediary strong adhesive systems but converted the irregular etching pattern of the mild self - etching adhesive system to a regular etching pattern . sem examination of the resin / enamel interface revealed that deepening of the etching pattern was consistent with an increase in the length of resin tags . edta pretreatment had a negligible effect on ultra - morphological features . conclusions : use of phosphoric acid pretreatment can be beneficial with mild self - etching adhesive systems for bonding to enamel .
INTRODUCTION MATERIALS AND METHODS Preparation of specimens Bonding procedures Shear bond strength testing Fractographic analysis Statistical analysis SEM examination Examination of the enamel surface topography Examination of the resin/enamel interface RESULTS Results of shear bond strength test Results of failure mode analysis Results of SEM examination of enamel surface topography and interface DISCUSSION CONCLUSIONS
PMC4168800
although it is textbook knowledge that the functions of biomacromolecules are strongly coupled to their conformational motions and fluctuations , computer simulation of such motions has been a challenge for decades . typically , distinct algorithms are employed to estimate equilibrium quantities ( e.g. , refs ( 3 ) and ( 4 ) ) and dynamical properties ( e.g. , refs ( 5 - 10 ) ) . in principle , a single long dynamics trajectory would be sufficient to determine both equilibrium and dynamical properties , but such simulations remain impractical for most systems of interest . aside from straightforward simulations , more technical approaches that can yield both equilibrium and dynamical information have also been pursued , sometimes under minor assumptions . a number of these approaches employ markov state models ( msms ) as part of their overall computational strategy . on the basis of replica exchange molecular dynamics ( remd ) , it is possible to extract kinetic information from continuous trajectory segments between exchanges and thereby construct an msm . the adaptive seeding method ( asm ) similarly builds an msm based on trajectories seeded from states discovered via remd or another of the so - called generalized ensemble ( ge ) algorithms . msms have also been used in combination with short , off - equilibrium simulations to construct the equilibrium ensemble of folding pathways of a protein . another general strategy is to employ a series of nonintersecting interfaces that interpolate between states of interest selected in advance . milestoning generates and analyzes transitions between interfaces assuming prior history does not affect the distribution of trajectories . transition interface sampling ( tis ) and its variants also analyze such transitions and can yield free energy barriers in addition to rates while accounting for some history information . forward flux sampling ( ffs ) again samples interface transitions : it accounts for history information and can yield rates and equilibrium information . the weighted ensemble ( we ) simulation strategy ( see figure 1 ) , which has a rigorous basis as a path - sampling method , has also been suggested as an approach for computation of both equilibrium and nonequilibrium properties . although we was originally developed as a tool for characterizing nonequilibrium dynamical pathways and rates ( e.g. , refs ( 5 ) , ( 25 - 28 ) ) , the strategy was extended to steady - state conditions including equilibrium . the simultaneous computation of equilibrium and kinetic properties using we was demonstrated with configuration space separated into two states by a dividing surface and later for arbitrary states defined in advance of a simulation . in contrast to many other advanced sampling strategies , we generates an ensemble of continuous trajectories , all at the physical condition ( e.g. , temperature ) of interest . ( a ) ensemble of trajectories with arrow tips indicating the instantaneous configuration and tails showing recent history in the space of two schematic coordinates q1 and q2 . states a and b , shown in gray , are two arbitrary regions of phase space . ( b ) dissection into two subsets based on whether a trajectory was most recently in state a ( black solid arrows , the α steady state ) or state b ( red dashed , the β steady state ) . ( c ) statistically equivalent ensemble of weighted trajectories , with arrow thickness suggesting weight . configuration space has been divided into cells ( bins ) , with each cell containing an equal number of trajectories .
here , we further develop the capability of we simulation to calculate equilibrium and nonequilibrium quantities simultaneously in several ways that may be important for future studies of increasingly complex systems . ( i ) the approach described below permits the calculation of rates between arbitrary states , which can be defined after a simulation has been completed . in a complex system , the most important physical states , including intermediates , generally will not be obvious prior to simulation . further , the present approach opens up the possibility to use rate calculations to aid in the state - definition process . ( ii ) the non - markovian analysis described here enables unbiased rate calculations in the typical case where bins used by we simulation do not exhibit markovian behavior . the analysis is general and can be applied outside the we context , including the analysis of ordinary long trajectories . ( iii ) the non - markovian analysis can improve the efficiency of we simulations by yielding accurate estimates of observables from shorter simulations . the analysis is based on a previously suggested decomposition of the equilibrium ensemble into two nonequilibrium steady states . we is easily parallelizable because it employs multiple trajectories and was recently used with 3500 cores . because there is no need to catch trajectories at precise transition interfaces , we algorithms lend themselves to a scripting - like implementation which has been employed to study a wide range of stochastic systems via regular molecular dynamics , monte carlo , the string strategy , and gillespie - algorithm dynamics of chemical kinetic networks . we simulation uses multiple simultaneous trajectories , with weights that sum to one , that are occasionally coupled by replication or combination events every τ units of time . the coupling events typically are governed by a static partition of configuration space into bins ( figure 1c ) , although dynamical / adaptive bins may be used . in the case of static bins , when one or more trajectories enters an unoccupied bin , those trajectories are replicated so that their count conforms to a ( typically ) preset value , m . replicated daughter trajectories share the parent trajectory 's weight equally . if more than m trajectories are found to occupy a bin , trajectories are combined statistically in a pairwise fashion until m remain , with weight from pruned trajectories assigned to others in the same bin . these procedures are carried out in such a way that dynamics remain statistically unbiased . this study does not adjust weights according to previously developed reweighting procedures during the simulation . rather , the we simulations described here are long enough to permit relaxation to the equilibrium state . once the equilibrium state is reached in a we simulation , meaning that there is a detailed balance of probability flow between any two states , equilibrium observables such as state populations or a potential of mean force can be calculated simply by summing trajectory weights in the corresponding regions of phase space .
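to make the splitting and merging procedure just described concrete , the following is a minimal sketch ( in python ) of one within - bin resampling pass under the static - bin scheme ; the target count m , the dictionary - based trajectory representation , and the specific choices of splitting the highest - weight walker and merging the two lowest - weight walkers are illustrative assumptions rather than the authors ' implementation .

```python
import random

def resample_bin(trajs, m, rng=random):
    """One WE split/merge pass for the trajectories occupying a single bin.

    `trajs` is a list of dicts {'x': configuration, 'w': weight, 'label': 'alpha' or 'beta'};
    weights across the whole ensemble sum to one.  Splitting copies a trajectory and divides
    its weight equally between parent and daughter; merging keeps one of two trajectories with
    probability proportional to weight and assigns it their combined weight, so the dynamics
    remain statistically unbiased and total weight is conserved.
    """
    trajs = [dict(t) for t in trajs]              # work on copies
    while len(trajs) < m:                         # split: replicate the highest-weight walker
        parent = max(trajs, key=lambda t: t['w'])
        parent['w'] /= 2.0
        trajs.append(dict(parent))                # daughter inherits half of the weight
    while len(trajs) > m:                         # merge: combine the two lowest-weight walkers
        trajs.sort(key=lambda t: t['w'])
        a, b = trajs[0], trajs[1]
        keep = dict(a if rng.random() < a['w'] / (a['w'] + b['w']) else b)
        keep['w'] = a['w'] + b['w']
        trajs = [keep] + trajs[2:]
    return trajs
```

in a full we iteration , a pass like this would be applied to every occupied bin after each τ of dynamics , leaving the total probability carried by the ensemble unchanged .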
estimation of observables . to calculate rates , the equilibrium set of trajectories ( figure 1a ) is decomposed into two steady states as shown in figure 1b : the α steady state consisting of trajectories more recently in a than b , and the β steady state with those most recently in b ; these were denoted ab and ba steady states , respectively , in ref ( 31 ) . trajectories are labeled according to the last state visited , i.e. , classified as α or β , during a we simulation or in a postsimulation analysis ( post - analysis ) . the direct estimate of the rate $k_{AB}$ is computed from the probability arriving at the final state via eq 1 , $k_{AB} = 1/\mathrm{MFPT}(A \to B) = \mathrm{flux}(A \to B \mid \alpha)/p(\alpha)$ , where mfpt is the mean first - passage time , $\mathrm{flux}(A \to B \mid \alpha)$ is the probability per unit time arriving at state b in the α steady state , and $p(\alpha)$ is the total probability in the α steady state . by construction , $p(\alpha) + p(\beta) = 1$ . normalizing by $p(\alpha)$ effectively excludes the reverse β steady state , and the rate calculation only sees the unidirectional α steady state as in ref ( 23 ) . an expression analogous to eq 1 applies for $k_{BA}$ . also note that the effective first order rate constant , defined by $\mathrm{flux}(A \to B \mid \alpha)/p_A^{\mathrm{eq}}$ , can be determined from equilibrium we simulation because $p_A^{\mathrm{eq}}$ can be directly computed by summing weights in a . we note that analogous direct calculation of observables can be performed from an equilibrium ensemble of unweighted ( i.e. , brute force ) trajectories by assigning equal weights to each .
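as a concrete illustration of eqs 1 and 2 , a minimal sketch of the direct ( weight - based ) estimator is given below ; the array layout , the single - iteration snapshot , and the convention that labels are taken before relabeling on arrival are simplifying assumptions ( in practice the flux is averaged over many iterations ) .

```python
import numpy as np

def direct_rate_AB(weights, labels, entered_B, tau):
    """Direct (weight-based) estimate of eq 1 from one WE iteration.

    weights   : (n_traj,) trajectory weights, summing to one
    labels    : (n_traj,) 'alpha' if the trajectory was more recently in A than in B
                (the label carried into the iteration, before relabeling on arrival in B)
    entered_B : (n_traj,) True if the trajectory arrived in state B during the iteration
    tau       : duration of one WE iteration
    """
    w = np.asarray(weights, dtype=float)
    is_alpha = np.asarray(labels) == 'alpha'
    arrived = np.asarray(entered_B, dtype=bool)

    p_alpha = w[is_alpha].sum()                     # p(alpha) = sum_i p_i^alpha   (eq 2)
    flux_AB = w[is_alpha & arrived].sum() / tau     # flux(A->B | alpha), probability per unit time
    k_AB = flux_AB / p_alpha                        # eq 1
    return k_AB, 1.0 / k_AB                         # rate constant and MFPT(A->B)

# usage with placeholder numbers: three trajectories, one alpha-labeled arrival in B
k, mfpt = direct_rate_AB([0.5, 0.3, 0.2], ['alpha', 'alpha', 'beta'], [False, True, False], tau=5.0)
```

the same routine applied with equal weights reproduces the brute - force estimate mentioned above .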
beyond the direct estimates of observables based on trajectory weights , we also generalize previous matrix formulations for nonequilibrium steady states into an equilibrium formulation that explicitly accounts for the embedded steady states ( as in figure 1b , c ) . these non - markovian matrix estimates are tested below and may prove important for future we studies using shorter simulations , as described in the discussion . our matrix approach explicitly uses the decomposition of the equilibrium population into α and β components for each bin i , eq 2 , $p_i = p_i^{\alpha} + p_i^{\beta}$ , which implies $p(\alpha) = \sum_i p_i^{\alpha}$ and $p(\beta) = \sum_i p_i^{\beta}$ . we call this a labeled analysis . thus , with n bins , a set of 2n probabilities is required rather than n . similarly , a 2n × 2n rate matrix is required , with elements $k_{ij}^{uv}$ , where u and v can be either the α or β subsets of trajectories . each of the previously considered $k_{ij}$ rate elements is thus decomposed into four history - dependent elements which account for whether the particular trajectory was last in state a or b and whether the trajectory transitions between the α and β subsets . the analysis assumes states consist strictly of one or more bins , but this is always possible in a post - analysis without a loss of generality . in other words , given the flexibility we have when we define the bins , it is not a real limitation that the states have to be strictly constituted by bins . constructing a labeled rate matrix for unbiased calculations . for purposes of illustration , here state a consists solely of bin 1 and state b solely of bin 3 . left : a traditional rate matrix with history - blind elements ; the rate $k_{ij}$ gives the conditional probability for transitioning from bin i to bin j in a fixed time increment , regardless of previous history . right : the labeled rate matrix , whose element $k_{ij}^{uv}$ is the conditional probability for the i to j transition for trajectories initially in the u subensemble which transition to the v subensemble , where u and v are either α or β . the labeled rate matrix correctly assigns the α and β subpopulations of each bin , whereas the traditional matrix may not . we wish to emphasize that this analysis is non - markovian because we are explicitly including history information ( i.e. , the α and β labels ) in the new 2n × 2n rate matrix . once the matrix is built , the steady state observables are obtained using the same mathematical formalism that would be used in a regular markov model . however , the matrix should be seen as a tool of linear algebra and not as embodying any physical assumptions . for example , consider a bin in the intermediate region ( neither a nor b ) , such as bin 2 in figure 2 . in this region , an α trajectory can not change into a β trajectory , nor vice versa ; hence rates for these processes are zero . similarly , an α trajectory in the intermediate region which enters a bin in b must turn into a β trajectory , so the rate will always be zero to the α components of bins in b . the non - markovian results below stem from the division into α and β steady states , but several steps are required . first , rates among bins are estimated in a post - analysis as eq 3 , $k_{ij}^{uv} = \langle w_{ij}^{uv} \rangle_2 / \langle w_i^{u} \rangle$ , where $w_{ij}^{uv}$ is the probability flux , for a given iteration , from bin i to j of trajectories only with initial and final labels u and v , respectively , while $w_i^{u}$ is the population labeled as u which is initially in i . the subscript 2 in the numerator indicates that the rate $k_{ij}^{uv}$ is estimated to be nonzero only when more than one transition is observed ; once the second event has occurred , all events , including the first , are included to avoid bias . the requirement for two transitions was found to greatly enhance numerical stability in estimating fluxes and rates between macroscopic states : rates estimated from single events exhibit large fluctuations . notice that eq 3 is a ratio of averages and differs from the average ratio $\langle w_{ij}^{uv} / w_i^{u} \rangle$ , which might seem equally or more natural . however , our data show that eq 3 yields unbiased estimates , while the average ratio may not ( data not shown ) . the difference between the two estimators indicates that transitions are correlated with trajectory weights . perhaps more importantly , the average ratio places less importance on high - weight transitions due to the instantaneous normalization ; that is , low - weight transitions count as heavily as high - weight events , which evidently biases the rate estimate . in the ratio of averages , high - weight events appropriately count more . to obtain macroscopic rates between states consisting of arbitrary sets of bins ( noting that arbitrary bins can be employed in a post - analysis ) , we calculate labeled fluxes for use in eq 1 via eq 4 , $\mathrm{flux}(A \to B \mid \alpha) = \sum_{i \notin B} \sum_{j \in B} p_i^{\alpha} \, k_{ij}^{\alpha\beta}$ , and analogously for $\mathrm{flux}(B \to A \mid \beta)$ . the labeled bin populations $p_i^{\alpha}$ and $p_i^{\beta}$ are obtained from the steady - state solution of the labeled rate matrix $\hat{K} = \{ k_{ij}^{uv} \}$ . a summary of the labeled or non - markovian matrix procedure for estimating rates between arbitrary states is as follows . first , we obtain the labeled rate matrix $\hat{K} = \{ k_{ij}^{uv} \}$ using eq 3 to average interbin transitions . second , we solve the matrix problem $\hat{K} \mathbf{p}^{\mathrm{ss}} = \mathbf{p}^{\mathrm{ss}}$ , yielding the steady state solution $\mathbf{p}^{\mathrm{ss}}$ . then , the steady state solution $\mathbf{p}^{\mathrm{ss}}$ along with the labeled rate matrix elements is used to calculate the flux entering state b and the flux entering a ( eq 4 ) . finally , the mfpt values are obtained from eq 1 . in the graphs below , each non - markovian estimate shown is from the matrix solution using the $k_{ij}^{uv}$ rates calculated based on all data obtained until the given iteration of the simulation .
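the procedure just summarized can be sketched compactly with numpy ; the data layout ( per - label flux and population sums accumulated over iterations , including self - transitions so that each row of the matrix sums to one ) , the omission of the two - transition threshold of eq 3 , and the use of an eigenvector solver for the steady state are simplifying assumptions of this illustration .

```python
import numpy as np

def labeled_mfpts(flux_sum, pop_sum, bins_in_A, bins_in_B):
    """Non-Markovian (labeled) matrix estimates of MFPT(A->B) and MFPT(B->A).

    flux_sum[u, v, i, j] : probability flux from bin i with label u to bin j with label v,
                           summed over WE iterations (u, v = 0 for alpha, 1 for beta);
                           should include self-transitions (i == j, u == v)
    pop_sum[u, i]        : labeled bin population summed over the same iterations
    bins_in_A, bins_in_B : lists of bin indices constituting states A and B
    Returned MFPTs are in units of the WE iteration time tau.
    """
    n = pop_sum.shape[1]
    K = np.zeros((2 * n, 2 * n))                 # labeled 2N x 2N rate matrix (eq 3 as ratio of sums)
    for u in range(2):
        for v in range(2):
            denom = pop_sum[u][:, None]
            K[u * n:(u + 1) * n, v * n:(v + 1) * n] = np.divide(
                flux_sum[u, v], denom, out=np.zeros((n, n)), where=denom > 0)

    # steady state of the labeled matrix: left eigenvector of K with eigenvalue 1
    evals, evecs = np.linalg.eig(K.T)
    p_ss = np.real(evecs[:, np.argmin(np.abs(evals - 1.0))])
    p_ss = p_ss / p_ss.sum()
    p_alpha, p_beta = p_ss[:n], p_ss[n:]

    # eq 4: flux entering B from alpha-labeled bins, and entering A from beta-labeled bins
    flux_AB = sum(p_alpha[i] * K[i, n + j] for j in bins_in_B for i in range(n) if i not in bins_in_B)
    flux_BA = sum(p_beta[i] * K[n + i, j] for j in bins_in_A for i in range(n) if i not in bins_in_A)

    # eq 1 rearranged: MFPT(A->B) = p(alpha) / flux(A->B | alpha)
    return p_alpha.sum() / flux_AB, p_beta.sum() / flux_BA
```

for the ordinary markovian comparison described below , the same steady - state step would instead be applied to the unlabeled n × n matrix of eq 5 .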
the non - markovian matrix formulation exhibits a number of desirable properties : ( i ) unlike with unlabeled ( i.e. , implicitly markovian ) analysis , kinetic properties will be unbiased , as shown below . ( ii ) solution of both the α and β steady states is performed simultaneously via a standard markov - state - like analysis of the $k_{ij}^{uv}$ rate matrix . by contrast , if the α and β steady states are independently solved within a markov formalism , there can be substantial ambiguity in how to assign feedback from the target to the initial state when the initial state consists of more than one bin . ( iii ) the labeled formulation guarantees , by construction , the flux balance intrinsic to equilibrium , namely , $\mathrm{flux}(A \to B \mid \alpha) = \mathrm{flux}(B \to A \mid \beta)$ . ( iv ) the analysis can be performed using arbitrary bins ( and states defined as sets of these bins ) . it is not necessary to employ the bins originally used to run the we simulation because a post - analysis can calculate rates among any regions of configuration space . ( v ) the analysis is equally applicable to ordinary brute - force simulations . for reference , we also perform a traditional markov analysis of the trajectories , which will prove to yield biased rate estimates because most divisions of configuration space ( e.g. , we bins ) are not true markovian states . the markov analysis proceeds without labeling the trajectories . elements of the rate matrix are estimated as eq 5 , $k_{ij} = \langle w_{ij} \rangle_2 / \langle w_i \rangle$ , where the subscript 2 again means that we only estimate a rate as nonzero once at least two transitions from i to j have occurred . bin populations are then computed by solving for the steady - state solution of the markov matrix with elements $k_{ij}$ . the computation of an mfpt requires the use of source ( a ) and sink ( b ) states . hence , we determine markovian macroscopic rates by substituting the markovian $k_{ij}$ for all nonzero elements of the $k_{ij}^{uv}$ . we emphasize that this is merely an accounting trick to establish sources and sinks and simultaneously measure both a - to - b and b - to - a fluxes / rates . we perform a smoothing operation on the macroscopic markovian rates because otherwise the data are fairly noisy . the mfpt results shown for the markovian matrix analysis are running averages based on the last 50% of the estimates ( where each estimate is from the matrix solution using $k_{ij}$ estimates from all data obtained until the particular iteration ) . we confirmed numerically that such smoothing did not contribute bias to any of the mfpt estimates . weighted ensemble simulations were performed on two systems : the alanine tetrapeptide ( ala4 ) solvated implicitly and a pair of explicitly solvated methane molecules . all simulations were performed at 300 k with a stochastic ( langevin ) thermostat . friction constants of 5.0 and 1.0 ps^-1 were used for the ala4 and methane systems , respectively . the molecular dynamics time step used for all systems was Δt = 2 fs . an iteration is defined to be the simultaneous propagation of all trajectories in the ensemble for some amount of time , τ . in these studies , a value of τ = 2500 Δt is used for ala4 and τ = 250 Δt for the methane - methane system . for ala4 , simulations used the all - atom amber ff99sb force field with implicit gb / sa solvent and no cutoff for the evaluation of nonbonded interactions , run with the amber 11 software package . the hawkins , cramer , and truhlar pairwise generalized born model is used , with parameters described by tsui and case ( option igb = 1 in the amber 11 input file ) . the progress coordinates were selected and binned using a 10 × 10 partition of a 2d space . a dihedral distance $d = (1/N) \sum_i d_i$ with respect to a reference set of torsions is used in the first dimension , where N is the number of torsional angles considered and $d_i$ is the circular distance between the current value of the ith angle and our reference , i.e. , the smaller of the two arclengths along the circumference . this dimension was divided every 14° from 0° to 126° , with a final partition covering the space ( 126° , 180° ] .
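the first progress coordinate just described is straightforward to compute ; the short sketch below uses placeholder torsion values and a hypothetical reference ( the actual reference values are listed in the si ) .

```python
import numpy as np

def dihedral_distance(angles_deg, ref_deg):
    """d = (1/N) * sum_i d_i, with d_i the circular distance (in degrees, at most 180)
    between the i-th torsion angle and its reference value."""
    diff = np.abs(np.asarray(angles_deg, float) - np.asarray(ref_deg, float)) % 360.0
    return float(np.mean(np.minimum(diff, 360.0 - diff)))

# placeholder example: two torsions compared against a hypothetical reference
d = dihedral_distance([170.0, 30.0], [-60.0, -45.0])
bin_index = min(int(d // 14.0), 9)   # 14-degree bins from 0 to 126, final bin for d > 126
```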
in the second dimension , a regular rmsd , using only heavy atoms , is measured with respect to an α - helical structure . in this case , the space was divided every 0.4 Å from 0 to 3.6 Å and then a final partition covering the space [ 3.6 Å , ∞ ) . values and coordinates for the references used to compute the order parameters are given in the supporting information ( si ) . the methane molecules were simulated using the gromacs 4.5 software package with the united - atom gromos 45a3 force field and a dodecahedral periodic box of tip3p water molecules ( about 900 water molecules in a 34 Å × 34 Å × 24 Å box ) . van der waals interactions were switched off smoothly between 8 and 9 Å ; real - space electrostatic interactions were truncated at 10 Å . long range electrostatic interactions were calculated using particle mesh ewald ( pme ) summation . the single progress coordinate was the distance r between the two methane molecules , following ref ( 28 ) . the coordinate r ∈ [ 0 , ∞ ) was partitioned with a bin spacing of 1 Å from 0 to 16 Å and a last bin covering the space r ∈ [ 16 Å , ∞ ) . for the post analysis of methane , the coordinate r ∈ [ 0 , ∞ ) was partitioned so that the first bin is the space r ∈ [ 0 , 5 Å ) , then a bin spacing of 2 Å was used from 5 to 17 Å , while the last bin covers the space r ∈ [ 17 Å , ∞ ) . the results shown below include all data generated in all trajectories : no transient or relaxation period has been omitted . for ala4 , populations and mfpts are estimated using we and compared to independent measurements based on ordinary brute force ( bf ) simulation . rates are estimated in both directions between the two sets of states a1,b1 and a2,b2 shown in figure 3 ( see si to visualize representative structures ) . the second set is less populated and consequently expected to be more difficult to sample . figure 3 also shows the bin definitions used in the post - analysis , which were the same as those used during the we simulation . however , as we shall see in our second system , we can use any partition of the space for the post analysis . the ala4 free energy surface . the surface is projected onto two coordinates : $d = (1/N) \sum_i d_i$ from one reference structure ( see si ) and the rmsd with respect to an ideal α - helix . the surface was computed using 3.0 μs of ordinary brute force simulation . the set of states a1,b1 is highlighted in green , while the second set a2,b2 is highlighted in red . the grid shows bins that were used both for we simulation and for the post - analysis calculation of observables via the non - markovian matrix formulation . the data shown below are based on the same total simulation times in bf and we . the bf estimates and confidence intervals are based on a single long trajectory of 3.0 μs where thousands of transitions between states were observed . five independent we simulations were run , each employing a total of 3.0 μs accounting for all the trajectories . the use of independent we runs permits straightforward error analysis for comparison with bf . as described above , direct we measurements sum trajectory weights for population and flux calculations . figures 4 and 5 show direct estimates for both equilibrium and kinetic quantities for both sets of states . we estimates as a function of simulation time are compared to 95% confidence intervals for bf simulation . direct we estimates for populations and mean first passage times ( mfpts ) for ala4 states a1,b1 from figure 3 . five independent we runs are shown , each based on 3.0 μs of total simulation time . dashed lines indicate roughly a 95% confidence interval based on 3.0 μs of brute force simulation .
each nanosecond of molecular ( single - trajectory ) time corresponds to approximately 200 ns of we simulation including all trajectories in a single run . direct we estimates for populations and mean first passage times for ala4 states a2,b2 from figure 3 . five independent we runs are shown , each based on 3.0 μs of total simulation time . dashed lines indicate roughly a 95% confidence interval based on 3.0 μs of brute force simulation . each nanosecond of molecular time corresponds to approximately 200 ns of we simulation accounting for all trajectories in a single run . as with all observables , data from five independent we runs are shown ; the rightmost point of each curve is the estimate using all data from the run and thus is based on a total simulation time equal to that of bf ( 3.0 μs ) . the spread of the rightmost we data points therefore can be compared with the bf confidence interval to gauge statistical quality . the mean values of the direct estimates are in agreement with bf confidence intervals in all cases . in some cases , the spread of we estimates is significantly less than that for bf prior to the full extent of we simulation . each nanosecond of molecular time in figures 4 and 5 ( i.e. , single - trajectory time ) corresponds to approximately 200 ns of total simulation in a single we run accounting for all trajectories . hence , in some cases , considerably less we simulation is required for an estimate of the same statistical quality as resulted from the full bf simulation of 3.0 μs . we also show results of the non - markovian matrix analysis for select observables . figure 6 shows that the non - markovian analysis yields unbiased estimates of the same equilibrium and nonequilibrium properties calculated with direct estimates . ( results for other observables , like the population of a1 and the a1 → b1 mfpt , not shown , exhibit qualitatively similar agreement . ) the agreement contrasts with a purely markovian matrix formulation , which does not account for the labeling described above and can yield statistically biased estimates for kinetic quantities ( see methane results , below ) . unbiased matrix - based estimates are important when reweighting is used in we , as noted in the discussion . reweighting was not used in the present study , however . population of a2 and mean first passage time for ala4 from a2 to b2 , estimated by the non - markovian matrix analysis of we data . dashed lines indicate roughly a 95% confidence interval from brute force simulation , as in figures 4 and 5 . the states are defined in figure 3 . in the methane system , we simulation is used to measure first - passage times based on a range of state definitions . for a complex system , analyzing the sensitivity of the mfpt to state definitions could aid in the definition of states . the mfpt was estimated directly , as well as by both non - markovian and markovian matrix analysis . to assess statistical uncertainty , five independent we simulations were again performed ; the bins used for post - analysis differ from those used in the original we simulation , as a matter of convenience underscoring the flexibility of the approach . figure 7 shows passage times measured as a function of the boundary position for the unbound state . the boundary of the bound state a was held fixed at a separation of 5 Å while the definition of the unbound state was varied from 5 to 17 Å . the passage times were measured in increments of 2 Å and compared with bf results as shown in figure 7 . the bf confidence intervals are based on a single long trajectory of 0.4 μs , the same total simulation time used in each we simulation .
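for reference , the brute - force mfpts behind these confidence intervals can be extracted from a single long distance time series using the same labeled bookkeeping as eq 1 ; the sketch below assumes the state cutoffs used here ( a : r < 5 Å , b : r > 11 Å ) , a uniform spacing between saved frames , and at least one transition in each direction .

```python
import numpy as np

def brute_force_mfpts(r, dt_frame, r_A=5.0, r_B=11.0):
    """MFPT(A->B) and MFPT(B->A) from one long trajectory of methane-methane distances.

    Follows the alpha/beta decomposition of eq 1: MFPT(A->B) equals the total time spent
    labeled alpha (more recently in A than in B) divided by the number of A->B arrivals.
    r        : 1D array of distances (one value per saved frame, in Angstrom)
    dt_frame : time between saved frames
    """
    label = None                          # 'alpha' = last visited A, 'beta' = last visited B
    t_alpha = t_beta = 0.0
    n_AB = n_BA = 0
    for x in np.asarray(r, dtype=float):
        if x < r_A:                       # currently in state A
            if label == 'beta':
                n_BA += 1                 # a beta-labeled trajectory has just reached A
            label = 'alpha'
        elif x > r_B:                     # currently in state B
            if label == 'alpha':
                n_AB += 1                 # an alpha-labeled trajectory has just reached B
            label = 'beta'
        if label == 'alpha':
            t_alpha += dt_frame
        elif label == 'beta':
            t_beta += dt_frame
    return t_alpha / n_AB, t_beta / n_BA
```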
the mean first passage time for methane association ( b to a ) and dissociation ( a to b ) measured directly and from the non - markovian matrix analysis of we simulation , as a function of the boundary of the unbound state b . the inset displays the pmf along with the definitions of the unbound and bound states , indicated by b and a , respectively . dashed lines indicate roughly a 95% confidence interval based on 0.4 μs of brute force simulation . figure 7 shows that both direct and non - markovian matrix estimates are in agreement with bf confidence intervals . for fixed state definitions , figure 8 shows the evolution of state populations and mfpts , as was done for ala4 . we fix the movable boundary position in figure 7 ( inset ) , defining state b as all configurations with r > 11 Å . direct and non - markovian we estimates for populations and mean first passage times ( mfpts ) are plotted vs molecular time . five independent we runs are shown , each based on 0.4 μs of total simulation time . dashed lines indicate roughly a 95% confidence interval based on 0.4 μs of brute force simulation . each nanosecond of molecular time corresponds to approximately 80 ns of we simulation accounting for all trajectories in a single run . the bound state ( a ) is defined by distances less than 5 Å , and b is defined by distances greater than 11 Å . the performance of the non - markovian matrix estimates is particularly noteworthy in figure 8 . the matrix estimates converge faster than direct estimates to the exact results for the state populations . presumably , this is because the direct approach requires relaxation of the full probability distribution to equilibrium , whereas the matrix approach requires only relaxation of the distribution within each bin ( in order to obtain accurate interbin rates $k_{ij}^{uv}$ ) . in contrast to the unbiased mfpt estimates obtained by both direct and non - markovian analysis , the markov analysis can be significantly biased for the mfpt . figure 9 shows that applying the markovian analysis ( section 2.3 ) leads to mfpt estimates clearly outside the bf confidence interval . data in the si show that the use of a more sophisticated model , such as a maximum - likelihood estimator for reversible markov models , yields similar results and does not correct the bias . populations of a ( r < 5 Å ) and b ( r > 11 Å ) and mfpts for the methane system , estimated by the non - markovian matrix analysis and the markovian analysis without history information . dashed lines indicate roughly a 95% confidence interval from brute force simulation based on 0.4 μs of total simulation time . equilibrium properties , however , can be estimated without bias in a markovian analysis because history dependence is immaterial . figure 9 also illustrates correct ( equilibrium ) population estimates based on the markovian analysis . to our knowledge , this is the first weighted ensemble ( we ) study using the original huber and kim algorithm to simultaneously calculate both equilibrium and nonequilibrium quantities . the present study estimates observables ( populations and mfpts ) based on arbitrary states defined in a postsimulation analysis , permitting the examination of different state definitions and their effects on observables . two qualitatively different estimation schemes were examined , including a non - markovian rate - matrix formulation which shows promise for reducing transient initial - state bias ( a bias which is intrinsic to direct estimation of observables based on weights ) .
both schemes showed substantial efficiency gains for some observables even in the test systems , which appear to lack significant energy barriers in their configurational landscapes . nevertheless , as described below , the present data do point to further challenges likely to be exhibited by larger , more complex systems . one key feature of the we implementation studied here is the ability to investigate a range of state choices . as computer simulations tackle systems of growing complexity , it seems increasingly unlikely that states chosen prior to a study will prove physically or biochemically relevant . indeed , it is already the case that specialized algorithms are invoked to identify physical states , separated by the slowest time scales , from existing trajectories . with we simulation , as suggested by our methane data , one can adjust state boundaries to minimize the sensitivity of rates to those boundaries . a possible concern with postsimulation state construction is the need to store a potentially large set of coordinates to ensure sufficient flexibility in post analysis . as an illustration , storage of { x , y , z } coordinates for 1000 heavy atoms in a we run of 1000 iterations using 1000 trajectories would require on the order of 10 gb . the estimation of both equilibrium and kinetic properties from relatively short simulations is an important goal of current methods development , including for we . here , we have demonstrated as a proof of principle that we simulation can do this efficiently ( compared to brute force simulation ) , without bias , in parallel , and with flexibility in defining states . given the relatively fast time scales ( nanosecond scale ) characterizing the present systems , it is somewhat surprising that we is better than brute - force simulation for some of the observables and never worse . previous studies suggest that we has the potential for greater efficiency in more complex systems . once a configuration space is discretized ( e.g. , bins in we simulation ) , one expects in general that transitions among such discrete regions will not be markovian . to take the simplest example , in a 1d system , whether a trajectory enters a finite - width bin from the left or right will affect the probability to make a transition in a given direction . so generally , discretized systems are non - markovian , even when the underlying continuous dynamics are markovian . this study compared estimation of equilibrium and nonequilibrium observables using the original we algorithm and via post - analysis . as mentioned in the introduction , the occasional rescaling of weights to match an equilibrium or nonequilibrium steady - state condition was not used , to avoid any potential complications . our data clearly show that a standard markovian analysis of we simulation is inadequate ( figure 9 ) , since we bins typically are not markovian . additional information , namely history dependence as embodied in the α / β labeling scheme , is needed to obtain unbiased results . inclusion of history information in the matrix analysis means it is intrinsically non - markovian regardless of the linear algebra employed . future work will incorporate the rate estimation and non - markovian matrix schemes developed here , as well as possibly the simpler markovian scheme shown in section 2.3 . our data ( figure 8 ) suggest that these could be very successful in bringing a we simulation closer to a specified steady state .
but it is an open question whether reweighting simulations will prove superior to the type of post - analysis suggested here . importantly , data presented here indicate that some rate estimators could lead to biased estimates for populations , which , in turn , would bias a reweighted simulation . one practical future approach , suggested by the work of darve and co - workers , could be to define preliminary states in advance to aid sampling transitions in both directions and then to subject the data to the same post analysis performed here to examine additional state definitions besides the initial choices . the present study has not addressed some of the intrinsic limitations of the we approach , which are the related issues of correlations among trajectories ( due to the replication and merging events ) and sampling orthogonal coordinates not divided up by we bins . in the systems examined here , there was sufficient sampling in orthogonal dimensions to obtain excellent agreement with brute force results in all cases . however , significant future effort will be required to address correlations and orthogonal sampling , the latter being a problem common to methods which preselect coordinates such as multiple - window umbrella sampling and metadynamics . in this proof - of - principle study , the parallel weighted ensemble ( we ) approach has been applied to measure equilibrium and kinetic properties from a single simulation in small but nontrivial molecular systems . importantly , populations and rates could be measured for arbitrary states chosen after the simulation . for all tested observables , unbiased estimates were obtained , as validated by independent brute - force simulations . in a number of instances , we was significantly more efficient yielding estimates of a given statistical quality in less overall computing time compared to simple simulation , including all trajectories . in this sense , not only is we a parallel method but it can exhibit super - linear scaling ; e.g. , 100 cores can yield desired information more than 100 times faster than single - core simulation . we also developed a non - markovian matrix approach for analyzing we or brute - force trajectories , capable of yielding unbiased results , sometimes faster than direct estimates of observables from we . the non - markovian formulation also yields simultaneous estimates of equilibrium and nonequilibrium observables based on an arbitrary division of phase space , which is not possible in a standard markovian analysis . the approaches tested here will need to be further developed and tested in more complex systems .
equilibrium formally can be represented as an ensemble of uncoupled systems undergoing unbiased dynamics in which detailed balance is maintained . many nonequilibrium processes can be described by suitable subsets of the equilibrium ensemble . here , we employ the weighted ensemble ( we ) simulation protocol [ huber and kim , biophys . j . 1996 , 70 , 97 - 110 ] to generate equilibrium trajectory ensembles and extract nonequilibrium subsets for computing kinetic quantities . states do not need to be chosen in advance . the procedure formally allows estimation of kinetic rates between arbitrary states chosen after the simulation , along with their equilibrium populations . we also describe a related history - dependent matrix procedure for estimating equilibrium and nonequilibrium observables when phase space has been divided into arbitrary non - markovian regions , whether in we or ordinary simulation . in this proof - of - principle study , these methods are successfully applied and validated on two molecular systems : explicitly solvated methane association and the implicitly solvated ala4 peptide . we comment on challenges remaining in we calculations .
Introduction Theoretical Formulation Model Systems and Simulation Details Results Discussion Conclusions
PMC4606765
educational reform leader michael fullan5 observed that : when adults do think of students , they think of them as the potential beneficiaries of change they rarely think of students as participants in a process of school change and organizational life . fullan5 says that engaging the hearts and minds of students is the key to success in school but many schools see students only as sources of interesting and usable data . students soon tire of invitations that address matters they do not think are important , use language they find restrictive , alienating , or patronizing and that rarely result in action or dialog that affects their lives . despite demonstrated valuable and realistic ideas , student voice is not widespread and students are vastly underutilized resources . student engagement strategies must reach all students , those doing okay but bored by the irrelevance of school , and those who are disadvantaged and find schools increasingly alienating as they move through the grades.5 the research on empowering students validates the wisdom of engaging youth in education , health , and social issues . efficacy studies on youth peer mediation programs in which students are empowered to share responsibility for creating a safe and secure school environment demonstrated the value of turning to students as partners6,7 students learned peer mediation skills , reduced suspensions and discipline referrals in schools , and improved the school climate.7 research has also demonstrated that student peer educators achieve similar or better results than adult educators.8 a review article on the effects of giving students voice in the school decision - making process found evidence of moderate positive effects of student participation on life skills , democratic skills and citizenship , student - adult relationships , and school ethos while finding low evidence of negative effects.9 wallerstein,10 who has supported the use of youth empowerment strategies in all aspects of health promotion , noted that student participation enhances self - awareness and social achievement , improves mental health and academic performance , and reduces rates of dropping out of school , delinquency , and substance abuse . a 2014 review of 26 articles11 to identify the effects of student participation in designing , planning , implementing , and/or evaluating school health promotion measures found conclusive evidence showing ( 1 ) enhanced personal effects on students ( enhanced motivation , improved attitudes , skills , competencies , and knowledge ) ; ( 2 ) improved school climate ; and ( 3 ) improved interactions and social relationships in schools both among peers and between students and adults . both educational12,13 and health experts,10 as well as youth development experts,14 have advocated engaging students as partners to improve the health of peers , family , and community as well as improve the very process of school reform.5,12,13 pittman et al noted that change happens fastest when youth and community development are seen as two sides of the same coin and young people are afforded the tools , training and trust to apply their creativity and energy to affect meaningful change in their own lives and in the future of their neighborhoods and communities . toshalis and nakkula13 support the effectiveness of students as partners in promoting change within the school . they noted that student voice most often is only at the far left of the spectrum , using students as data sources ( figure 1 ) . 
however , in those schools that have tried involving students as leaders of change , remarkable success has occurred . various forms of youth engagement such as peer education , peer mentoring , youth action , student voice , community service , service - learning , youth organizing , civic engagement , and youth - adult partnerships provide students with a sense of safety , belonging , and efficacy ; gains in their sociopolitical awareness and civic competence ; strengthened community connections;16 and improved achievement.14 spectrum of student voice in schools and community given the opportunity to discover their true passion , students will accept the challenge and deliver . high school student zak malamed17 and a few friends decided it was time for students to speak up . they held their first twitter chat for students who were feeling frustrated about how little say they had in the school reform discussions going on around them . the question to students became what can we do to improve this school ? from the impetus of several frustrated students to the organization known as student voice ( http://www.studentvoice.org ) , the conversation has grown to a movement dedicated to revolutionizing education through the voices and actions of students . supporters promise to advocate for students to be authentic partners in education and ensure that they have a genuine influence on decisions that affect their lives . the student voice collaboration18 was started by the new york city department of education to help students improve themselves and their schools . participating students learned how the educational system works , interviewed school leaders about decision - making , and created a 1-page map showing how decisions were made in their school . students conducted research on a challenge in their schools and then developed a student - led campaign to address the challenge . finally , they set a city - wide agenda identifying something that would benefit all new york city students . as a result of this work , one student group developed 6 recommendations that were shared with the new york city chancellor of education . the program 's goal was to show students that they can bring about change by working within the system . the student - centered schools : closing the opportunity gap evaluation study , conducted by the stanford center for opportunity policy in education,19 described how 4 student - centered high schools in california supported student success . student - centered practices focused on the needs of students through a rigorous , rich , and relevant curriculum that connected to the world beyond school . personalization was critical and students were provided with instructional supports that enabled success . each of the 4 schools supported students ' leadership capacities and autonomy through inquiry - based , student - directed , and collaborative learning within the classroom and in the community . advisory programs , a culture of celebration , student voice , leadership opportunities , and connections to parents and community were embedded in each school . the study revealed that creating high schools designed around student rather than adult needs requires a shift in beliefs that must be translated into action . the stanford study19 also showed that teachers and administrators need to be prepared to address students ' academic , social , and emotional needs in ways that empower students to take control of their own learning . 
this has significant implications for teacher and administrator preparation programs , teacher induction , and professional development . a culture of collaboration and partnership must go beyond traditional educator networks to include students as partners and consumers of educational programs and services . meaningful student involvement requires educators to provide learning experiences that enhance students ' skill development . effective teachers guide students to discovery , help students make meaning of what they learn , and include students as essential and legitimate contributors to achieving their own health and success . schools must continuously acknowledge the diversity of students by validating and authorizing them to represent their own ideas , opinions , knowledge , and experiences and truly become partners in every facet of school change but certainly in those programs and services that directly impact their health and well - being . student empowerment can and should begin in the elementary grades . serriere et al20 described a youth - adult partnership in one elementary school in which mixed grade level k-5 students participated in small school gatherings described as social , civic , and academic networks designed to create a sense of community and encourage student voice . group projects focused on making a difference , defined by the students as helping the poor , writing letters to military personnel , or aiding a local animal shelter . these authentic activities incorporated critical thinking , decision - making , collaboration , and planning skills and demonstrated that given the right circumstances , even the youngest students can have a voice in their work . similarly , the national health education standards21 emphasize effective communication , goal setting , decision - making , and advocacy all skills that enable students to take a more active role in their school and community . ultimately , the goal is to prepare all students for college , career , and life , educating and empowering them to become informed , responsible , and active citizens . the wscc model provides a vehicle for students to create meaningful learning experiences in education and health that help create a safe and supportive school . students can best articulate their own needs , thus maximizing the provision of health and counseling services . however , schools can not simply ask a select few for their opinions or blessings ; rather , schools must make concerted and genuine efforts to move from the contrived student voice of a few students to the meaningful student involvement of all students . at the heart of meaningful student involvement are students whose voices have long been silenced.12 every student deserves to be healthy , safe , engaged , supported , and challenged but evidence suggests that most students do not receive the supports they need to achieve these outcomes . while the five promises articulated by the america 's promise alliance22 do not use the same terms as the wscc model , they are quite similar in describing the fundamental needs of students : healthy start , safe places , caring adults ( supported ) , opportunities to help others ( engaged ) , and , effective education ( challenged ) . a survey completed in 2006 by america 's promise revealed that 7 in 10 young people ages 12 to 17 ( 69% ) received only 3 or less of the 5 fundamental resources needed to flourish . 
only 31% ( or 15.3 million students out of 49.4 million students ) in grades 6 - 12 received 4 to 5 of these fundamental resources . the 2014 quaglia institute for student aspirations ' my voice survey23 also confirmed the need for more resources . the survey , completed by a racially and socioeconomically diverse sample of 66,314 students in grades 6 - 12 representing 234 schools across the nation , was designed to measure variables affecting student academic motivation and concentrated on the following student constructs : self - worth , engagement , and purpose / motivation , along with peer support and teacher support . the authors of the survey noted that the results of the 2014 survey demonstrated little change from the annual results since 2009 . students with a sense of self - worth were 5 times more likely to be academically motivated , yet 45% of students did not have a sense of self - worth . those who described themselves as engaged were 16 times more likely to be academically motivated , but 40% reported that they were not engaged . students with a sense of purpose were 18 times more likely to be academically motivated , but 15% reported no purpose . teacher support increased academic motivation 8 times over , while peer support increased academic motivation 4 times over . however , 39% of the students reported no teacher support and 56% reported no peer support.23 clearly , there is a need for a more coordinated and collaborative approach to meeting students ' basic needs , one involving families , schools , communities , and peers . the wscc model could be one mechanism schools and communities use to improve students ' feelings and experiences of self - worth , engagement , purpose , peer support , and teacher support as students become partners in the dissemination of the model . meaningful youth involvement in promoting the wscc model needs to be fostered . learning opportunities to empower youth can be divided into individual empowerment , organizational empowerment , and community empowerment.24 individual empowerment occurs when youth develop self - management skills , improve competence , and exert control over their lives , while organizational empowerment refers to schools and community organizations that provide opportunities for engaging in student empowerment as well as benefit from student empowerment . community empowerment refers to the provision of opportunities for citizen participation at the local , state , and national levels and the ensuing efforts to improve lives , organizations , and the community.25 successful youth - adult partnerships happen when the relationships between youth and adults are characterized by mutuality in teaching , learning , and action . while these relationships usually occur within youth organizations or in democratic schools , they could become one mechanism for disseminating the wscc model . fletcher asks adults to : imagine a school where democracy is more than a buzzword , and involvement is more than attendance . it is a place where all adults and students interact as co - learners and leaders , and where students are encouraged to speak out about their schools . picture all adults actively valuing student engagement and empowerment , and all students actively striving to become more engaged and empowered . envision school classrooms where teachers place the experiences of students at the center of learning , and education boardrooms where everyone can learn from students as partners in school change [ to improve not only education outcomes but also health outcomes ] .
what can schools do to empower students and support student voice ? the authors suggest adapting 4 goals identified by fletcher,25 to include a health focus as well as a school improvement focus : engage all students at all grade levels and in all subjects as contributing stakeholders in teaching , learning , and leading in school [ to ensure that student needs are being met ] . expand the common expectation of every student to become an active and equal partner in school change [ that includes health - promoting student support programs and services as cornerstones of school improvement ] . provide students and educators with sustainable , responsive , and systemic approaches to engaging all students [ in school improvement and health promotion ] and validate the experience , perspectives , and knowledge of all students through sustainable , powerful , and purposeful school - oriented and school - community roles.25 table 1 , an adaptation of a chart by fletcher,12 provides examples of empowerment roles students can assume as partners in promoting achievement and health through the implementation of the wscc model . while the table identifies opportunities at 3 grade clusters ( e = elementary / k-5 , m = middle grades/6 - 8 , hs = high school/9 - 12 ) , these suggestions can be easily adapted for any grade level . potential roles for students in the implementation of the wscc * e , elementary school ; m , middle school ; hs , high school ; wscc , whole school , whole community , whole child . adapted from fletcher.12 a number of researchers1416 have provided guidelines and recommendations for schools and communities on how to begin this process by : assessing the needs of youth on a regular basis;14,15 developing a local database of resources for youth development;14 asking community - based organizations ( cbos ) to document and share with schools what they specifically accomplish related to learning outcomes14 to coordinate a scope and sequence of learning;14 developing curricula that integrates community resources for learning and teaching;14 providing youth with a supportive home base in which youth can work with dedicated and nurturing adults;15 creating youth - adult teams that are intentional about the long - term social changes to be achieved;15 balancing the need for short - term individual supports for youth with long - term goal of community change;15 recognizing and rewarding youth for their participation in youth organizations;14 providing professional development for educators to learn about the power of youth organizations to assist in providing youth with skills;14 advocating for a line item in the community budget to support youth as partners in improving the community and schools;14 providing multiple options for youth participation ensuring that youth receive the support to progressively take on more responsibility as they gain experience and skills;16 providing coaching and ongoing feedback to both youth and adults;14,15 establishing strategies to recruit and retain a diverse core of youth;15,16 providing organizational resources such as budget , staff training , and physical space aligned to support quality youth - adult partnership;16 and providing adults and youth with opportunities to reflect and learn with same - age peers.16 in addition , kania and kramer26 identified 5 conditions needed for achieving collective impact on any issue ( but particularly educational reform ) that could be instructive for school - community partnerships that support youth engagement 
and empowerment . these include a common agenda , shared measurement systems , mutually reinforcing activities , continuous communication , and backbone support functions ( such as convening partners , conducting needs assessments , developing a shared strategic plan for aligning efforts , selecting success metrics , and designing an evaluation ) . hung et al27 in a review of the factors that facilitated the implementation of health - promoting schools which also engaged community agencies as partners in the process , identified the following effective and somewhat similar strategies : following a framework / guideline ; obtaining committed support from the school staff , school board , management , health agencies , and other stakeholders ; adopting a multidisciplinary , collaborative approach ; establishing professional networks and relationships ; and continuing training and education . hung et al also noted that coordination was the key to promoting school - community partnerships and encouraged a 2-pronged approach : a top - down approach , a more effective initiating force to introduce and support the coordination role ; and a bottom - up commitment , including the participation of parents and students , critical for sustaining an initiative . ten years of research,14 assessing the contributions of 120 community youth - based organizations in 34 cities across the nation , revealed that students working with cbos , when compared to youth in general , were 26% more likely to report having received recognition for good grades . almost 20% were more likely to rate their chances of graduating from high school as very high , 20% were more likely to rate the prospect of their going to college as very high , and more than 2 - 1/2 times were more likely to think it is very important to do community service or to volunteer and give back to their community.14 students can also play an important role within their own school or district . the centers for disease control and prevention28 recommends that schools and communities establish school health coordinating councils at the community level to facilitate the communication and common goals recommended by kania and kramer as well as the professional networks and relationships identified by hung et al as drivers of quality school health programs . members of both the district level and school councils / teams include representatives from education , public health , health and social service agencies , community leadership , and families . some schools and districts have found that including students as critical contributors to the work of the councils / teams is vital . the center consolidated school district ( colorado),29 recognizing the importance of a collaborative approach to student health and learning , has a health advisory committee that represents the wscc components and includes community professionals , school staff , parents , and students . the district believes that , with support , students can achieve academically and be successful in life . health and wellness efforts are integrated into the work of the center school district as reflected by a health and wellness goal for the district 's unified improvement plan . 
the district sees wellness as the foundation for learning sustained by creating and maintaining environments , comprehensive health , policies , practices , access to services and resources , and attitudes that develop and support the inter - related dimensions of physical , mental , emotional , and social health.29 persons at the district 's skoglund middle school believe that educating students about leading a healthy lifestyle is important and because students educate others about a healthy lifestyle and its impact , sustainability is enhanced . as administrators , staff , parents and students understand the importance of coordinated school health efforts , they become the school 's strongest advocates.30 skoglund has discovered that when parents and students demand something , it continues.30 clearly , student voice and involvement is valued in the district which uses the wscc model to guide its work . table 2 provides examples of resources to assist school and community agency staff members to empower students as partners , enable student voice , and develop students as partners for change .
the wscc model places students in the center for a reason : students are the consumers of the programs and services we , the adults , provide . a student - centered school considers the thoughts and opinions of the students it serves . that means schools must seek out the opinions and ideas of every student , not just those elected to student government or acknowledged as school leaders .
this dialogue must begin in the elementary grades as students learn how to develop and present a convincing argument and advocate for their own health , safety , engagement , support for learning , and academic challenge , as well as these supports for their peers . these skills can be developed , refined , and supported by the implementation of a comprehensive , sequential prek-12 health education program aligned with the national health education standards . school administrators must regularly engage all students through social media , surveys , town hall meetings , and focus groups . creating a continuous feedback loop , where comments are welcomed and expected , is critical to supporting student voice . whereas having student representatives on the school health committee / team is important , asking all students to participate in developing and implementing school health policies is necessary in order to create a safe environment for discussion . it is critical that students become involved in the conversation at the outset and not after decisions have already been made . building trust in the system is crucial to the success of the wscc model . as schools implement the wscc approach , they must create an ongoing dialogue about school health policies , programs , and services and ensure that the student body is well - represented in those conversations . three simple questions are critical to that process : what do students think about the planned policy , program , or service ? how will the policies , programs , and services impact all students in the school ? what would the students do differently if given the opportunity to do so ? ensuring that all students have the skills needed to become effective communicators is but a first step toward creating an environment where students feel safe and supported . empowering students as partners in the dissemination of the wscc model will help generate trust and acceptance and ensure that students ' needs are being met . ascd 's the learning compact redefined : a call to action set the stage for the development of the wscc model with this statement : we are calling for a simple change that will have radical implications : put the child at the center of decision - making and allocate resources ( time , space , and human ) to ensure each child 's success.3 creating meaningful roles for students as allies , decision makers , planners , and , foremost , as consumers ensures that our focus is truly student - centered . placing students in the center of the wscc model makes visible the commitment of education and health to collaboratively prepare today 's students for the challenges of today and the possibilities of tomorrow . we can accomplish this by engaging and empowering students and acknowledging them as capable and valuable partners in the process ( figure 2 ; source : ascd1 ) . the preparation of this paper involved no original research with human subjects .
background : students are the heart of the whole school , whole community , whole child ( wscc ) model . students are the recipients of programs and services to ensure that they are healthy , safe , engaged , supported , and challenged and also serve as partners in the implementation and dissemination of the wscc model . methods : a review of the number of students nationwide enjoying the 5 whole child tenets reveals severe deficiencies , while a review of student - centered approaches , including student engagement and student voice , appears to be one way to remedy these deficiencies . results : research in both education and health reveals that giving students a voice and engaging students as partners benefits them by fostering development of skills , improvement in competence , and exertion of control over their lives while simultaneously improving outcomes for their peers and the entire school / organization . conclusions : creating meaningful roles for students as allies , decision makers , planners , and consumers shows a commitment to prepare them for the challenges of today and the possibilities of tomorrow .
DISCUSSION Students' Perceptions of Achieving the 5 Whole Child Tenets How Schools Can Empower and Engage Students IMPLICATIONS FOR SCHOOL HEALTH Human Subjects Approval Statement
PMC5137306
the facilitation or inhibition of gastrointestinal motility due to stress has been previously reported ( 1 , 2 ) . as gastrointestinal motility is partially reflected by egg activity ( 3 , 4 ) , the effects of stress on the epigastric , supraumbilical and infraumbilical egg data can be correlated with anxiety scores . our previous study demonstrated that the local differences in the power content during 16-location egg were more clearly shown at rest , during the postprandial state and during the mirror drawing test ( mdt ) ( 5 ) . furthermore , we demonstrated the epigastric egg inhibition ( seen in 3 cpm activity ) and infraumbilical egg facilitation ( seen in 6 cpm activity ) during mdt stress in a numerical comparison of the power content ratio of the mdtr and topographic egg mapping of the mdtr ( power content during mdt / power content at rest ) ( 6 ) . therefore , in the present study , we compared the effects of mdt - related stress on each of the 16 locations of egg using the power content ratio of the mdtr to clarify the correlation between stress and egg , which partially reflects gastrointestinal motility ( 3 , 4 , 6 ) . this project was conducted under the approval of the ethics committee of niigata university , faculty of medicine ( project no . 179 ) . informed consent was obtained from all of the subjects after an explanation of the study immediately prior to the egg recording . there were 58 subjects ( 52 males and six females ) , with 23 of the subjects ranging in age from 20 - 38 years ( 23.1 ± 1.0 years , n=23 ) , while the age of the other 35 subjects was not known . the methods used for recording and analyzing eggs were the same as those used in the previous studies ( 7 , 8) . briefly , unipolar eggs were recorded from 16 locations ( channels , ch ) on the thoraco - abdominal skin surface ( fig . 1 ) , using a reference electrode on the right leg . ( fig . 1 . location of the electrodes . superimposed images of the body and the location of the 16 electrodes on the thoraco - abdominal body surface based on the xiphoid process , costal arch , and iliac line . the numbers ( 1 - 16 ) by the filled circles indicate the roughly averaged location of the electrodes . the length of x0~xmax was assumed to be 32 cm , and that of y0~ymax 36 cm , based on the superimposed body lines ( 5 ) ( j smooth muscle res . 2012 ; 48(2 - 3 ) : 47 - 57 ) ( 6 ) . ) the amplifier was a modified electroencephalographic ( eeg ) amplifier , with the time constant set at 5 sec , a high cut at 0.5 hz , and slopes of 6 db / oct ( low cut ) and 12 db / oct ( high cut ) ( biotop 6r 124 , nec - sanei , japan ) . after cleaning the skin with ethanol , electrode cream was applied to the disc electrode for the eeg ( diameter=11 mm ) . resting eggs were recorded for about 20 min in subjects who had fasted for at least eight hours , and were sampled every 128 sec ( 1 file ) . after recording the resting control data , subjects were exposed to the stress of the mdt . the mdt involves tracing the cue figure of a metal star , reflected onto a mirror , with an electric pen , which gives a click alarm when the tracing runs off the edge of the star ( error ) . the mdt stress was applied for about 5 min to obtain 2 - 3 egg files . compiled running spectra were obtained after the files were analyzed using the maximum entropy method ( mem ) . the spectral frequency readings were classified into five groups : the 1-cpm group ( 0 - 2.4 cpm ) , 3-cpm group ( 2.5 - 4.9 cpm ) , 6-cpm group ( 5.0 - 7.4 cpm ) , 8-cpm group ( 7.5 - 9.9 cpm ) and 10-cpm group ( 10.0 - 12.9 cpm ) .
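the spectral classification above is straightforward to reproduce . the sketch below ( python , illustrative only and not the code used in the study ) takes a pre - computed mem power spectrum for one 128-sec egg file ( frequency in cpm and power per bin , both hypothetical variable names ) and sums the power falling into each of the five frequency groups .

import numpy as np

# frequency groups ( cpm ) as defined in the methods above
CPM_GROUPS = {
    "1-cpm": (0.0, 2.4),
    "3-cpm": (2.5, 4.9),
    "6-cpm": (5.0, 7.4),
    "8-cpm": (7.5, 9.9),
    "10-cpm": (10.0, 12.9),
}

def band_power(freq_cpm, power):
    """Sum the spectral power of one channel / file inside each cpm group."""
    freq_cpm = np.asarray(freq_cpm)
    power = np.asarray(power)
    out = {}
    for name, (lo, hi) in CPM_GROUPS.items():
        mask = (freq_cpm >= lo) & (freq_cpm <= hi)
        out[name] = float(power[mask].sum())
    return out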
ensemble means were obtained during rest , during the stress of the mdt , and after the mdt . with regard to the egg parameters , this study focused on the power content ratio of the mdtr for each channel . ( table 1 . the egg power content ratio of the electrode locations in the 1 cpm ( 0 - 2.4 ) , 3 cpm ( 2.5 - 4.9 ) , 6 cpm ( 5.0 - 7.4 ) , 8 cpm ( 7.5 - 9.9 ) and 10 cpm ( 10.0 - 12.9 ) frequency groups for ch.2 , 5 , 8 and 10 - 16 : the effect of the mdt on the power content ratio ( mdtr , n=52 - 58 , means ± s.e.m . ) . 4 - 6 , 4 - 7 , 4 - 8 , 4 - 9 , 4 - 10 , and 13 - 14 , p<0.01 . 13 - 15 , 13 - 16 , 13 - 17 , 13 - 18 , 13 - 19 , 13 - 20 , 13 - 21 , and 13 - 22 , p<0.001 . ) table 1 lists the figures for the epigastric 2 , 5 and 8 channels and the infraumbilical 12 - 16 channels found in our previous studies for simplicity ( 5 , 6 ) . the anxiety scores were estimated using the hads ( hospital anxiety and depression scale ) ( 9 , 10 ) . the electrode positions were represented by two - dimensional standard coordinates , xi and yi , and a spectral peak at a certain electrode position was expressed as zi = ( xi , yi ) ( 7 , 8 , 11 , 12 ) . the mean and standard errors ( s.e.m . ) were calculated , and the student 's t test was used to determine the level of statistical significance . the epigastric ( ch.2 , 5 and 8) and infraumbilical ( ch.12 - 16 ) power content ratios of the mdtr at the five spectral frequencies , in addition to the umbilical channels 10 and 11 , are shown in table 1 . the power content ratio of the mdtr of 3 and 6 cpm in the epigastric channels was generally significantly lower than that of the infraumbilical channels . the significant linear correlations ( p<0.05 ) between the anxiety scores and the power content ratio of the mdtr are shown in table 2 and fig . 2 . ( table 2 . the linear correlation parameters ( slope , p and r ) between the anxiety scores and the mdtr : ch.2 ( 3 cpm ) , -1.61 , 0.046 , 0.071 ; ch.10 ( 6 cpm ) , 1.31 , 0.013 , 0.11 ; ch.11 ( 6 cpm ) , 1.11 , 0.033 , 0.080 ; ch.13 ( 6 cpm ) , 1.18 , 0.037 , 0.077 ; ch.15 ( 6 cpm ) , 1.71 , 0.008 , 0.123 ; ch.16 ( 6 cpm ) , 1.28 , 0.038 , 0.076 . ) ( fig . 2 . a : the correlation between the anxiety scores and the egg power content ratio of 3 cpm . the linear correlation between the anxiety scores ( y - axis ) and the power content ratio ( mdtr ) of 3 cpm of epigastric channel 2 ( x - axis ) ; slope = -1.61 , p = 0.046 , r = 0.071 . b : the correlation between the anxiety scores and the egg power content ratio of 6 cpm . the linear correlation between the anxiety scores ( y - axis ) and the power content ratio ( mdtr ) of 6 cpm of infraumbilical channel 15 ( x - axis ) ; slope = 1.71 , p = 0.008 , r = 0.123 . )
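as a minimal illustration of how the quantities in table 2 and figure 2 are obtained , the sketch below ( python ; the numeric arrays are placeholders , not study data ) computes the mdtr for one channel and frequency group and fits the linear relation between the hads anxiety scores and the mdtr .

import numpy as np
from scipy import stats

def mdtr(power_during_mdt, power_at_rest):
    """Power content ratio: power during the MDT divided by power at rest."""
    return power_during_mdt / power_at_rest

# per-subject values for one channel / frequency group (placeholder numbers only)
mdtr_values = np.array([0.7, 0.9, 1.1, 1.3, 1.6, 1.8])      # x-axis
anxiety_scores = np.array([4.0, 5.0, 6.0, 8.0, 9.0, 11.0])  # y-axis, HADS anxiety

result = stats.linregress(mdtr_values, anxiety_scores)
print(f"slope = {result.slope:.2f}, p = {result.pvalue:.3f}, r = {result.rvalue:.3f}")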
the slope of the ch.2 correlation ( 3 cpm ) was negative ( fig . 2a ) , and the slopes of ch.10 , 11 , 13 , 15 , and 16 ( 6 cpm ) were positive ( fig . 2b ) . it is well known that egg records gastrointestinal electrical activity or myoelectric activity , which reflects some of the motility as power content ( 3 , 4 ) . the power content ratio , or the normalization of the egg change , the mdtr ( power content during mdt / power content at rest ) , reflects the real change of the local egg for each electrode , as demonstrated in the topographic egg maps ( 6 ) . it is also well known that stress influences the gastric and colonic activity measured with egg . in fact , stress induces dual excitatory and inhibitory effects that can be observed with egg . cold pressor test stress , interviews and performing arithmetic calculations increased the colonic egg ( 13 ) . electric shock significantly decreased the percentage of the 3 cpm frequency and the tachyarrhythmia ( % ) component of the egg , but forehead cooling increased the percentage of the 3 cpm frequency ( 14 ) . the induction of similar gastric inhibition or facilitation by stress has been reported using manometry ( 14 , 15 , 16 , 17 ) . we have previously reported the effects of the acute stress of the mdt on gastric and colonic facilitation or inhibition with egg . however , mdt stress did not appear to exert effects on the intestinal egg activity ( 6 , 8 , 12 ) . it is generally accepted that the normal gastric spectral activity of egg is 3 cpm ( 3 , 18 , 19 ) . however , the gastric and colonic egg activity includes both 3- and 6-cpm egg activity according to gastrectomy and colectomy studies ( 7 , 20 , 21 , 22 , 23 , 24 ) . therefore , the infraumbilical 6 cpm egg activity in this study is considered to reflect the colonic myoelectric activity . both colonic facilitation and inhibition by stress have been reported ; however , the finding of a significantly higher power ratio of the mdtr of the infraumbilical 3 cpm than that in the epigastric recording during the mdt suggested that the mdt stress inhibited gastric egg and facilitated colonic egg . in addition , topographic egg maps drawn according to the power content ratio of the mdtr and the absolute power ratio of the mdtr supported this idea ( 6 ) . similar findings of colonic facilitation have been reported with manometry ( 13 , 25 , 26 , 27 , 28 , 29 ) . mdt - related stress significantly increased bowel evacuation frequency , while it is known that depressed patients tend to be constipated ( 30 ) . a linear correlation with a negative slope was found between the anxiety scores and the mdtr of the 3 cpm egg of channel 2 in this study ( table 2 , fig . 2a ) .
in our previous studies , the correlation between the anxiety scores and the 3 cpm mdtr was not calculated for channel 2 ( ch2 ) alone ; rather , the mean of the mdtr for channels 3 , 4 , 5 , and 6 was defined as the epigastric ch1 ( 8 , 12 ) . a significant linear correlation was not found between the anxiety scores and the 3 cpm ch1 mdtr ( 12 ) . similarly , the infraumbilical ch3 was calculated using the mean of channels 12 , 13 , and 14 ( 12 ) . however , a linear correlation with a positive slope was found between the anxiety scores and the infraumbilical 6 cpm ch3 mdtr ( 12 ) and channel 13 ( one of the ch3 constituents ) in this study , in addition to channel 15 ( fig . 2b ) and channel 16 ( table 2 ) . a significant linear correlation with a positive slope was also found in the mdtr of 6 cpm of umbilical channels 10 and 11 ( table 2 ) . the locations of channels 10 and 11 may correspond to the right and left flexures of the colon . it has been suggested that various stressors depress stomach contractility and emptying , and facilitate colonic motility , transit and defecation through the limbic , hypothalamic and autonomic nervous system via crf - r2 and crf - r1 ( corticotropin - releasing factor receptor subtypes 2 and 1 ) , respectively ( 32 , 33 , 34 , 35 ) . our egg findings in human subjects provide further support for these studies ( 6 ) . finally , the present results further support the idea that mdt stress inhibits stomach motility and facilitates colonic motility . the author does not have any financial relationship with the organization that sponsored the research .
electrogastrograms ( eggs ) were recorded at 16 locations on the thoraco - abdominal surface at rest and then both during and after the acute stress of performing the mirror drawing test ( mdt ) . a significant linear correlation with a negative slope was found between the anxiety scores and the ratio of the power content during the mdt to the power content at rest ( mdtr ) of the 3 cpm component from the epigastric channel 2 recording . in contrast , significant linear correlations with positive slopes were found between the anxiety scores and the mdtr of the 6 cpm component of the recordings from the infraumbilical channels ( channels 13 , 15 , and 16 ) . the epigastric 3-cpm egg activity reflects gastric myoelectric activity , while the infraumbilical 3- and 6-cpm activity reflects that of the colon . therefore , these results seem to further support the previous report of the inhibition of gastric egg by stress and the stress - mediated facilitation of colonic egg ( homma s , j smooth muscle res . 2012 ; 48(2 - 3 ) : 47 - 57 ) .
Introduction Methods Results Discussion Conflict of interest
PMC5115731
in target - based approaches to drug discovery , linking the observed phenotypic response to a ligand of interest with on - target modulation is a critical step . to this end , both on- and off - targets of a drug candidate need to be identified and characterized prior to clinical development . among many target identification methods , photoaffinity - labeling is particularly attractive , as the transient association of the molecular targets with a drug candidate becomes permanent after photo - cross - linking in the native cellular environment . in addition , for targets that are part of a fragile multiprotein complex , the in situ covalent capture prevents potential loss of the targets after cell lysis . because photoaffinity probes are generally used in excess relative to their targets in order to drive the formation of the target - drug complexes , nonspecific targets can also be captured during photo - cross - linking . to overcome this problem , two strategies have been employed : ( i ) use ligands with higher affinity so that the photoaffinity probes can be used at lower concentrations ; and ( ii ) use photoaffinity labels that generate reactive intermediates with high , yet selective , reactivities toward the ligand - bound targets . to this end , only a few photoaffinity labels have been reported in the past 40 years ( chart 1 ) , including phenyl azide , diazirine ( da ) , and benzophenone ( bp ) . while these photoaffinity labels have shown tremendous versatility in biomedical research , they have two major shortcomings : ( i ) the photogenerated nitrene , carbene , and diradical intermediates exhibit extremely short half - lives , leading to very low target capturing yields , and ( ii ) the nitrene , carbene , and diradical intermediates are prone to react nonselectively with any proximal c - h / x - h bonds ( x = n , o , s ) , resulting in high background . to balance reactivity with specificity , we envisioned that alternative photogenerated intermediates may exhibit longer half - lives and greater functional group selectivity . indeed , hamachi and co - workers have reported elegant ligand - directed chemistries based on the electrophilic tosyl and acyl imidazolyl groups and demonstrated their exquisite specificity in selective protein target labeling in situ . inspired by this work , we hypothesized that an appropriately functionalized photoreactive tetrazole could serve as a highly selective , electrophilic photoaffinity label for in situ target capture . here we report the development of 2-aryl-5-carboxytetrazole ( act , chart 1 ) as a robust photoaffinity label for identification of the targets and off - targets of dasatinib and jq-1 , two drugs profiled extensively in the literature . compared with da and bp , act gave higher yields in the ligand - directed photo - cross - linking reactions with the recombinant target proteins . in addition , the act - based probes facilitated the in situ target identification in a manner similar to the da - based ones . whereas the tosyl and acyl imidazolyl groups were successfully employed in labeling endogenous targets in living cells , they are not ideal affinity labels for general target identification because of concerns about their stability in the cellular milieu .
we hypothesized that act should serve as an ideal photoaffinity label based on the following considerations : ( i ) 5-carboxy - substituted 2-aryltetrazoles are photoactive ; ( ii ) placement of a carboxyl group at the c - position of 2h - tetrazole increases the electrophilicity of the photogenerated carboxy - nitrile imine intermediate ; ( iii ) nucleophilic thiol - addition of 2-mercaptobenzoic acid and 3-mercaptopropionic acid to the base - generated carboxy - nitrile imine was reported in the literature ; and ( iv ) the photogenerated carboxy - nitrile imine intermediate should undergo rapid medium quenching when a proximal nucleophile is not available ( figure s1 ) , minimizing the undesired reactions with nonspecific targets . thus , two series of photoaffinity probes were prepared ( see supporting information for synthetic schemes ) : one series is based on dasatinib , a potent inhibitor of bcr - abl kinase , the src family kinases , as well as btk ( bruton 's tyrosine kinase ) ; and the other is based on jq-1 , a potent inhibitor of the bet family of bromodomain proteins ( figure 1a ) . specifically , three photoaffinity labels , act , da , and bp , were attached to dasatinib or jq-1 via the previously reported linkage sites . the linker length was varied to adjust the distance of the reactive intermediate from the target binding site ( figure 1a ) . an alkyne tag was placed on the photoaffinity labels to enable the click chemistry - mediated detection and enrichment of the targets from cell lysates . to determine how the attachment of the photoaffinity label affects the inhibitory activity and specificity , kinome profiling was carried out for the dasatinib - derived probes ( figure 1b and table s1 ) , while the in vitro binding assay was performed for the jq-1-derived probes ( figure 1c and table s2 ) . we found that da probes 2a and 2b retained most of their inhibitory activities while act probes 1a and 1b showed a modest reduction ( 2 - 20-fold ) , particularly for 1a with the shorter linker . in comparison , the bp probes exhibited the largest reduction in inhibitory activity ( 25 - 400-fold ) , presumably due to the positioning of a large , flat aromatic structure in the solvent - exposed hinge region . for the jq-1 series , almost all the photoaffinity probes demonstrated greater inhibitory activities than jq-1 , indicating that the hydrophobic photoaffinity labels form additional interactions with brd2 - 4 outside the shallow and flat canonical binding pocket . in the cell proliferation assays , the photoaffinity label - linked jq-1 probes showed potencies similar to the parent jq-1 against the leukemic cell line skm-1 , but reduced activities against breast carcinoma mx-1 as well as non - small cell lung carcinoma nci - h1299 ( table s2 ) , likely due to the permeability difference of the photoaffinity probes and/or the disparate dependency of these cell lines on brd proteins for proliferation . figure 1 : dasatinib and jq-1-derived photoaffinity probes containing the 2-aryl-5-carboxy - tetrazole ( act ) , diazirine ( da ) , or benzophenone ( bp ) photoaffinity label ( pal ) and their biological activities . ( a ) structures of the photoaffinity probes . ( b ) a panel of 82 protein kinases was surveyed in this assay , and inhibition constants , ki , are given in micromolar . ( c ) plots of the inhibition of brd-2 , -3 , and -4 by jq-1-derived photoaffinity probes . see table s2 in the supporting information for ki values .
to compare the efficiency of these three photoaffinity labels in covalently labeling their targets , we treated recombinant btk and brd4 proteins with appropriate probes , and we detected the photo - cross - linked adducts using in - gel fluorescence analysis after the copper - catalyzed click chemistry with rhodamine - azide . for btk , all probes showed irradiation- and ligand - dependent labeling , with 1a and 3a giving the strongest fluorescence ( figure 2a and figure s2 ) , suggesting other factors , e.g. , click chemistry yield , may also affect the overall labeling efficiency . these probes also selectively labeled btk in the k562 cell lysate spiked with recombinant btk protein ( figure s3 ) . for brd4 protein , both act- and da - based probes showed uv light- and ligand - dependent labeling , while bp - based probes 6a/6b exhibited strong background labeling , evidenced by lack of signal attenuation in the presence of jq-1 ( figure 2b ) as well as labeling of bsa , which was added to the reaction mixture to prevent nonspecific binding of brd4 to the plastic surface ( figure s4 ) . in the k562 cell lysate spiked with recombinant brd4 , act - based probe 4a showed stronger labeling of brd4 than da - based probe 5a , while bp - based probe 6a showed no labeling ( figure s5 ) , presumably due to its nonspecific associations with many cellular proteins . figure 2 : evaluating the efficiency and selectivity of photoaffinity - labeling of recombinant proteins by the small - molecule probes . ( a ) evaluating the btk labeling efficiency using in - gel fluorescence ( top panels ) . for the reaction setup , 0.5 μg of btk ( final concentration 0.1 μm ) , 0.2 μm small - molecule probe , and 10 μm dasatinib ( for competition only ) in 50 μl of pbs were used . for photoirradiation , a hand - held uv lamp with a wavelength of 302 nm for act ( 5 min ) and bp ( 20 min ) or 365 nm for da ( 10 min ) was used . ( b ) evaluating the brd4 labeling efficiency using in - gel fluorescence ( top panels ) . for the reaction setup , 0.4 μg of brd4 ( final concentration 0.4 μm ) , 0.2 μm small - molecule probe , 5 μg of bsa , and 10 μm jq-1 ( for competition only ) in 50 μl of pbs were used . the equal loading of proteins was verified by sypro ruby staining of the same gels ( bottom panels ) . see supporting information for procedures of click chemistry with tamra - azide and polyacrylamide gel electrophoresis .
surprisingly , bp - based probes 3a and 6a did not yield any detectable photo - cross - linked adducts ; instead , the recombinant target proteins showed significant broadening of their mass peaks , suggesting the initial photoadducts , if they were formed , may have undergone fragmentation to generate lower - than - expected molecular weight adducts ( figure 3c , 3f ) . an alternative explanation is that the benzophenone serves as a photosensitizer to cause nonspecific oxidative damage to the proteins . importantly , the act - mediated photo - cross - linking with the target protein is ligand - dependent , as addition of dasatinib or jq-1 into the reaction mixture abolished the photoadducts ( figure s7 ) . in addition , the photo - cross - linking yield showed probe - concentration dependency , as an increasing amount of act probe 4a used in the reaction led to a higher photo - cross - linking yield ( figure s8 ) . figure 3 : quantifying the cross - linking efficiency of the photoaffinity labels with recombinant target proteins by lc - ms . ( a - c ) deconvoluted masses of the product mixture after incubating 2.5 μm human btk 387 - 659 with 25 μm dasatinib probe 1a , 2a , or 3a in 100 μl of pbs for 15 min followed by photoirradiation with a hand - held uv lamp for 5 min ( 302 nm for act and bp , 365 nm for da ) on ice . ( d - f ) deconvoluted masses of the product mixture after incubating 2.5 μm brd4 44 - 168 with 5 μm jq-1 probe 4a , 5a , or 6a in 100 μl of pbs followed by photoirradiation with a hand - held uv lamp for 5 min ( 302 nm for act and bp , 365 nm for da ) on ice . the cross - linking yield was calculated using the following equation : yield% = i ( photoadduct ) / ( i ( target protein ) + i ( photoadduct ) ) , where i ( target protein ) and i ( photoadduct ) represent the ion counts of the target protein and the photoadduct , respectively ; the yields are marked at the upper - right of the spectra . to identify cross - linking sites on the target protein , we digested the probe 1a - treated recombinant btk protein with trypsin , and analyzed the product mixture by lc - ms / ms . a tripeptide fragment corresponding to btk 488 - 490 with the carboxy - nitrile imine linked to the glu-488 side chain was identified ( figure 4a ) . it is noted that the recombinant btk 387 - 659 protein contains 25 glu and 14 asp residues and only glu-488 was detected as labeled , indicating that the photo - cross - linking is ligand - dependent . this ligand - directed , proximity - driven reactivity is consistent with the probe docking model ( figure 4b ) , in which the binding of probe 1a to the kinase active site brings the c of the act ca . 6.9 å away from the carboxylate of glu-488 ; indeed , it is the only nucleophilic side chain within 9.0 å of the electrophilic site . certainly , because the act is completely solvent exposed and highly mobile , these distances may vary as the act orients itself dynamically relative to the btk protein . we propose that the photoadduct is formed via nucleophilic addition of the glu-488 carboxylate to the carboxy - nitrile imine intermediate followed by a 1,4-acyl shift ( figure 4c ) . this mechanism is consistent with a literature report in which quenching of the in situ generated diaryl nitrile imine by an excess carboxylic acid produced the n-acyl - n-aryl - benzohydrazide product in good yield . it is conceivable that other nucleophiles such as cys ( figure s9 ) , if they are in close proximity , may also participate in the cross - linking reactions with act for other targets .
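as a quick worked example of the yield equation quoted above ( python ; the ion counts below are made up , not taken from the spectra ) :

def crosslink_yield_percent(i_target, i_photoadduct):
    """yield% = I_photoadduct / (I_target + I_photoadduct) x 100"""
    return 100.0 * i_photoadduct / (i_target + i_photoadduct)

# e.g. a photoadduct peak of 3.0e5 counts next to a residual target peak of 2.0e5
print(crosslink_yield_percent(i_target=2.0e5, i_photoadduct=3.0e5))   # -> 60.0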
since a recent report suggested that the photoreactivity of diaryltetrazole can be harnessed for photo - cross - linking with target proteins through their acidic side chains , we compared the intrinsic reactivity of the carboxy - nitrile imine to that of the diaryl - nitrile imine toward glutamic acid ( 10 mm ) in mixed pbs / acetonitrile ( 1:1 ) solution . in the model study , the glutamate - quenching product was clearly detected for the diphenyl - nitrile imine ( figure s10 ) . in contrast , the carboxy - nitrile imine underwent predominant chloride quenching when a weak nucleophile such as glutamic acid was present in solution ( figure s9 ) , suggesting that the observed photo - cross - linking of 1a with glu-488 of the btk enzyme is not merely the result of an elevated local concentration of the glutamate near the in situ generated carboxy - nitrile imine . indeed , because of the rapid chloride quenching of the reactive carboxy - nitrile imine , act should be more suitable as a photoaffinity label than the diaryltetrazoles , as the background cross - linking reactions with the nucleophilic side chains present on protein surfaces would be minimal . figure 4 : determination of the cross - linking site on the btk protein and the proposed ligand - directed cross - linking mechanism . ( a ) the ms / ms spectrum for the probe 1a - modified tripeptide fragment , emr , is shown with the fragment ions annotated on the structure . ( b ) a docking model of probe 1a bound to btk ( pdb code : 3k54 ) showing a proximal glu-488 residue located on a loop 6.9 å away from the c of the tetrazole ring . ( c ) proposed mechanism of the ligand - dependent nucleophilic addition to the carboxy - nitrile imine followed by the o → n acyl shift to generate the specific photoadduct . encouraged by the high in vitro photo - cross - linking efficiency , we sought to assess the efficiency and selectivity of act as a new photoaffinity label for in situ target identification . for comparison , we included the da - based probes 2a and 5a , as they exhibited excellent biological activities ( figure 1 ) and moderate photo - cross - linking reactivity ( figures 2 and 3 ) . in brief , suspended k562 cells were treated with 1 μm probe 1a , 2a , 4a , or 5a for 5 h before uv irradiation ( 5 min for act probe - treated cells at 302 nm ; 10 min for da probe - treated cells at 365 nm ) . the cells were lysed , and the lysates were reacted with biotin azide prior to pulldown with the streptavidin agarose beads . western blot analyses revealed that the dasatinib targets , btk , src , and csk , and the jq-1 target , brd4 , were successfully captured by their respective photoaffinity probes , and pretreating the cells with 50 μm parent drug , dasatinib or jq-1 , abolished the capture ( figure s11 ) . in - gel digestion of the streptavidin - captured proteins on the sds - page gel followed by lc - ms / ms analyses produced lists of potential targets . to ensure that the captured proteins are derived from the ligand - dependent photo - cross - linking , high - confidence targets were compiled based on the following two criteria : ( 1 ) at least two unique peptides were identified in the ms , and ( 2 ) the area under the curve ( auc ) , a measurement of ms signal intensity and reliability , for the parent drug - pretreated sample is not detectable . using these criteria , six kinases were identified by probe 1a , five of which also appeared in probe 2a - treated cells , indicating that act works similarly to da ( figure 5a , table s3 ) .
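the two - criterion filter described above can be expressed compactly ; the sketch below ( python , with hypothetical field names and made - up example values ) keeps only proteins with at least two unique peptides and no detectable auc in the parent - drug - pretreated ( competition ) sample .

from dataclasses import dataclass

@dataclass
class MsHit:
    protein: str
    unique_peptides: int     # criterion 1: >= 2 unique peptides
    auc_probe: float         # MS signal in the probe-only sample
    auc_competition: float   # MS signal after pretreatment with the parent drug

def high_confidence_targets(hits):
    """Apply criteria (1) and (2) to a list of MsHit records."""
    return [h.protein for h in hits
            if h.unique_peptides >= 2 and h.auc_competition == 0.0]

example = [MsHit("BTK", 5, 1.2e7, 0.0), MsHit("ALB", 3, 8.0e5, 7.5e5)]
print(high_confidence_targets(example))   # -> ['BTK']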
however , probe 1a failed to identify the abl protein , presumably due to a lack of proximal nucleophilic side chains near the kinase active site . for jq-1 targets , probes 4a and 5a successfully captured the bromodomain proteins brd-2 , -3 , and -4 with minimal off - targets ( figure 5b , table s4 ) , suggesting that both act and da are efficient in in situ target identification . comparison of our data with other literature - reported ms - based target identification studies revealed that these act- and da - based photoaffinity probes performed exceptionally well ( tables s5 - s6 ) . figure 5 : venn diagrams of the protein targets identified by ( a ) dasatinib - derived photoaffinity probes 1a and 2a ; and ( b ) jq-1 - derived photoaffinity probes 4a and 5a . taken together , we show that act can serve as an effective photoaffinity label for target identification both in vitro and in live cells . compared to existing photoaffinity labels such as bp and da , the main advantage of act lies in its unique photo - cross - linking mechanism , which in principle should lead to reduced background reactions with nonspecific targets as well as facile mapping of the ligand - binding site . structurally , act is comparable in size to bp and the electronically stabilized da derivatives such as trifluoromethylaryl diazirine , and it features a modular design with the carboxy group at the c - position of the 2h - tetrazole providing the conjugation handle for a drug molecule and the aryl group responsible for the photoreactivity . compared to da and bp , act showed higher cross - linking yields with the desired targets in vitro ( figure 3 ) , but it produced similar efficiency in target capture in situ in a two - step cross - linking / capture procedure ( figure s11 ) , suggesting that additional optimization of the capture step may be necessary in order to achieve a higher overall target capture yield . in the present design , an alkyne tag was appended onto the aryl ring to enable click chemistry - mediated target capture . however , alternative chemical moieties that are captured covalently by engineered enzymes , e.g. , the haloalkane moiety for halotag and the benzylguanine moiety for snap - tag , will be explored in the future for more efficient target capture . because of its unique cross - linking mechanism , a potential drawback of act is that a suitable nucleophile needs to be present near the ligand - binding site for a target to be captured and identified , which may result in false negatives ; for example , abl kinase was not identified by 1a in this study . in principle , this limitation can potentially be overcome by increasing the linker length between act and the ligand to allow the survey of a larger area surrounding the ligand - binding pocket . in summary , we have developed a new photoaffinity label , 2-aryl-5-carboxytetrazole ( act ) , for efficient in situ target capture and subsequent identification . the attachment of act to two drug molecules was generally well tolerated without significantly altering the binding affinity and specificity . compared with da and bp , act provides a unique mechanism of target capture through which the photogenerated carboxy - nitrile imine intermediate reacts with a proximal nucleophile near the target active site . as a result , act displayed the cleanest and most efficient cross - linking with the recombinant target proteins in vitro among the three photoaffinity labels tested .
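the nucleophile - proximity requirement discussed above can be checked computationally before committing to an act probe . the sketch below , which assumes biopython , a locally downloaded structure file and an approximate coordinate for the act attachment point taken from docking , lists nucleophilic side - chain atoms within a chosen cutoff ; the residue / atom selections and the 9 å cutoff are our assumptions , not part of the authors ' workflow .

# Scan a crystal structure for nucleophilic side chains near a chosen probe atom.
# Requires biopython and a local PDB file; the file name, reference coordinate,
# and 9 A cutoff are illustrative assumptions.
import numpy as np
from Bio.PDB import PDBParser

NUCLEOPHILES = {"GLU": "OE", "ASP": "OD", "CYS": "SG", "LYS": "NZ",
                "SER": "OG", "TYR": "OH", "HIS": "NE"}

def nearby_nucleophiles(pdb_file, ref_coord, cutoff=9.0):
    structure = PDBParser(QUIET=True).get_structure("target", pdb_file)
    ref = np.array(ref_coord, dtype=float)
    hits = []
    for residue in structure.get_residues():
        prefix = NUCLEOPHILES.get(residue.get_resname())
        if prefix is None:
            continue
        for atom in residue:
            if atom.get_name().startswith(prefix):
                d = float(np.linalg.norm(atom.coord - ref))
                if d <= cutoff:
                    hits.append((residue.get_resname(), residue.get_id()[1],
                                 atom.get_name(), round(d, 1)))
    return sorted(hits, key=lambda x: x[-1])

# ref_coord would be the modelled position of the ACT carboxy carbon from docking
print(nearby_nucleophiles("3k54.pdb", ref_coord=(12.3, 45.6, 7.8)))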
in the in situ target identification studies with two previously profiled drugs , dasatinib and jq-1 , act successfully captured the desired targets in both cases with an efficiency comparable to da . while aniline was used as the aryl group in the present study , a wide range of heterocycles will be explored in the future with the goal of identifying acts with enhanced solubility and photo - cross - linking reactivity . one microliter of 0.5 mm dasatinib in dmso ( for competition experiments ) or dmso ( without dasatinib competition ) was added to 0.5 µg of btk in 50 µl of pbs . after incubation at r.t . for 15 min , 1 µl of 10 µm photoaffinity probe in dmso was added . after additional incubation at r.t . for 30 min , the mixture was irradiated with a hand - held 302 nm uv lamp , ca . 2 - 3 cm from the top of the sample . a premixed click reaction cocktail ( 6 µl , 1:3:1:1 of 50 mm cuso4 in water / 1.7 mm tbta in 1:4 dmso - buoh / 50 mm tcep in water / 1.25 mm tamra - azide in dmso ) was added , and the reaction mixture was incubated at r.t . for 1 h. after 1 h , 500 µl of cold acetone was added , and the mixture was left at -20 c overnight . the mixture was then centrifuged at 17,200 g at 4 c for 20 min and the pellet was collected . to the pellet was added 30 µl of 1x sds sample buffer , and the mixture was boiled at 95 c for 10 min before sds - page with a 4 - 20% bis - tris gel using mops as running buffer . one microliter of 0.5 mm dasatinib in dmso ( for competition experiments ) or dmso ( without dasatinib competition ) was added to 0.5 µg of btk in 50 µl of 2 mg / ml k562 cell lysate in pbs . after incubation at r.t . for 15 min , 1 µl of 10 µm photoaffinity probe in dmso was added . the mixture was irradiated with a hand - held 302 nm uv lamp , ca . 2 - 3 cm from the top of the sample . a premixed click reaction cocktail ( 6 µl , 1:3:1:1 of 50 mm cuso4 in water / 1.7 mm tbta in 1:4 dmso - buoh / 50 mm tcep in water / 1.25 mm tamra - azide in dmso ) was added , and the reaction mixture was incubated at r.t . for 1 h. after 1 h , 500 µl of cold acetone was added and the mixture was left at -20 c overnight . the mixture was then centrifuged at 17,200 g at 4 c for 20 min , and the pellet was collected . to the pellet was added 30 µl of 1x sds sample buffer , and the mixture was boiled at 95 c for 10 min before sds - page with a 4 - 20% bis - tris gel using mops as running buffer . one microliter of 0.5 mm ( + ) -jq-1 in dmso ( for competition experiments ) or dmso ( without competition ) was added to 0.4 µg of brd4 and 5 µg of bsa ( added to reduce nonspecific binding to the vial surface ) in 50 µl of pbs . after incubation at r.t . for 15 min , 1 µl of 10 µm photoaffinity probe in dmso was added . after additional incubation at r.t . for 30 min , the mixture was irradiated with a hand - held 302 nm uv lamp , ca . 2 - 3 cm from the top of the sample . a premixed click reaction cocktail ( 6 µl , 1:3:1:1 of 50 mm cuso4 in water / 1.7 mm tbta in 1:4 dmso - buoh / 50 mm tcep in water / 1.25 mm tamra - azide in dmso ) was added , and the reaction mixture was incubated at r.t . for 1 h. after 1 h , 500 µl of cold acetone was added and the mixture was left at -20 c overnight . the mixture was then centrifuged at 17,200 g at 4 c for 20 min and the pellet was collected . to the pellet was added 30 µl of 1x sds sample buffer , and the mixture was boiled at 95 c for 10 min before sds - page with a 4 - 20% bis - tris gel using mops as running buffer . one microliter of 0.5 mm ( + ) -jq-1 in dmso ( for competition experiments ) or dmso ( without competition ) was added to 0.1 µg of brd4 in 50 µl of 2 mg / ml k562 lysate in pbs .
after incubation at r.t . for 15 min , 1 µl of 10 µm photoaffinity probe in dmso was added . after additional incubation at r.t . for 30 min , the mixture was irradiated with a hand - held 302 nm uv lamp , ca . 2 - 3 cm from the top of the sample . a premixed click reaction cocktail ( 6 µl , 1:3:1:1 of 50 mm cuso4 in water / 1.7 mm tbta in 1:4 dmso - buoh / 50 mm tcep in water / 1.25 mm tamra - azide in dmso ) was added , and the reaction mixture was incubated at r.t . for 1 h. after 1 h , 500 µl of cold acetone was added and the mixture was left at -20 c overnight . the mixture was then centrifuged at 17,200 g at 4 c for 20 min and the pellet was collected . to the pellet was added 30 µl of 1x sds sample buffer , and the mixture was boiled at 95 c for 10 min before sds - page with a 4 - 20% bis - tris gel using mops as running buffer . plates were stamped with 5 µl of kinase buffer ( life technologies # pr4940d ) containing recombinant kinase ( 2.5 - 10 nm final concentration ) , eu- or tb - labeled antibodies ( anti - his or anti - gst ; 0.5 - 2 nm final concentration ) , and fluorescently tagged probe ( 3 - 200 nm final concentration ) . appropriate probes were diluted in kinase buffer , and 120 µl of compound was added to the plate using a biomek fx . example for btk kinase : btk ( invitrogen , pv3363 ) was added to 5 µl of kinase buffer ( # pr4940d ) to a final concentration of 10 nm , supplemented with 2 nm tb - labeled anti - his antibody and 200 nm oregon green - labeled probe . afterward , 120 µl of diluted compound in kinase buffer was added and the plate was incubated at r.t . for 2 h. the plate was read on an envision plate reader , and the ki values were calculated using the assay explorer software . one hundred million k562 cells were plated in 20 ml of dmem media ( 5 million cells / ml ) without fbs and antibiotics . twenty µl of 50 mm unmodified ligand ( dasatinib or ( + ) -jq-1 ) in dmso ( competition experiments ) or dmso ( without competition , control ) was added to the cells , and the mixture was incubated for 30 min ( 37 c , 5% co2 , gentle shaking ) . afterward , 20 µl of 1 mm probe in dmso ( all experiments except the control ) or dmso ( control ) was added ( final competitor concentration = 50 µm , final probe concentration = 1 µm ) , and the sample was kept in the incubator for 5 h ( 37 c , 5% co2 , gentle shaking ) . after 5 h of incubation , cells were washed twice with 2 ml of pbs and then resuspended in 2 ml of pbs in 35 mm petri dishes . the mixture was irradiated with a hand - held 302 nm uv lamp , ca . 2 - 3 cm from the top of the sample . the pbs was then changed to 2 ml of 0.02% tween-20 in pbs , and a protease inhibitor cocktail was added ( amresco , # m250 ) . the suspended cells were lysed with sonication ( 10 x 10 s with 10 s breaks , 40% power ) on ice . the lysate was centrifuged ( 20 min , 17,200 g , 4 c ) and filtered through a 0.2 µm membrane . the protein concentration was measured to be 8 - 12 mg / ml using the bca assay . the click reaction was performed following a published procedure . in brief , 10 mg of cell lysate was diluted with pbs to 5 ml to obtain a final concentration of 2 mg / ml . to the above solution , 113 µl of 5 mm azide - peg3 - biotin ( aldrich , # 762024 ) in dmso , 113 µl of 50 mm tcep in pbs , 340 µl of 1.7 mm tbta in 1:4 dmso - buoh , and 113 µl of 50 mm cuso4 were added . the mixture was gently mixed at r.t . for 1 h before 45 ml of acetone was added . after centrifugation , the protein pellet was collected , washed with 2 x 10 ml of cold methanol , and redissolved in 14 ml of 0.1% sds in pbs .
prewashed streptavidin agarose beads ( 60 µl , thermo scientific , # 20347 ) were added , and the mixture was rocked at 4 c overnight . the beads were washed with 3 x 1 ml of 0.1% sds in pbs followed by 5 x 1 ml of pbs . then , 60 µl of 2x sds sample buffer was added and the mixture was boiled at 95 c for 12 min before the samples were loaded onto an sds - page gel . rp - lc - ms was performed using an agilent 1100 hplc coupled to an agilent lc / msd tof running masshunter workstation acquisition b.04.00 . data were deconvoluted in masshunter qualitative analysis b.07.00 using the maximum entropy algorithm with a 0.5 da mass step , proton mass adduct , and baseline subtract factor 7.0 . the site of modification of btk by probe 1a was determined by in - gel trypsin digestion of the band corresponding to the protein after labeling with 1a , as described in the following reference : shevchenko , a. evaluation of the efficiency of in - gel digestion of proteins by peptide isotopic labeling and maldi mass spectrometry . lc - ms / ms analysis was performed using a waters nanoacquity hplc system coupled to a thermo fisher scientific fusion mass spectrometer . separation of the peptides was achieved using a thermo easyspray pepmap column ( es802 ; c18 , 2 µm , 100 å , 75 µm x 25 cm ) at a flow of 0.25 µl / min , with a gradient starting at 5% b ( b = 0.1% formic acid in acetonitrile , a = 0.1% formic acid in water ) , ramping to 15% b at 2 min , 15 - 35% b over 20 min , followed by a 5 min ramp to 80% b , washing for 6 min at elevated flow ( 0.4 µl / min ) , before returning to the starting conditions . the fusion source was operated at 1.9 kv in positive ion mode with ms detection in the orbitrap using 120 k resolution . the modified tripeptide was identified by its fragmentation spectrum that resulted from quadrupole isolation of the triply charged ion using an isolation window of 1.8 m / z , and fragmentation via hcd at 26% collision energy , with fragment ion detection in the orbitrap at 15 k resolution . protein from in situ enriched samples was eluted from the beads with 100 µl of 2x lds - page sample buffer ( invitrogen ; 141 mm tris base , 106 mm tris - hcl , 2% lds , 10% glycerol , 0.51 mm edta , 0.22 mm serva blue g , 0.175 mm phenol red , ph 8.5 ) , and the mixture was heated to 80 c for 10 min . a 20 - µl sample was applied to sds - page running with a 4 - 12% bis - tris gel and mops running buffer to remove the detergent before in - gel digestion . five microliters , representing 10% of each sample , was loaded via a waters nanoacquity autosampler onto an acclaim pepmap precolumn ( p / n 164535 ) with online trapping and salt removal ( trapping flow rate of 5 µl / min for 3.5 min ) . analytical separation was performed over a 90 min run using an easyspray column ( es802 ) heated to 45 c . the reverse - phase gradient was delivered at a flow rate of 0.225 µl / min by the waters nanoacquity hplc as follows : 0 min , 10% b ; 55 min , 25% b ; 60 min , 40% b ; 60.1 min , 98.0% b ; 65.1 min , 10% b ; 89.0 min , 10% b , where b is 0.1% formic acid in acetonitrile . spectra were collected on a thermo fisher scientific fusion mass spectrometer using the following parameters : 2.1 kv spray voltage , 275 c transfer tube temperature , 350 - 1500 m / z scan range with a quadrupole isolation window of 1.6 m / z , ms1 in the orbitrap at 120 k resolution , ms2 by cid in the ion trap with rapid speed , ms2 scans collected with top speed 3 s cycle , dynamic exclusion with repeat count 1 if occurs within 30 s and exclude for 60 s.
mips was on , with charge states 2 - 7 allowed , an agc setting of 4e5 for the orbitrap , and 2e3 for the ion trap . raw files were processed by proteome discoverer ( v 2.1.081 ) and searched by mascot ( v 2.4.0 ) using the uniprot human database ( downloaded 08 - 10 - 2015 ) . the ms1 tolerance was set to 20 ppm , and ms2 tolerances were set to 0.8 da . label - free quantitation was performed with the precursor ions area detector function . areas under the curve ( aucs ) less than 2.0e5 were deemed below the loq , based on previous studies of the performance of the instrument using proteomic reagent standards . thresholds for significant differences were set per experiment , dependent upon the determination of potential sample - loading bias by comparing total ion chromatogram ( tic ) intensity between paired injections ( competition ) and the average signal from nonspecifically binding protein background .
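for completeness , the analytical gradient described above can be represented as a small set of ( time , %b ) breakpoints and interpolated linearly between them ; the final hold reconstructed above and the helper below are our own reading of the gradient table , not vendor software output .

import numpy as np

# (time_min, %B) breakpoints for the 90 min analytical gradient described above
gradient = [(0, 10), (55, 25), (60, 40), (60.1, 98), (65.1, 10), (89, 10)]
times, percent_b = zip(*gradient)

def percent_b_at(t):
    # linear interpolation between breakpoints (holds the last value after 89 min)
    return float(np.interp(t, times, percent_b))

for t in (10, 57, 60.05, 70):
    print(f"t = {t:6.2f} min -> {percent_b_at(t):5.1f} %B")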
photoaffinity labels are powerful tools for dissecting ligand - protein interactions , and they have broad utility in medicinal chemistry and drug discovery . traditional photoaffinity labels work through nonspecific c - h / x - h bond insertion reactions with the protein of interest by the highly reactive photogenerated intermediate . herein , we report a new photoaffinity label , 2-aryl-5-carboxytetrazole ( act ) , that interacts with the target protein via a unique mechanism in which the photogenerated carboxy - nitrile imine reacts with a proximal nucleophile near the target active site . in two distinct case studies , we demonstrate that the attachment of act to a ligand does not significantly alter the binding affinity and specificity of the parent drug . compared with diazirine and benzophenone , two commonly used photoaffinity labels , act showed higher photo - cross - linking yields toward the protein targets in vitro in the two case studies , based on mass spectrometry analysis . in the in situ target identification studies , act successfully captured the desired targets with an efficiency comparable to the diazirine . we expect that further development of this class of photoaffinity labels will lead to a broad range of applications across target identification , target validation , and elucidation of the binding site in drug discovery .
Introduction Results and Discussion Experimental Section
PMC2693973
tear lysozyme is a high molecular weight , long chain glycolytic enzyme secreted by the lachrymal gland . among the tear proteins identified , lysozyme constitutes around 20% - 40% of the total tear protein ( farris 1985 ) and its concentration in the tear film is higher than in any other fluid of the body ( fleming 1922 ) . this protein has the capacity to dissolve gram - negative bacteria walls by the enzymatic digestion of mucopolysaccharides ( milder 1987 ) . due to this bactericidal action , lysozyme has been considered as one of the essential elements of the protective tear film barrier against ocular infection ( mackie and seal 1976 ) . the importance of this lachrymal component has contributed to the development of methods for its detection and measurement , as well as to efforts to correlate its concentration with ocular pathologies . several studies have shown a decrease in the concentration of lysozyme in patients with keratoconjunctivitis sicca ( van bijsterveld 1969 ; mackie 1984 ; montero 1990 ) , suggesting that a drop - off in tear lysozyme levels may constitute an important parameter to detect a malfunctioning lachrymal gland ( klaeger 1999 ) . recently , the presence of a new family of compounds in the tear film has been described : the diadenosine polyphosphates ( pintor , carracedo et al 2002 ) . these naturally occurring dinucleotide compounds exhibit both intracellular and extracellular physiological actions , including vasoactive properties , neuromodulatory regulation of neurotransmitter release , and intracellular modulation of ion channels ( mclennan 2000 ; hoyle 2002 ) . formed by two adenosine molecules joined by a variable phosphate chain , they are abbreviated as apna ( n = 2 - 7 , where n describes the number of phosphates ) . the activity of these nucleotides on ocular tissues is being investigated , and it is known that they act through p2 receptors to modulate intraocular pressure in rabbits ( pintor , peral , pelez et al 2002 ) ; ap4a and utp improve the rate of wound healing in the cornea of new zealand white rabbits ( pintor , bautista et al 2004 ) , and ap4a , ap5a , and ap6a can stimulate tear secretion after single - dose topical application in rabbits ( pintor , peral , hoyle et al 2002 ) . in order to investigate the physiological role of nucleotides in the tear film , lysozyme levels in new zealand white rabbit tears were measured after topical application of the mentioned substances . the methodological approach used to measure the lysozyme levels was the agar diffusion method described by van bijsterveld ( 1974 ) . the analysis of the data suggests an increase in tear lysozyme levels with the application of the tested nucleotides . twelve male new zealand white rabbits from granja cunicula san bernardo ( navarra , spain ) weighing 2.0 - 2.5 kg were kept in individual cages with free access to food and water and subjected to regular cycles of light and dark ( 12 hours each ) . the new zealand white rabbits were 6 months old and the slit lamp exam evidenced no ocular pathology or alteration that could affect tear secretion . all the experiments were performed according to the association for research in vision and ophthalmology ( arvo ) statement for the use of animals in ophthalmic and vision research and in accordance with the european communities council directive ( 86/609/eec ) . three nucleotides were tested in this work : utp from amersham biosciences , inc .
( piscataway , nj ) ; ap4a from sigma chemical ( st louis , mo ) ; and up4u ( diquafosol , formerly ins365 ) kindly provided by inspire pharmaceuticals ( durham , nc , usa ) . the p2 antagonists employed , pyridoxalphosphate-6-azophenyl-2 , 4-disulfonic acid ( ppads ) , suramin , and reactive blue-2 ( rb-2 ) , were purchased from sigma / rbi ( natick , ma ) . for single - dose experiments , all the nucleotides were applied at a concentration of 100 µm in a volume of 10 µl in the tested eye . the p2 receptor antagonists ppads , suramin and rb-2 were instilled at concentrations of 100 µm ( 10 µl ) , 30 min before the application of any of the nucleotides . the utp , ap4a and up4u nucleotides were each tested twice in the whole sample ( n = 24 for each agonist ) . the p2 receptor antagonists were tested once for each agonist in the whole sample ( n = 12 for each agonist ) . the diffusion in agar method was employed to measure the lysozyme levels in rabbit tears . calibration curves as well as all the lysozyme measurements were performed as indicated by van bijsterveld ( 1974 ) and mackie and seal ( 1976 ) . briefly , the protein amount was obtained by measuring the inhibitory halos around a whatman n1 paper disc of 5 mm in diameter . the paper discs were placed with clamps in the upper bulbar conjunctiva to avoid mucus strands . when the tears had soaked the discs , they were removed and put on petri dishes with agar medium on which micrococcus lysodeikticus was grown . the petri dishes were then incubated at 37 c for 24 hours and finally the zones of lysis of micrococcus lysodeikticus were measured . lysozyme standard curves for quantification were constructed with known concentrations of hel ( hen egg lysozyme ) . for the mentioned curve , the 5 mm paper discs were weighed before and after being soaked by rabbit tears in order to estimate the equivalent volume of lysozyme solution to apply on the calibration paper discs . the difference in weight was equivalent to 5 µl of lysozyme solution , so that volume was applied to estimate the standard curve . concentrations between 0.1 mg / ml and 1.0 mg / ml of standard hen lysozyme were applied on the discs in a final volume of 5 µl . after two control measurements , 10 µl of the studied compound was topically instilled in one eye ( taking the contralateral eye as a control ) , and the tear samples were collected every hour for five hours . for measuring the diameter of the halos , petri dishes were scanned and analyzed with the computer program imagej ( v.1.37 , nih , usa ) . briefly , scanned images of the inhibition halos were transformed into 8-bit black and white images and further transformed into a binary image prior to the corresponding calculation . with this standardized method , data were analyzed using the paired t - test and significance was set at p < 0.05 .
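a minimal sketch of how halo diameters could be converted to lysozyme concentrations is shown below , assuming ( as is common for radial diffusion assays , though not stated explicitly by the authors ) that the lysis - zone diameter varies linearly with the logarithm of the lysozyme concentration ; the calibration numbers are invented for illustration , and the paired t - test call only mirrors the statistical comparison described above .

import numpy as np
from scipy.stats import ttest_rel

# calibration discs: known hen egg lysozyme concentrations (mg/ml) and the
# measured diameters (mm) of their lysis halos -- numbers are illustrative
std_conc = np.array([0.1, 0.25, 0.5, 0.75, 1.0])
std_diam = np.array([6.1, 7.4, 8.6, 9.3, 9.9])

# assume halo diameter varies linearly with log10(concentration); this is a
# common treatment for radial diffusion assays, not the paper's stated fit
slope, intercept = np.polyfit(np.log10(std_conc), std_diam, 1)

def lysozyme_conc(diameter_mm):
    """Interpolate an unknown sample's concentration from its halo diameter."""
    return 10 ** ((diameter_mm - intercept) / slope)

print(round(lysozyme_conc(8.0), 2), "mg/ml")

# paired comparison of treated vs contralateral control eyes (illustrative data)
t_stat, p_value = ttest_rel([1.9, 2.1, 2.4, 2.0], [1.0, 1.1, 1.2, 1.05])
print(round(t_stat, 2), round(p_value, 4))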
to determine whether these nucleotides were able to modify rabbit tear lysozyme levels at intervals after the nucleotide topical application , single doses of utp , ap4a and up4u were applied at 100 µm ( at a final volume of 10 µl ) ( figure 1 ) . the results for the time - course of the single - dose application compared with the basal curve for lysozyme are represented in figure 2 . it is possible to see that all three substances increase the concentration of lysozyme in the rabbit tears , and that the maximal effect is obtained two hours after the instillation of the corresponding nucleotide .
this effect remained for at least 3 more hours and then returned to basal levels ( figure 2 ) . a representative experiment showing the inhibitory halos around whatman n1 paper discs at the moment of maximal effect is presented in figure 3 . these halos represent the zones of lysis of micrococcus lysodeikticus due to the action of lysozyme . the first disc shows the basal diameter of lysis for a control rabbit ( basal lysozyme concentration present in tears , figure 3a ) . the others show the maximally increased diameter of lysis due to the action of the different nucleotides on tear lysozyme levels after their topical application ( figure 3a ) . a representative value to compare effects among nucleotides can be obtained by taking the mean lysozyme concentration over the 2 - 5 hour plateau for each of the substances versus the basal concentration . as shown in figure 3b , the mean values for all the nucleotides showed a significant increase in lysozyme concentrations over the basal level . the increase in lysozyme concentration after the nucleotides ' application was 93% for ap4a , 119% for up4u and 67% for utp ( figure 3b ) . the activity of the substances on tear lysozyme levels suggests the activation of p2 nucleotide receptors . to confirm this , the effects of three non - selective nucleotide receptor antagonists on lysozyme levels were studied : ppads , suramin and rb-2 . the diadenosine tetraphosphate effect on lysozyme production was significantly reversed by the antagonist ppads , returning the lysozyme concentration to control values . neither of the other two antagonists , rb-2 or suramin , was able to modify the effect triggered by ap4a . up4u was antagonized by ppads and suramin , the reduction being 42% and 52% for ppads and suramin , respectively . moreover , in the case of utp , rb-2 produced a potentiation of the effect when compared with the effect of this nucleotide alone , showing an increase of almost 25% above the lysozyme concentration obtained when utp is applied alone . this study shows for the first time the physiological effect of ap4a , up4u and utp on tear lysozyme levels in new zealand white rabbits . all the tested nucleotides produced an increase in lysozyme concentrations after a single - dose application , which was measurable for several hours after the topical application of the compounds . since lysozyme is , together with lactoferrin , one of the main proteins involved in the protective tear film barrier against ocular infection , an increase in their levels may , in general , enhance the bactericidal action of tears , preserving the health of the ocular surface . the presence of nucleotides and dinucleotides in the tear suggests their involvement in several relevant physiological processes of the ocular surface . we have added to the list of biological actions of nucleotides and dinucleotides the ability of these compounds to increase the tear concentrations of lysozyme . it is moreover relevant that important changes in the concentration of this enzyme occur when up4u or utp are topically applied . these two nucleotides have not been described in tears but they behave similarly to ap4a , suggesting that all may activate the same p2y purinoceptor subtype . in this sense , the three tested nucleotides have been claimed to be quite selective for the p2y2 purinergic receptor ( lazarowski et al 1995 ; mundasad et al 2001 ) .
nonetheless , the results obtained by means of the non - selective p2 receptor antagonists depict a different pattern depending on the tested nucleotide . in the case of ap4a , the only effective antagonist was ppads , while in the case of up4u , apart from ppads , suramin was also able to partially reverse the dinucleotide action . one may expect both nucleotides to stimulate the same receptor or group of receptors ( see the similarities in the structures shown in figure 1 ) . taking into account our results , it seems that ap4a and up4u share some p2y receptor activation , but in the case of up4u it may activate more than one receptor , since its increase in lysozyme is partially blocked by suramin while the ap4a effect is not . this may be explained at least in part by the presence of several p2y receptors in the lachrymal gland and cornea ( cowlen et al 2003 ; pintor , bautista et al 2004 ; pintor , sanchez - nogueiro et al 2004 ) , and also by the fact that up4u is known to activate at least p2y2 and p2y4 receptors ( brunschweiger and muller 2006 ) . the effect the p2 receptor antagonists display on utp is noteworthy . although one should expect that rb-2 might block the effect of utp , the observed behavior of this antagonist shows the opposite effect . reactive blue 2 has been described by many authors as a very potent inhibitor of all plasma membrane - bound ntpdases ( mateo et al 1996 , 1997 ) . since rb-2 inhibits ecto - enzymes and does not antagonize p2y receptors , it is reasonable to see a potentiation of the utp effect , since utp will not be degraded as fast as when it is applied alone . a decrease in lysozyme levels has been described with age ( mackie and seal 1976 ) as well as in other pathological states that run with a drop - off in tear secretion , such as keratoconjunctivitis sicca or sicca syndrome ( mackie and seal 1984 ) . therefore , in these cases , it may be useful to incorporate any of the indicated nucleotides as a pharmacological treatment to reinforce the barrier against opportunistic infections . recently , apart from the bactericidal role , tragoulias et al ( 2005 ) and millar et al ( 2006 ) have suggested that lysozyme contributes to lowering the surface tension of the tear film . in this case , an increase in lysozyme levels could contribute to stabilizing the tear film , thus avoiding excessive evaporation and the resulting problems on the ocular surface . in summary , we have shown that the p2y receptor agonists ap4a , up4u and utp are able to increase the tear concentrations of lysozyme . due to the importance of this protein , these nucleotides may be used to fortify the tear film barrier by increasing a naturally occurring defender against some ocular surface infections .
the present work studies the effects of topical application of nucleotides on rabbit tear lysozyme levels . lysozyme values were determined by the diffusion in agar method described by van bijsterveld in 1974 , and the protein amount was obtained by measuring the inhibitory halos around a whatman n1 paper disc of 5 mm in diameter . the tested nucleotides were utp , ap4a and up4u . these compounds were topically instilled in a single - dose in one eye ( with the contralateral eye as a control ) and the lysozyme halos were measured along 5 hours . the obtained results showed an increase in the lysozyme concentrations of 67% , 93% , and 119% for utp , ap4a , and up4u , respectively , over the basal levels of lysozyme . for this reason , we suggest these molecules as a potential treatment for the reinforcement of the tear film barrier against ocular infection .
Introduction Experimental procedures Animals Compounds Diffusion in agar Statistical analysis Results Discussion
PMC4855966
it has been estimated that 16.8% of live births across the world in 2013 were in women who had some form of hyperglycemia in pregnancy . in india , gestational diabetes mellitus ( gdm ) has been estimated to affect over 5 million women . gdm poses serious health consequences for mother and the baby both in the short and in the long - term . in addition , this risk starts even at maternal glucose levels below those traditionally considered as diagnostic of gdm . however , treatment of maternal hyperglycemia has been shown to reduce this risk almost to the level seen in women without gdm . although most women with gdm usually return to the normoglycemic state shortly after childbirth , they still have 7 times higher risk of developing type 2 diabetes ( t2 dm ) in future . accurate and timely diagnosis of gdm , therefore , provides a window of opportunity for intervention to reduce the growing burden of t2 dm . prioritizing postpartum care and continued follow - up will help to prevent / delay the onset of t2 dm . this is particularly relevant in india , which already suffers from a massive burden of t2 dm . though the importance of recognizing and managing gdm is now well - accepted , there are no universally accepted criteria for the screening and diagnosis of this condition . despite several guidelines laid down by various organizations , some parts of the world follow risk - based screening whereas others follow universal screening . there is also no consensus on the diagnostic test to be used and the glucose cut - offs to be applied . use of different criteria makes the accurate estimation of prevalence of gdm difficult and raises the possibility of over- and under - diagnosis of the condition . in addition , in india , care of women with gdm is carried out by a variety of healthcare professionals ( hcp ) . they face unique challenges in their interactions with pregnant women , leading them to favor one diagnostic approach over the other , even though such decisions may not be based on sound scientific evidence . we , therefore , attempted to understand the perceptions and practices of two important categories of hcps involved in the care of gdm in india ( physicians / diabetologists / endocrinologists and obstetricians / gynecologists [ ob / gyns ] ) , regarding diagnosis , management , and follow - up of gdm . the results from this study will help to gauge the different screening methods and diagnostic criteria employed in india for the diagnosis and management of gdm , and thus understand the gaps and highlight target areas for improvement with regard to provision of care for women with gdm . the present study aims to obtain information on existing practices for the diagnosis and management of gdm among physicians / diabetologists / endocrinologists and ob / gyns from different parts of india . the present study aims to obtain information on existing practices for the diagnosis and management of gdm among physicians / diabetologists / endocrinologists and ob / gyns from different parts of india . a nationwide survey on the practice patterns with respect to diagnosis and management of gdm was carried out covering physicians / diabetologists / endocrinologists and ob / gyns from 24 states of india . data collection involved two methods : self - completed questionnaires and an online web - based software . questionnaires were handed over directly to the doctors to be filled on the spot and returned . 
the online questionnaire was completed through customized web - based software called wings survey , created by the madras diabetes research foundation . the frontend software was developed using microsoft asp.net , and the data collected were stored in a microsoft sql server 2000 database . the questionnaire addressed the different screening techniques employed ; gdm diagnostic guidelines and diagnostic cut - offs based on blood glucose levels ; management and follow - up ; pharmacotherapy ; and postpartum follow - up . several of the questions allowed multiple responses , leading to the reported percentages adding up to more than 100% . a total of 3841 doctors ( 2020 physicians / diabetologists / endocrinologists and 1821 ob / gyns ) participated in the survey . the survey covered 24 states of india including the national capital territory of delhi and two union territories , chandigarh and puducherry . figure 1 shows the state - wise percentage distribution of physicians / diabetologists / endocrinologists and ob / gyns who participated in the survey ( figure 1 : percentage of respondents in different states of india ) . more than half of the doctors who participated in the survey practiced in private clinics and hospitals whereas the rest worked in multispecialty hospitals and government hospitals [ table 1 ] ( table 1 : type of institution where the doctors practised ) . the vast majority of the ob / gyns ( 84.9% ) screened all pregnant women for gdm , i.e. , universal screening , while 14.5% preferred to do only risk - based screening . the rest ( 0.6% ) reported that they do not screen for gdm in pregnant women [ figure 2 ] ( figure 2 : universal versus risk - based screening ) . most of the ob / gyns performed screening for gdm in the first trimester ( 18.8% at booking and 49% between 8 and 20 weeks ) . the screening was performed between 20 and 28 weeks by 40% and after 28 weeks by 2.8% . this question allowed multiple responses , and hence , the percentages reported add up to more than 100% . among the 1634 ( 89.7% ) ob / gyns who responded to this question , 600 ( 36.7% ) reported using the diabetes in pregnancy study group india ( dipsi ) criteria , 403 ( 24.7% ) the world health organization ( who ) 1999 criteria , 389 ( 23.8% ) the international association for diabetes and pregnancy study groups ( iadpsg ) criteria , and 242 ( 14.8% ) the american diabetes association ( ada ) 2-step method ( 50 g glucose challenge test followed by 100 g 3 h glucose tolerance test with the cut - offs proposed by carpenter and coustan or the national diabetes data group ) . among the physicians / diabetologists / endocrinologists , 1903 ( 94.2% ) responded to this question , out of whom 560 ( 29.4% ) reported using the dipsi criteria , 428 ( 22.5% ) the who 1999 criteria , 364 ( 19.1% ) the iadpsg criteria , and 551 ( 29% ) the ada criteria . however , as shown in table 2 , responses to subsequent questions on the type of blood sample collected , the glucose load used , and the cut - offs as per the criteria revealed that 54.9% of ob / gyns and 54.7% of diabetologists / endocrinologists did not correctly follow any of the criteria ( table 2 : screening criteria for gestational diabetes mellitus followed by the obstetricians / gynecologists and physicians / diabetologists / endocrinologists ) . when women required pharmacological treatment to manage their gdm , 1019 ( 50.4% ) of diabetologists / endocrinologists said they used insulin for all women with gdm , 758 ( 37.5% ) said they used insulin only for some , and 243 ( 12.1% ) reported using no insulin at all .
among those who preferred using oral hypoglycemic agents , metformin was used by 1087 ( 53.8% ) of diabetologists / endocrinologists , sulfonylureas by 72 ( 3.6% ) , and a combination of the two by 247 ( 12.2% ) whereas 591 ( 29.3% ) reported no use of oral hypoglycemic agents ( ohas ) and 23 ( 1.1% ) reported using other ohas such as alpha - glucose inhibitors . the majority of ob / gyns ( 78.3% ) routinely delivered women with gdm before 38 weeks gestation whereas 10.9% waited beyond 38 weeks and 10.8% up to 40 weeks . fifty - six percent of physicians / diabetologists / endocrinologists and 71.6% ob / gyns said they advised women with gdm to undergo oral glucose tolerance testing ( ogtt ) after delivery . ogtt was advised within 6 weeks of delivery by 42.4% of diabetologists / endocrinologists and 44.2% of ob / gyns , and between 6 and 2 weeks after delivery by 48% of diabetologists / endocrinologists and 49.4% of ob / gyns .
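a small sketch of how reported versus correctly applied criteria could be tabulated from the raw survey responses is given below ; the data frame , column names and coding are hypothetical and only illustrate the kind of cross - tabulation behind table 2 .

import pandas as pd

# hypothetical coding of two survey fields: the guideline a respondent says they
# follow, and whether their sample type / glucose load / cut-offs actually match it
responses = pd.DataFrame({
    "specialty": ["ob_gyn", "ob_gyn", "physician", "physician", "ob_gyn"],
    "reported_criteria": ["DIPSI", "IADPSG", "DIPSI", "ADA", "WHO1999"],
    "correctly_applied": [True, False, False, True, False],
})

summary = (responses
           .groupby(["specialty", "reported_criteria"])["correctly_applied"]
           .agg(n="size", pct_correct="mean"))
summary["pct_correct"] = (100 * summary["pct_correct"]).round(1)
print(summary)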
it is evident from our survey that the majority of doctors in india prefer universal screening for gestational diabetes . this is consistent with findings from an online survey conducted by divakar and manyonda in 2011 among physicians in india and is in line with the dipsi guidelines , which favor universal screening as well as with many of the international guidelines , which recommend that all women of high - risk ethnicity be screened for gdm . as the prevalence of gdm is almost 11-fold higher in indian women when compared to their caucasian counterparts , universal screening is an essential tool in india to ensure that no case of gdm or preexisting diabetes is missed out . it is heartening to note that the majority of ob / gyns screen their antenatal patients for diabetes in the first trimester itself . delaying screening till the second trimester carries the risk of missing preexisting ( pregestational ) diabetes , especially in a population such as india , where the background prevalence of t2 dm is high . it would , however , be ideal if all pregnant women ( as opposed to 18.8% in the present survey ) were to be screened at the first booking visit itself as recommended by the guidelines . the multiplicity of available criteria for the diagnosis of gdm , and frequent changes in recommendations made by international organizations , have led to confusion among hcps as to the best screening test to be used for gdm . this is reflected in the results of our survey where a variety of guidelines were reported to be used by ob / gyns and physicians / diabetologists / endocrinologists for diagnosing gdm . what is even more worrisome is that the majority of hcps applied these criteria incorrectly , raising the possibility of over- and under - diagnosis of gdm in india . for example , 36.7% ob / gyns and 29.4% physicians / diabetologists / endocrinologists said they used the dipsi criteria , but in reality , only 12.7% and 3.8% , respectively , used the dipsi criteria . our results highlight the need for creating greater awareness among hcps regarding the currently accepted guidelines for the diagnosis of gdm so that the majority of women with gdm are accurately identified and appropriately managed . insulin is the first - line pharmacologic therapy for gdm ; however , there is some evidence that ohas such as metformin are safe in pregnancy . decision - making with respect to the initiation of insulin is often driven by patient preference .
data from our survey shows that more than half of the physicians / diabetologists / endocrinologists preferred to use insulin for women with gdm . when ohas were preferred , 53.8% reported the use of metformin , 3.6% used sulfonylurea alone , and 12.2% used both . it is a matter of some concern that 1.1% of all physicians / diabetologists / endocrinologists reported using other ohas during pregnancy ( none of which have been approved for use in this setting ) . there is little consensus regarding the preferred method and timing of delivery in women with gestational diabetes , which is usually based on expert opinion . in the absence of any maternal or fetal complications , ob / gyns routinely deliver at 40 weeks of gestation . however , our survey reports that the majority of the ob / gyns deliver their patients before 38 weeks , presumably to minimize the risk of sudden intrauterine death during the third trimester , which has been shown to occur despite intensive fetal surveillance . women with gdm are also at risk of developing gdm at an earlier stage during subsequent pregnancies . the ada recommends that women with gdm should undergo screening for t2 dm with ogtt 6 weeks after delivery . although guidelines for postpartum care are established , data from our survey reports that not all diabetologists / endocrinologists and ob / gyns advise postpartum ogtt . a possible explanation could be that many physicians do not prioritize these guidelines in practice or may view another health practitioner as responsible for the follow - up . omitting this important piece of advice will lead to the majority of women with gdm remaining unaware of their glycemic status postdelivery , and thereby running the risk of entering another pregnancy with undiagnosed diabetes , as well as accumulating a prolonged duration of unrecognized , uncontrolled hyperglycemia . both physicians / diabetologists / endocrinologists and ob / gyns have an enormous responsibility for providing proper care and management for women with gdm . this large survey , conducted all over india and covering 24 states , reveals that more than half of the diabetologists / endocrinologists and ob / gyns in india do not follow any of the recommended guidelines for the diagnosis of gdm , possibly due to lack of awareness about these guidelines . this emphasizes the need for increased awareness about screening and diagnosis of gdm both among physicians and ob / gyns . proper educational intervention to address these gaps will help doctors to understand their role in promoting better pregnancy outcomes .
aim : to obtain information on existing practices in the diagnosis and management of gestational diabetes mellitus ( gdm ) among physicians / diabetologists / endocrinologists and obstetricians / gynecologists ( ob / gyns ) in india . methods : details regarding diagnostic criteria used , screening methods , management strategies , and the postpartum follow - up of gdm were obtained from physicians / diabetologists / endocrinologists and ob / gyns across 24 states of india through online and in - person surveys using a structured questionnaire . results : a total of 3841 doctors participated in the survey , of whom 68.6% worked in private clinics . the majority of ob / gyns ( 84.9% ) preferred universal screening for gdm , and screening in the first trimester was performed by 67% of them . among the ob / gyns , 600 ( 36.7% ) reported using the nonfasting 2 h criteria for diagnosing gdm whereas 560 ( 29.4% ) of the diabetologists / endocrinologists reported using the same . however , further questioning on the type of blood sample collected and the glucose load used revealed that , in reality , only 208 ( 12.7% ) and 72 ( 3.8% ) , respectively , used these criteria properly . the survey also revealed that the international association of diabetes and pregnancy study groups criteria were followed properly by 299 ( 18.3% ) of ob / gyns and 376 ( 19.7% ) of physicians / diabetologists / endocrinologists . postpartum oral glucose tolerance testing was advised by 56% of diabetologists and 71.6% of ob / gyns . conclusion : more than half of the physicians / diabetologists / endocrinologists and ob / gyns in india do not follow any of the recommended guidelines for the diagnosis of gdm . this emphasizes the need for increased awareness about screening and diagnosis of gdm both among physicians / diabetologists / endocrinologists and ob / gyns in india .
I Aim M R Primary institution of practice Universal screening versus risk-based screening Mean week of gestation at which screening was performed Guidelines used to diagnose gestational diabetes mellitus Use of insulin and oral hypoglycemic agents for treatment Timing of delivery of women with gestational diabetes mellitus Postpartum follow-up of women with gestational diabetes mellitus D C Financial support and sponsorship Conflicts of interest
PMC4344839
the study was conducted in a wintering population of tits in wytham woods , u.k . 1018 nest - boxes suitable for great tits are installed at this site , with the vast majority of great tits breeding in boxes . individuals are trapped as nestlings and breeding adults at nest - boxes and fitted with both a british trust for ornithology metal leg ring and a plastic leg ring containing a uniquely identifiable passive integrated transponder ( pit ) tag ( ib technology , aylesbury , u.k . ) . there is a further mist - netting effort over autumn and winter to tag individuals immigrating into the population , and we estimate that over 90% of individuals were pit - tagged at the time of the study . in this population , great tits form loose fission - fusion flocks of unrelated individuals in autumn and winter . flocks congregate at patchy food sources , and can be observed at bird feeders fitted with pit - tag detecting antennae . experiments were conducted in eight sub - populations within wytham woods that had relatively little short - term between - area movement of individuals ( extended data fig . ) . the experimental apparatus consisted of an opaque plastic box with a perch positioned in front of a door that could be slid to either side with the bill to gain access to a feeder concealed behind . the left side of the door was colored blue and the right side red , with a raised front section on the door to allow an easier grip . the concealed feeder contained approximately 500 live mealworms and was refilled up to twice daily . as live mealworms were used , solvers typically extracted one worm and then carried it away from the puzzle - box to kill and eat it ( confirmed with video observations ; supplementary video 1 - 2 ) . each puzzle - box was surrounded by a 1 x 1 m cage with a 5 x 5 cm mesh that gave unlimited access to small birds , but prevented access by large non - target species such as corvids or squirrels . a freely accessible bird feeder filled with peanut granules was also provided in the cage , at approximately 1 m from the puzzle - box . peanut granules are a much less preferred food source ( extended data fig . 5 ) . each peanut feeder had two access points fitted with rfid antennae and data - logging hardware . this feeder was used to attract the original demonstrator to the location , and to record the identity of individuals that did not contact the puzzle - box . all puzzle - boxes contained a printed circuit board ( pcb ) and motor , and were powered by a 12 v sealed battery . the perch also functioned as an rfid antenna that registered the visit duration ( time to nearest second ) and identity of the visiting individual . a solve was recorded if the door was opened during an individual 's visit to the device , with the side ( direction ) also noted . if a solution occurred without an accompanying identified individual , this was recorded as an unidentified solve . if further individuals visited before the door reset , then a scrounge was recorded , as they were assumed to have taken food from the open door ( confirmed from video observations ) . the door reset immediately after two individuals were registered scrounging , preventing more than two possible scrounging events per solve ( supplementary video 2 ) . two males were captured from each sub - population ( 11 adults , 5 juveniles ) to act as demonstrators , either by removal from roosting boxes on sunday night , or by mist - netting at a sunflower - seed feeder on monday morning .
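the solve / scrounge classification described above boils down to a small amount of event logic . the python sketch below is a simplified reconstruction of that logic ( not the authors ' firmware ) , with field names of our own choosing .

def classify_visits(events):
    """Classify a time-ordered stream of puzzle-box events.

    events: list of dicts like {"bird": "PIT123", "door_opened": True, "side": "blue"}
            (one entry per registered visit; field names are ours, not the authors')
    Returns a list of (bird, label) tuples following the rules in the text:
    a visit that opens the door is a solve; subsequent visitors before the door
    resets are scrounges; the door resets after two scrounges.
    """
    records, door_open, scroungers = [], False, 0
    for visit in events:
        bird = visit.get("bird") or "unidentified"
        if visit.get("door_opened"):
            records.append((bird, f"solve_{visit.get('side', 'unknown')}"))
            door_open, scroungers = True, 0
        elif door_open:
            records.append((bird, "scrounge"))
            scroungers += 1
            if scroungers == 2:          # door resets after two scrounging events
                door_open, scroungers = False, 0
        else:
            records.append((bird, "visit"))
    return records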
they were transferred to individual cages in indoor captive facilities , and over four days each pair of birds was subjected to one of three training regimes using step - wise shaping : ( i ) given no training and left in the cage with ad lib food ( control ) ; ( ii ) trained to solve the novel puzzle - box by pushing the blue side of the door to the right ( option b ) ; or ( iii ) trained to solve the novel puzzle - box by pushing the red side of the door to the left ( option a ) . with the exception of control areas , which were clustered in the south of the woodland to avoid cross - contamination , sub - populations were randomly assigned to a training regime , with both demonstrators from a single sub - population trained on the same technique . during training , the demonstrators were initially exposed to an open puzzle - box baited with mealworms , which was then gradually closed over the course of four days until the subjects were reliably re - opening it . the birds were released back at the site of capture in each respective sub - population ; puzzle - boxes at which both options were available and equally rewarding were installed at three sites 250 m apart on the following sunday night ( extended data fig . ) . these puzzle - boxes were run over a four - week period at each site , continuously operating from monday to friday and then removed on saturday and sunday , for a total of 20 days of data collection . four replicates were conducted in the first year of data collection ( december 2012 - february 2013 ; c1 - 2 , t1 , t3 ) . at three of these replicates ( c1 , t1 , t3 ) , puzzle - boxes were simultaneously re - installed at the same locations for 5 days of further data collection in december 2013 . no additional demonstrators were trained , and no individual had contact with the puzzle - box in the 9 months between the two data collection periods . this second exposure aimed to test the long - term stability of social learning at the sub - population level . these persistence trials were run prior to the second year of data collection for the cultural diffusion experiment in order to exclude the possibility that dispersing individuals from new replicates could be re - introducing the novel behaviour . an additional four replicates were then conducted from december 2013 - february 2014 in new sub - populations , using the same initial protocol ( c3 , t2 , t4 , t5 ) . the local population size for each replicate was defined as comprising all individuals in a replicate that had been recorded at least once at either : ( i ) the puzzle - box , ( ii ) the nearby peanut feeder , or ( iii ) the nearest network - logger feeders ( operated saturday - sunday , see below ) , during the experimental period ( i.e. from the weekend following the release of the demonstrators to the weekend after the 20th day of operation of the puzzle - boxes ) . for the persistence trial in the following year , the local population was defined as just ( i ) all individuals observed at the puzzle - box or ( ii ) the nearby peanut feeder , so that areas were comparable . to analyse the results of the initial experiment we first compared control replicates and treatment replicates , using welch two - sided t - tests and by fitting linear and sigmoidal models to the data , with the best model ascertained by the difference in aic values .
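the control - versus - treatment comparison mentioned above can be reproduced with a welch two - sided t - test ; the sketch below uses invented latencies purely to show the call , not the study 's data .

from scipy.stats import ttest_ind

# latency to first solve (s) per replicate -- illustrative numbers only
control_latency = [86400, 121500, 99800]
treatment_latency = [3600, 5200, 2900, 4100, 4800]

# Welch's two-sided t-test (unequal variances), as used to compare
# control and treatment replicates
t_stat, p_value = ttest_ind(treatment_latency, control_latency, equal_var=False)
print(round(t_stat, 2), round(p_value, 4))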
if individuals were using social information when learning about the puzzle - box , then we expected that there would be a difference between areas seeded with a trained demonstrator ( treatment ) and those without ( control ) . replicates were thus compared in terms of latency to first solve ( seconds from the beginning of the experimental period , excluding demonstrators ) and the total number of solutions . secondly , we compared the total number of solutions in the two different experimental treatments . here , if a more complex form of social learning than local enhancement at the feeding site was occurring , then we expected a consistent bias towards the seeded variant in the different treatments . to analyse the change in individual and population preferences for option a or b over time , we used a generalised estimating equation ( gee ) model where the dependent variable was the proportion of solutions using the seeded technique on each day of data collection , and the explanatory variables were the individuals and replicate , weighted by the overall number of solutions per day . the seeded technique ( a / b ) was initially also included as an explanatory variable , but was not significant ( coefficient ± se = 0.13 ± 0.22 , p = 0.55 ) . three individual variables were included in the gee model : sex , age , and natal origin . sex was determined at capture using plumage coloration , age was determined either from breeding records or from plumage coloration , and individuals were classed as immigrants if they had dispersed into the study site and as locally born if they had been ringed as a nestling in the study site . only age was significant ( coefficient ± se = 0.92 ± 0.20 , p < 0.001 ) and was included in the final model ( sex : coefficient ± se = 0.38 ± 0.22 , p = 0.08 ; natal origin : coefficient ± se = 0.38 ± 0.22 , p = 0.08 ) . if population - level conformity was partly the result of a conformist transmission bias at first acquisition , we would expect a sigmoidal relationship between the population - level frequency of a variant and its adoption probability , with adoption of the majority variant disproportionately more likely than predicted by its absolute frequency . by contrast , copying the last individual observed , or random copying , should yield a linear relationship , with the probability of adopting option a / b roughly equal to its proportion in the overall population . to investigate this , we isolated each individual 's first observed solution in all experimental replicates , and compared the option chosen to the proportion of all previous solves performed as option a in the individual 's group at that site . group length was set at 245 sec , which was the average group length observed using gaussian mixture models on temporal patterns of flocking ( see below ) at network - logging sunflower feeders . both linear and sigmoidal models were then fitted to the data , with the best model ascertained by difference in aic values . we further examined the subset of individuals that moved between sub - populations ( n=40 ) . this subset included all individuals recorded in more than one experimental replicate , whether within the season ( n=16 ) or between seasons ( n=24 ) .
no individual was observed in more than two replicates , and this analysis did not include individuals in the persistence trial . a preference for option a / b at each location was defined as more than 75% of all solves being for either option a / b in that replicate . finally , in order to analyse the change in within - individual bias towards option a / b between the initial experiment and the second - year persistence trials , we used a general linear model where the dependent variable was the number of solves performed as the seeded variant over the total number of solves for each individual observed in both years . explanatory variables were treatment type and year , with individual identity as a random effect . sunflower bird - feeding stations were deployed at 65 locations around wytham woods on an approximate 250 × 250 m square grid , as part of long - term research into social - network structure in tits ( see ) . each station had two access points , each fitted with rfid antennae and data - logging hardware . feeding stations automatically opened from dawn to dusk on saturday and sunday , scanning for pit - tags every 1/6 of a second . this study used the data from the eight nearest locations to each set of puzzle - boxes , for 10 dates within and surrounding the cultural diffusion experiment ( the standard logging protocol runs from september to february in wytham woods ) . great tits visiting feeding stations were detected and individually identified by their pit - tags . we then applied a gaussian mixture model to the spatiotemporal data stream to detect distinct clusters of visits . this method locates high - density periods of feeding activity , isolating flocks of feeding birds without imposing artificial assumptions about group boundaries . a gambit - of - the - group approach was used with a simple - ratio index to calculate social associations , where individual association strengths ( network edges ) were scaled from 0 ( never observed foraging together in the same group ) to 1 ( always observed in the same group , never observed apart ) . while a single co - occurrence may not be meaningful , our automated data collection method resulted in thousands of repeated group sampling events , allowing social ties between individuals to be built up from multiple observations of co - occurrences over time and across spatial locations . networks contained 123 ( t1 ) , 137 ( t2 ) , 154 ( t3 ) , 95 ( t4 ) and 110 ( t5 ) nodes ; average edge strength was 0.09 ( t1 ) , 0.05 ( t2 ) , 0.08 ( t3 ) , 0.07 ( t4 ) and 0.07 ( t5 ) . to test whether networks contained significantly preferred and avoided relationships , we ran permutation tests on the grouping data , controlling for group size and the number of observations and restricting swaps within days and sites . we tested whether observed patterns of associations were non - random by comparing the coefficient of variance in the observed network to the coefficient of variance in the randomised networks . social networks for all replicates significantly differed from random , even at local scales ( t1 : p<0.0001 ; t2 : p=0.0005 ; t3 : p<0.0001 ; t4 : p=0.0002 ; t5 : p=0.0002 ) . finally , we used network - based approaches to ask whether the behaviour was socially transmitted through foraging associations . network - based diffusion analysis ( nbda ) tests for social learning by assuming that if social transmission is occurring , then the spread of trait acquisition should follow patterns of relationships between individuals , with transmission rate linearly proportional to association strength . we used the nbda r code v1.2 , with the time of each individual 's first solution ( seconds since the beginning of the experiment ) entered into the continuous time of acquisition analysis function .
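Before the NBDA analysis continues below, here is a simplified sketch of the association-network construction and the null-model comparison described above: a simple-ratio index computed under the gambit of the group, and a permutation test comparing the coefficient of variation of observed edge weights against randomised networks. The swap scheme here only preserves group sizes (the published analysis additionally restricted swaps within days and sites), and all names and data are illustrative.

```python
# Simplified sketch: simple-ratio index network and a CV-based permutation test.
import itertools
import numpy as np

def simple_ratio_index(groups):
    """groups: list of sets of bird IDs, one set per detected grouping event.
    Returns {(a, b): SRI} under the 'gambit of the group'."""
    birds = sorted(set().union(*groups))
    sri = {}
    for a, b in itertools.combinations(birds, 2):
        together = sum(1 for g in groups if a in g and b in g)
        either = sum(1 for g in groups if a in g or b in g)
        sri[(a, b)] = together / either if either else 0.0
    return sri

def cv(values):
    values = np.asarray(list(values), dtype=float)
    return values.std() / values.mean()

def permutation_test(groups, n_perm=1000, rng=np.random.default_rng(1)):
    """Compare the CV of observed edge weights against networks rebuilt from
    shuffled group memberships (a simplified swap that only keeps group sizes)."""
    observed_cv = cv(simple_ratio_index(groups).values())
    all_birds = [b for g in groups for b in g]
    null_cvs = []
    for _ in range(n_perm):
        shuffled = rng.permutation(all_birds)
        i, random_groups = 0, []
        for g in groups:                                  # keep each group's size
            random_groups.append(set(shuffled[i:i + len(g)]))
            i += len(g)
        null_cvs.append(cv(simple_ratio_index(random_groups).values()))
    p = float(np.mean([n >= observed_cv for n in null_cvs]))
    return observed_cv, p

# toy usage
groups = [{"A1", "B2", "C3"}, {"A1", "B2"}, {"B2", "C3", "D4"}, {"A1", "D4"}]
obs_cv, p = permutation_test(groups, n_perm=200)
```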
individuals that solved , but that did not appear in the social network ( i.e. had not been recorded in the standardised weekend logging ) were excluded from the analysis . the effects of three individual level variables were also incorporated into the analysis : sex , age , and natal origin . all combinations of nbda provided in the nbda r code v1.2 were run with social transmission rate allowed to vary for each replicate .
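The following is a heavily simplified sketch of the likelihood idea behind continuous time-of-acquisition NBDA: while naive, each bird's solving rate is assumed to be lambda0 * (1 + s * sum of its association strengths to already-informed birds). This is illustrative only and is not the NBDA R code used in the study, which additionally handles individual-level variables and the full set of model variants; the toy data and starting values are assumptions.

```python
# Simplified continuous-time NBDA likelihood (illustrative, not the NBDA package).
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(params, adjacency, solve_times, demonstrators, end_time):
    """adjacency: (n, n) association matrix; solve_times: {index: time} for birds
    that solved (excluding demonstrators); demonstrators: indices informed at t=0;
    end_time: end of observation (censoring)."""
    log_lambda0, log_s = params
    lam0, s = np.exp(log_lambda0), np.exp(log_s)          # keep rates positive
    n = adjacency.shape[0]
    informed = np.zeros(n, dtype=bool)
    informed[list(demonstrators)] = True
    events = sorted(solve_times.items(), key=lambda kv: kv[1])
    nll, t_prev = 0.0, 0.0
    for learner, t in events + [(None, end_time)]:
        rates = lam0 * (1.0 + s * adjacency[:, informed].sum(axis=1))
        nll += np.sum(rates[~informed]) * (t - t_prev)    # survival term for naive birds
        if learner is not None:
            nll -= np.log(rates[learner])                 # hazard of the observed acquisition
            informed[learner] = True
        t_prev = t
    return nll

# toy usage: bird 0 is the demonstrator, birds 1 and 2 solve later
A = np.array([[0.0, 0.4, 0.1],
              [0.4, 0.0, 0.2],
              [0.1, 0.2, 0.0]])
fit = minimize(neg_log_likelihood, x0=[-8.0, 0.0],
               args=(A, {1: 3600.0, 2: 40000.0}, [0], 86400.0))
lambda0_hat, s_hat = np.exp(fit.x)
```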
extended data figure legends . total area of wytham woods is 385 ha ; location and size of the separate woodland areas within this are labeled on the map . ( d ) indicates locations where trained demonstrators were caught from and released to . a , feeding station ( shut ) , with sunflower - feeder , rfid antennae , and data - logging hardware . stations are approximately 250 m apart and open simultaneously dawn - dusk on saturday and sunday over winter . c , grouping events are inferred from the temporal data stream gained from feeding stations , with individuals assigned to grouping events in a bipartite network . d , repeated co - occurrences are used to create social networks ( adapted from psorakis et al . ) . red nodes are individuals that acquired the novel behaviour after 20 days of exposure , black nodes are naive individuals and yellow nodes are trained demonstrators . networks are heavily thresholded to only show links above the average edge strength for each replicate ( t1 - 5 : 0.09 , 0.05 , 0.08 , 0.07 , 0.07 ) . a , social network for t1 replicate ( n=123 ) . e , network for t5 replicate ( n=110 ) . only individuals that performed both options are included , and individuals that moved between replicates are excluded . lines are running proportions of the seeded variant for each individual over its last 10 visits . a , t1 ( option a ) , n=30 ; b , t2 ( option a ) , n=10 ; c , t3 ( option b ) , n=19 ; d , t4 ( option b ) , n=4 ; e , t5 ( option b ) , n=15 . birds were presented with a freely available mix of 40 mealworms , peanut granules and sunflower seeds for 1 hr on 2 days over 1 week at 6 sites ( 3 sites in t4 and t2 ) . trials were conducted 2 weeks after the end of the main experiment , in march 2014 . food choice was identified from video camera footage , and the trial was halted when all of one prey item was taken . only great tits were included , but birds could not be individually identified . birds clearly preferred the live mealworms to either peanut granules or sunflower seeds .
in human societies , cultural norms arise when behaviours are transmitted with high - fidelity social learning through social networks1 . however a paucity of experimental studies has meant that there is no comparable understanding of the process by which socially transmitted behaviours may spread and persist in animal populations2,3 . here , we introduce alternative novel foraging techniques into replicated wild sub - populations of great tits ( parus major ) , and employ automated tracking to map the diffusion , establishment and long - term persistence of seeded behaviours . we further use social network analysis to examine social factors influencing diffusion dynamics . from just two trained birds in each sub - population , information spread rapidly through social network ties to reach an average of 75% of individuals , with 508 knowledgeable individuals performing 58,975 solutions . sub - populations were heavily biased towards the technique originally introduced , resulting in established local arbitrary traditions that were stable over two generations , despite high population turnover . finally , we demonstrate a strong effect of social conformity , with individuals disproportionately adopting the most frequent local variant when first learning , but then also continuing to favour social over personal information by matching their technique to the majority variant . cultural conformity is thought to be a key factor in the evolution of complex culture in humans4 - 7 . in providing the first experimental demonstration of conformity in a wild non - primate , and of cultural norms in foraging techniques in any wild animal , our results suggest a much wider evolutionary occurrence of such apparently complex cultural behaviour .
Materials and Methods Study Population and Area Puzzle-box Design Experimental Procedure Data Analysis Network Data Collection and Analysis Extended Data Supplementary Material
PMC4890918
a stepwise model of development , from well - differentiated precursors to poorly differentiated progressed hepatocellular carcinomas ( hccs ) , is well established by evidence accumulated over the past three decades . in 1995 and 2009 , internationally reproducible criteria for the diagnosis of nodular lesions of hepatocarcinogenesis were developed through the remarkable endeavors of pathologists from the east and the west . at present , the criteria involve not only pathologic but also radiologic features , especially hemodynamic findings . for example , progressed hccs are defined as radiologically hypervascular lesions without portovenous supply which histologically appear moderately or poorly differentiated ; early hccs and dysplastic nodules are defined as iso- or hypovascular lesions with portovenous supply on radiological images which appear well differentiated on histology [ 1 , 2 ] . thus , an intimate knowledge of the relations between tumor hemodynamics and hepatocarcinogenesis would be useful for the management of carcinogenic nodules as well as for a better understanding of multistep models of hepatocarcinogenesis . this review focuses on radiologic hemodynamic features of carcinogenic hepatocyte nodules arising from cirrhotic livers , that is , dysplastic nodules , early hccs , and progressed hccs , paying close attention to radio - pathological correlations and outcomes of the nodules . on histology , the blood circulation is not kinetically depicted , but tumor blood inflows can still be analyzed through quantification of the inflow vessels with morphometric techniques . the pathology of inflows of carcinogenic hepatocyte nodules has been morphometrically evaluated by two methods : counting the number of inflow vessels and measuring their luminal areas . there are three types of inflow vessels of carcinogenic hepatocyte nodules : portal veins , hepatic arteries , and abnormal arteries , which are termed unpaired or nontriadal arteries ( figures 1 and 2 ) [ 3 - 10 ] . the former two , portal vein and hepatic artery , are accompanied by bile ducts running within the portal tracts , whereas the last , the unpaired artery , is not accompanied by bile ducts or portal veins and runs independently outside the portal tracts . it is now considered that the unpaired arteries are new vessels developed through neovascularization during hepatocarcinogenesis [ 3 - 10 ] . the inflow vessels in the carcinogenic nodules of the liver have been histomorphometrically quantified by counting the number of vessels per unit area : vessel density . during dedifferentiation of the nodules , portovenous densities decrease : the portovenous densities within dysplastic nodules are lower than those within the surrounding liver [ 3 , 9 ] . the arterial densities of low grade dysplastic nodules are almost as high as those of the surrounding livers ; those of high grade dysplastic nodules are also almost as high as those of the livers ; those of progressed hccs are much higher than those of the livers . the proportions of the numbers of unpaired arteries to those of total arteries increase during the dedifferentiation . measuring luminal areas of the vessels is preferable to counting their number , although the former is much more laborious than the latter , because luminal areas reflect angiography more accurately than numbers .
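Before the luminal-area findings below, here is a small hypothetical sketch of the two morphometric read-outs just contrasted (vessel counts per unit area versus summed luminal areas per unit area, plus the fraction of arterial area contributed by unpaired arteries). The data structure is an assumption for illustration, not the software used in the cited studies.

```python
# Hypothetical morphometry helper for annotated vessel profiles on a section.
from dataclasses import dataclass

@dataclass
class Vessel:
    kind: str            # "portal_vein", "hepatic_artery", or "unpaired_artery"
    luminal_area: float  # mm^2 measured on the section

def morphometry(vessels, measured_area_mm2):
    """Return vessel densities (n / mm^2), luminal areas (mm^2 per mm^2 of tissue)
    and the proportion of arterial luminal area contributed by unpaired arteries."""
    kinds = ("portal_vein", "hepatic_artery", "unpaired_artery")
    density = {k: sum(1 for v in vessels if v.kind == k) / measured_area_mm2 for k in kinds}
    area = {k: sum(v.luminal_area for v in vessels if v.kind == k) / measured_area_mm2
            for k in kinds}
    arterial_area = area["hepatic_artery"] + area["unpaired_artery"]
    unpaired_fraction = area["unpaired_artery"] / arterial_area if arterial_area else 0.0
    return density, area, unpaired_fraction

# nodule-versus-surrounding-liver comparison (illustrative values only)
nodule = [Vessel("portal_vein", 0.002), Vessel("unpaired_artery", 0.004)]
liver = [Vessel("portal_vein", 0.010), Vessel("hepatic_artery", 0.003)]
print(morphometry(nodule, 25.0))
print(morphometry(liver, 25.0))
```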
the portovenous luminal areas of low grade dysplastic nodules per unit area are as large as those of the surrounding livers ; those of high grade dysplastic nodules are smaller than those of the livers ; those of progressed hccs are almost null . the arterial luminal areas of low grade dysplastic nodules per unit area are slightly smaller than those of the livers ; those of high grade dysplastic nodules are smaller than those of the livers ; those of progressed hccs are much larger than those of the livers . the proportions of unpaired arteries to total arteries in luminal area are small in low grade dysplastic nodules , moderate in high grade dysplastic nodules , and almost 100% in progressed hccs . to summarize , during dedifferentiation , the portovenous areas of carcinogenic hepatocyte nodules monotonically decrease whereas the arterial areas change bitonically : they first decrease because hepatic arterial areas decrease and then increase because unpaired arterial areas increase ( figure 3 ) . hepatocyte nodules with smaller portovenous areas and arterial areas equal to those of the surrounding livers are at the final stage of high grade dysplastic nodules transitioning to progressed hccs . tumor blood inflows , both portovenous and arterial , can be evaluated with imaging techniques . there are two techniques for evaluation of tumor portovenous inflows : doppler sonography and ct during arterial portography ( ctap ) . with doppler sonography , inflows in some large portovenous branches in tumors may be visualized , but portovenous perfusion cannot be evaluated . with ctap , tumor portovenous perfusion can be evaluated not only qualitatively but also semiquantitatively [ 13 - 15 ] . for example , we can tell that the portovenous perfusion of a nodule is null or reduced . in addition , we can identify the perfusion difference between nodules and the surrounding livers . thus , ctap should be applied in patients with carcinogenic hepatocyte nodules as the principal technique to confirm the portovenous inflows of the nodules . there are six imaging techniques for evaluation of tumor arterial inflows : doppler sonography as noncontrast imaging [ 11 , 16 ] ; contrast sonography with perfluorobutane microbubbles [ 17 - 19 ] ; dynamic contrast - enhanced ct and dynamic contrast - enhanced mr imaging as intravenous contrast - enhanced imaging ; and sonography with carbon dioxide microbubbles and ct during hepatic arteriography ( ctha ) as intra - arterial contrast - enhanced imaging . among these techniques , ctha is the best suited to the understanding of the complex hemodynamic interactions in an organ with a dual blood inflow since it depicts both the lesions and the livers free from portovenous enhancement and also avoids the improper scan delays that occasionally occur in dynamic contrast - enhanced ct or mr imaging . imaging techniques illustrate the relations , mentioned in section 2.1 , between tumor blood inflows and histological grades in carcinogenic hepatocyte nodules [ 14 , 20 , 21 ] . the attenuations of the nodules on ctap decrease during dedifferentiation of the nodules : low grade dysplastic nodules appear isoattenuating ; high grade dysplastic nodules and early hccs appear isoattenuating or slightly hypoattenuating ; progressed hccs appear hypoattenuating ( figures 4 and 5 ) .
the attenuations of the nodules on ctha first decrease and then increase : low grade dysplastic nodules appear isoattenuating or slightly hypoattenuating ; high grade dysplastic nodules and early hccs appear hypoattenuating or isoattenuating ; progressed hccs exclusively appear hyperattenuating ( figures 4 and 5 ) . it is noteworthy that high grade dysplastic nodules and/or early hccs harboring progressed hccs in a nodule - in - nodule fashion are demonstrated on ctap as isoattenuating or slightly hypoattenuating nodules carrying definite internal hypoattenuating foci , but as hypo- or isoattenuating nodules containing hyperattenuating nodules on ctha ( figure 6 ) . tumor arterial inflows via unpaired arteries cannot be differentiated from those via hepatic arteries by radiological imaging . however , when a nodule that is isoattenuating on ctha is depicted as a hypoattenuating nodule on ctap , the nodule can be interpreted as receiving arterial inflows via unpaired arteries and as being at the final stage of a dysplastic nodule just before progressed hcc ( figure 3 ) . although ctap and ctha are rather invasive , they are applied as the principal techniques for diagnosis of carcinogenic hepatocyte nodules because intravenous contrast - enhanced ct is relatively insensitive not only for detection but also for characterization of these nodules [ 22 , 23 ] . it has been reported that seven dysplastic nodules were categorized as hypoattenuating or isoattenuating on arterial phase ct , whereas another dysplastic nodule was hyperattenuating and was therefore incorrectly categorized as progressed hcc ; 31 progressed hccs were hyperattenuating and correctly categorized , but one progressed hcc was incorrectly categorized as a dysplastic nodule . evaluation of arterial inflows of dysplastic nodules by dynamic contrast - enhanced mr imaging is often difficult because dysplastic nodules are generally hyperintense on noncontrast t1 - weighted images [ 24 - 26 ] . however , a progressed hcc harbored by a dysplastic nodule is depicted as a hyperintense nodule within a hypointense or isointense nodule on arterial phase images . the outcomes of the carcinogenic hepatocyte nodules can be predicted when both portovenous and arterial inflows are evaluated by ctap and ctha , respectively . no nodules that were isoattenuating on both ctap and ctha became progressed hccs , whereas approximately 30% of the nodules that were slightly hypoattenuating on both ctap and ctha became progressed hccs after two years . on the other hand , approximately 90% of the nodules carrying hypoattenuating foci on ctap and hyperattenuating foci on ctha became entirely hypervascular progressed hccs after two years ( figure 7 ) . reduced portal blood flow in the nodule on ctap is one of the most important predictors for development of progressed hcc ( figure 8 ) . on histology , tumor blood outflows can be inferred from the connections between the vessels within the tumors ( capillarized intratumoral sinusoids ) and those in the surrounding livers . there are three types of growth patterns in hccs , namely , replacing growth ( figure 9 ) , compressing growth without capsule ( figure 10 ) , and compressing growth with capsule ( figure 11 ) . replacing growth is seen both in hypovascular early hccs and in some well differentiated progressed hccs . hypovascular early hccs with replacing growth carry more hepatic veins , in and around themselves , connected with intratumoral blood sinusoids than do hypervascular progressed hccs .
nodules with compressing growth can be classified into two types : nodules without capsule and those with capsule . the former type carries intranodular capillarized sinusoids connected directly to the surrounding hepatic sinusoids and partly to extranodular portal veins . the latter carries intranodular capillarized sinusoids connected to portal venules within the capsule but not directly to the surrounding hepatic sinusoids . intra- or perinodular hepatic venules decrease in accordance with the grade of malignancy of the nodules , and usually no intratumoral hepatic venules are observed in encapsulated hccs . in poorly differentiated hccs with invasive growth into the surrounding liver , these connections are largely lost . there are five imaging techniques for evaluation of tumor blood outflow : single - level dynamic ctha [ 29 , 30 ] , biphasic ctha , contrast sonography with perfluorobutane microbubbles , intravenous dynamic contrast - enhanced ct , and intravenous dynamic contrast - enhanced mr imaging [ 26 , 33 , 34 ] . single - level dynamic ctha , the first technique to demonstrate tumor blood outflow , is the principal technique , kinetically depicting both inflow and outflow at one time with the highest temporal resolution . images can be obtained with a 30- to 40 - second continuous acquisition without table increment under intrahepatic arterial injection of a small amount of contrast material , for example , 10 ml of 300 - 370 mgi / ml at a rate of 1.0 ml / sec . biphasic ctha is the second technique for evaluation of tumor blood outflow , and it covers the whole liver and multiple lesions . images are obtained with two whole - liver acquisitions at the following delays : 10 seconds after the start of intrahepatic arterial injection of 30 ml of 300 - 370 mgi / ml contrast material at a rate of 1.0 ml / sec for the first scan ( early phase ctha ) , and 60 seconds after the start of injection for the second scan ( late phase ctha ) . we can evaluate the arterial inflows with the first scan and the sequential outflows with the second scan . with the remaining three techniques , contrast sonography with perfluorobutane microbubbles and intravenous dynamic contrast - enhanced ct and mr imaging , we can also estimate the outflows , but the reproducibility and accuracy of the evaluations are much reduced compared with the former two techniques . radiological findings of tumor blood outflows of the carcinogenic hepatocyte nodules depend on the channels of blood drainage : hepatic veins , sinusoids , or portal veins . a nodule shows isoattenuation or slight hypoattenuation on ctap and early phase ctha and does not show corona enhancement on late phase ctha when its main drainage channel is the hepatic vein . a nodule shows hypoattenuation on ctap , hyperattenuation on early phase ctha , and thin corona enhancement ( 2 mm or less in thickness ) on late phase ctha when its main drainage channel is the sinusoid . a nodule shows hypoattenuation on ctap , hyperattenuation on early phase ctha , and thick corona enhancement ( more than 2 mm in thickness ) on late phase ctha when its main drainage channel is the portal vein . nodules with thick corona enhancement on late phase ctha commonly show compressing growth with fibrous capsule ( figure 12 ) . nodules with thin corona enhancement on late phase may show compressing growth without fibrous capsule ( figure 13 ) . nodules without corona enhancement on late phase ctha , which appear slightly hypoattenuating on both ctap and early phase ctha , show replacing growth ( figure 14 ) .
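As a reading aid for the outflow patterns just described (the growth-pattern progression continues below), the following illustrative sketch maps corona enhancement on late-phase CTHA to the presumed main drainage channel and typical growth pattern. It is a paraphrase of the text above, not a validated diagnostic rule, and the function name and thresholds are assumptions.

```python
# Illustrative mapping of corona enhancement to drainage channel and growth pattern.
def drainage_from_corona(corona_thickness_mm, hypervascular_on_early_ctha):
    """corona_thickness_mm: None if no corona enhancement is seen on late-phase CTHA."""
    if corona_thickness_mm is None and not hypervascular_on_early_ctha:
        return ("hepatic vein", "replacing growth (early / hypovascular nodule)")
    if corona_thickness_mm is not None and corona_thickness_mm <= 2.0:
        return ("hepatic sinusoid", "compressing growth without fibrous capsule")
    if corona_thickness_mm is not None and corona_thickness_mm > 2.0:
        return ("portal vein", "compressing growth with fibrous capsule")
    return ("indeterminate", "pattern not covered by this sketch")

print(drainage_from_corona(3.5, True))    # -> portal vein, encapsulated pattern
print(drainage_from_corona(None, False))  # -> hepatic vein, replacing growth
```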
during multistep hepatocarcinogenesis , the nodules show replacing growth with an indistinct margin at first , then expansile growth without a fibrous capsule , and finally expansile growth with a fibrous capsule . thus , radiologic evaluations of tumor blood outflows of carcinogenic hepatocyte nodules , especially corona enhancement on late phase ctha , demonstrate their differentiation as well as their growth patterns . the hemodynamics of carcinogenic hepatocyte nodules depicted with imaging techniques , especially with ctap and ctha , are closely related to the differentiation , growth patterns , and outcomes of the nodules . hyperattenuating nodules with thick corona enhancement on ctha showing hypoattenuation on ctap are encapsulated progressed hccs . hypoattenuating or isoattenuating ( i.e. , invisible ) nodules carrying small hyperattenuating areas on ctha are early hccs or high grade dysplastic nodules containing tiny progressed hccs , 90% of which would become wholly progressed hccs in two years . isoattenuating nodules on ctha showing hypoattenuation on ctap are early hccs or high grade dysplastic nodules at the final stage just before progressed hcc , 90% of which would become progressed hccs in two years . hypoattenuating nodules on both ctha and ctap are high grade dysplastic nodules , 30% of which would become progressed hccs in two years . isoattenuating nodules on ctha and ctap that are detected by other imaging techniques , such as mr imaging , are low grade dysplastic nodules , which will seldom change into progressed hccs . these guides will be helpful in drawing up therapeutic strategies for hepatocyte nodules arising in cirrhosis .
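The guide above can be restated compactly as an illustrative lookup that maps CTAP / CTHA appearances to the suggested category and the reported two-year progression figures. The wording and thresholds are paraphrased from the text, and this is only a reading aid, not a clinical algorithm.

```python
# Illustrative restatement of the CTAP / CTHA guide above.
def categorize_nodule(ctap, ctha, hyper_focus_on_ctha=False, thick_corona=False):
    """ctap / ctha: 'iso', 'slightly_hypo', 'hypo' or 'hyper' attenuation."""
    if ctha == "hyper" and ctap == "hypo":
        return (("encapsulated progressed HCC" if thick_corona else "progressed HCC"), None)
    if hyper_focus_on_ctha:
        return ("early HCC / high grade dysplastic nodule with internal progressed focus",
                "about 90% became hypervascular progressed HCC within two years")
    if ctha == "iso" and ctap == "hypo":
        return ("early HCC or final-stage high grade dysplastic nodule",
                "about 90% became progressed HCC within two years")
    if ctha in ("hypo", "slightly_hypo") and ctap in ("hypo", "slightly_hypo"):
        return ("high grade dysplastic nodule",
                "about 30% became progressed HCC within two years")
    if ctha == "iso" and ctap == "iso":
        return ("low grade dysplastic nodule", "rarely progressed to HCC")
    return ("indeterminate", None)

print(categorize_nodule(ctap="hypo", ctha="iso"))
```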
tumor hemodynamics of carcinogenic hepatocyte nodules , that is , low grade dysplastic nodules , high grade dysplastic nodules , early hepatocellular carcinomas ( hccs ) , and progressed hccs , change during multistep dedifferentiation of the nodules . morphometric analyses of the inflow vessels of these nodules indicate that the portal veins of carcinogenic hepatocyte nodules monotonically decrease whereas the arteries change bitonically , first decreasing and then increasing . findings on imaging techniques depicting these changes in tumor blood inflows , especially intra - arterial contrast - enhanced computed tomography , are closely related not only to the histological differentiation of the nodules but also to their outcomes . histological analyses of the connections between the vessels within the tumors and those in the surrounding livers , together with findings on imaging techniques , indicate that the drainage vessels of hcc change from hepatic veins to hepatic sinusoids and then to portal veins during multistep hepatocarcinogenesis . understanding of tumor hemodynamics through radio - pathological correlations will be helpful in drawing up therapeutic strategies for carcinogenic hepatocyte nodules arising in cirrhosis .
1. Introduction 2. Tumor Blood Inflows and Hepatocarcinogenesis 3. Tumor Blood Outflows and Hepatocarcinogenesis 4. Summary
PMC3056454
alzheimer 's disease ( ad ) is a neurodegenerative disorder that will affect 15 million people in the usa alone in the next ten years [ 1 , 2 ] . the most common form of the disease is the late onset form ( load ) that affects people older than 65 . load is caused by a complex interaction of risk factors including age , genetics , and environmental factors , such as level of education , diet , and physical activity [ 3 - 7 ] . the accumulation of amyloid - β ( aβ ) , a neurotoxic product of amyloid precursor protein ( app ) cleavage , is central to ad pathogenesis [ 8 - 10 ] . this accumulation causes synaptic dysfunction and eventually neuronal death [ 9 , 10 ] . therefore , proteins that affect app metabolism and synaptic function are likely to be important in ad pathogenesis . the neurotrophin receptors ( trka , trkb , and trkc ) are important in neuronal development and synaptic function [ 11 , 12 ] . levels of trka , trkb , and trkc , but not p75 , are downregulated in ad brain samples . trk downregulation has been proposed as a biomarker of ad progression since trk mrna levels correlate with the degree of cognitive impairment . further evidence for a role of trkb in ad is the fact that trkb can modulate app levels and proteolysis . expression of the longest trkb isoform , full - length trkb ( trkb fl ) , can increase app promoter transcription and promotes accumulation of sappα [ 14 - 17 ] . conversely , aβ has been found to reduce trkb fl / bdnf levels and to impair trkb - mediated signaling [ 18 - 21 ] . interestingly , knockdown of another splice variant of trkb , truncated trkb ( trkb t ) , in a mouse model of down syndrome rescued neurons from death . conversely , mice overexpressing trkb t display synaptic dysfunction and long - term potentiation defects . the gene encoding trkb , ntrk2 , is located on chromosome 9 , specifically 9q22 . this region has been genetically linked to ad [ 24 , 25 ] . despite the experimental evidence functionally linking trkb signaling to app metabolism and synaptic function , case - control and genomewide association studies of ntrk2 single nucleotide polymorphisms ( snps ) found no significant association with ad [ 25 - 30 ] . one family - based study did observe genetic association of ntrk2 haplotypes with ad . three major splice variants of trkb are expressed in neurons : trkb fl , trkb shc , and trkb t. we hypothesized that these different trkb isoforms differentially affect app metabolism and could play a role in the pathogenesis of ad . the three trkb splicing isoforms we investigated share an extracellular bdnf - binding domain and differ in their cytoplasmic domains ( figure 1 ) . two splice variants encode full - length receptors , trkb full - length ( fl ) , that contain a tyrosine kinase domain , an shc - binding domain , and a plc - γ - binding domain in the intracellular portion [ 32 , 33 ] . two isoforms encode shorter receptors , trkb shc , that contain only an shc - binding domain , and the remaining isoform is a truncated receptor , trkb t , that does not have any known intracellular functional domain ( figure 1 ) . we used a previously described cell - based functional screen to identify putative app metabolism regulators . we found that ntrk2 knockdown altered both aicd - mediated luciferase activity and app full - length levels . to characterize the roles of trkb fl and the truncated isoforms , we knocked down and overexpressed the isoforms in an sh - sy5y neuroblastoma cell line overexpressing app as a fusion protein with the yeast transcription factor gal4 .
we then measured app fl levels and proteolytic products using western blots and luciferase assays . specifically , overexpression of trkb fl increases aicd - gal4 - mediated luciferase activity , while overexpression of trkb t does not alter the luciferase activity and trkb shc decreases it compared to control . we determined that the tyrosine kinase and plc - γ functional domains contribute to the observed trkb fl - mediated effects . we also found that the shc - binding site contributed to the observed trkb shc - mediated effects . bdnf stimulation of the exogenously expressed trkb receptors amplified the effects on app metabolism , and cotransfection of the truncated trkb isoforms with trkb fl altered the effects on app metabolism . four shrna - containing plasmids specific for ntrk2 were obtained from the psm2 retroviral library of the drexel rnai resource center , purchased from open biosystems . the construct id numbers are ntrk2.1 : 1920 ; ntrk2.2 : 2295 ; ntrk2.3 : 29734 ; ntrk2.4 : 30795 . we also used app - targeting ( id 39147 ) and luciferase - targeting ( rhs1705 ) shrnas as positive controls and a scrambled shrna sequence ( nonsilencing , ns , rhs1707 ) as a negative control . the trkb full - length and truncated gfp fusion constructs and the gfp - f control overexpression plasmid were kindly donated by dr . eero castren ( university of helsinki , finland ) and were previously described [ 36 , 37 ] . site - directed mutagenesis ( stratagene , quikchange mutagenesis kit ) was utilized to generate point mutants of the trkb full - length receptor functional domains . mutagenesis was carried out according to the manufacturer 's instructions and the primers employed are reported in table 1 ( the bolded sequences represent the mutations / insertion ) . therefore , trkb fl k571 m indicates the tyrosine kinase - dead receptor , since it is mutated on the atp - binding site ; trkb fl y515f indicates the receptor mutated on the shc - binding site ; trkb fl y816f indicates the receptor mutated on the plc - γ - binding site . note that in some literature the trkb mutants are referred to with the numbering of the amino acid sequence of trka , the ngf receptor , which has functional sites in common with trkb , and are therefore referred to as k560 m , y490f , and y785f , respectively [ 32 , 33 ] . trkb shc indicates the other human truncated isoform ( isoforms d and e , ncbi gene nm_0010180642 and nm_0010180662 ) . trkb shc was obtained by insertion of exon 19 followed by a stop codon after the shc - binding site in the trkb fl construct . after obtaining the trkb shc isoform by insertion , the shc - binding site was mutated on that isoform using the same primer sequence employed for the trkb fl mutant of the shc - binding site ( trkb y515f ) . one clone per construct was transformed into e. coli ( dh5α competent cells , invitrogen ) . transformed bacteria were selected on 100 μg / ml ampicillin lb - agar plates and liquid cultures were grown overnight at 37°c . bacterial cultures were miniprepped ( miniprep kit , qiagen ) and used for transfection after dna quantification . sh - sy5y cells stably transfected with uas - luciferase and app - gal4 , described before , were maintained in dmem ( gibco ) supplemented with 10% fbs , penicillin - streptomycin , and 200 μg / ml g418 ( gibco ) . to assess the effects of trkb knockdown or overexpression on aicd - gal4 - mediated luciferase activity , we used the following transfection protocol , previously described .
briefly , one day before transfection , cells were plated in 96 - well plates at approximately 40 - 50% confluency . on the day of transfection , media was removed from the cells and replaced with transfection media : 100 μl of serum - free dmem containing 2 μg / well arrest - in ( open biosystems ) and 0.2 μg / well plasmid dna . cells were also transfected with shrna targeting app , shrna targeting luciferase , and a control shrna that contains a scrambled sequence that does not target any human gene . in addition , a mock transfection containing only arrest - in was performed to control for selection effectiveness . six replicate wells per shrna construct and mock control transfection were set up for each independent experiment . the transfection media was left on the cells for 8 hours and then replaced with complete media . 48 hours after transfection , transfected cells were selected with 4 μg / ml puromycin ( sigma ) in 10% fbs dmem with 200 μg / ml g418 . the media was changed every 48 hours and cell death was monitored and compared to the mock - transfected control . once all the cells in the mock control wells were dead , surviving cells in the shrna - transfected wells were split and transferred to another 96 - well plate and a 24 - well plate . cell lysates were collected from 60 - 80% confluent 96 - well plates 11 - 13 days after transfection in 100 μl glo lysis buffer per well ( promega ) . lysates were used immediately after collection or frozen prior to performing steady glo luciferase assays ( promega ) . shrna - mediated knockdown effectiveness was monitored by comparing the luciferase signal of the nonsilencing control shrna with that of the app - targeting shrna . after assessing successful knockdown , luciferase data for the experimental shrnas targeting ntrk2 were collected and analyzed . in parallel , 24 - well and 12 - well plates were seeded with the same cells that had been assayed for luciferase signal and were collected for western blot analysis . the same transfection procedure was followed for the overexpression experiments , but lysates were collected 48 hours after transfection and transfection efficiency was monitored by fluorescence microscopy ; no antibiotic selection was performed in this case . conditioned media was collected from the cells ( 48 hours after transfection ) in eppendorf tubes and centrifuged at 14,000 rpm for 10 minutes at 4°c ( beckman coulter , microfuge 22r ) . the resulting supernatant was collected and 142 μl were mixed with 33 μl of 4x reducing loading buffer ( invitrogen ) supplemented with 0.4% β - mercaptoethanol ( sigma ) . whole cell lysates were collected ( 48 hours after transfection ) by lysing the cells with ice - cold radioimmunoprecipitation ( ripa ) buffer ( 150 mm nacl , 1% np40 , 0.5% doc , 1% sds , 50 mm tris , ph 8.0 ) supplemented with halt cocktail of protease and phosphatase inhibitors ( thermoscientific ) . cell lysates were sonicated in an ice - cold water bath sonicator for 6 minutes and then centrifuged for 20 minutes at 4°c at 14,000 rpm . the resulting supernatants were collected and protein concentration was measured with a bca protein concentration kit ( pierce ) according to the manufacturer 's instructions . western blot samples were prepared at a final concentration of 1 - 2 μg / μl in 4x reducing loading buffer ( invitrogen ) and heated at 70°c for 10 minutes . 15 - 25 μg of total protein / well were separated on 4 - 12% tris - glycine midi gels ( invitrogen ) in mes - sds running buffer ( invitrogen ) and run at 190 v for 45 minutes .
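As a small illustration of the sample-preparation arithmetic just described (diluting lysates to a target concentration in 4x loading buffer and loading a fixed protein mass per well; the transfer and detection steps continue below), here is a hypothetical helper. The default volumes and the function itself are assumptions for illustration, not the authors' worksheet.

```python
# Hypothetical western blot sample-preparation calculator.
def western_sample(lysate_conc_ug_per_ul, target_conc=1.5, final_volume_ul=40.0, load_ug=20.0):
    """Return pipetting volumes for one sample and the load volume per well."""
    if lysate_conc_ug_per_ul < target_conc:
        raise ValueError("lysate too dilute for the requested target concentration")
    buffer_4x = final_volume_ul / 4.0                 # 1 part 4x buffer per final volume
    lysate = target_conc * final_volume_ul / lysate_conc_ug_per_ul
    water = final_volume_ul - buffer_4x - lysate
    if water < 0:
        raise ValueError("volumes incompatible; increase the final volume")
    return {"lysate_ul": round(lysate, 1), "water_ul": round(water, 1),
            "4x_buffer_ul": round(buffer_4x, 1),
            "load_ul_per_well": round(load_ug / target_conc, 1)}

print(western_sample(3.2))   # e.g. a lysate measured at 3.2 ug/ul by BCA
```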
the separated proteins were transferred to pvdf fl membranes ( millipore ) in a semi - dry transfer apparatus ( aa hoefer te77x ) for 3 hours at 125 ma per gel . membranes were blocked for one hour at room temperature using licor blocking buffer and then probed overnight with primary antibodies diluted in licor blocking buffer at 4°c or 25°c . membranes were then washed 4 times for 5 minutes with 0.1% tween ( sigma ) in pbs . after washing , membranes were incubated in the dark with the appropriate irdye secondary antibody ( licor ) diluted in licor blocking buffer for one hour . membranes were scanned on an odyssey infrared scanner ( licor ) at appropriate intensities and images were acquired . band intensities were quantified with the provided built - in software ( licor ) and were always normalized to the actin loading control . when conditioned media was analyzed , the signals were normalized to the protein concentration of the corresponding lysates . detection of trkb - gfp - tagged constructs utilized a mouse anti - gfp antibody ( 1 : 1000 , living colors , clontech ) ; detection of full - length app and c - terminal fragments utilized the a8717 rabbit antibody ( 1 : 2000 , rb , sigma ) ; detection of sapp utilized the 22c11 mouse antibody ( 1 : 1000 , millipore ) ; detection of sappα utilized the 6e10 mouse antibody ( 1 : 1000 , covance ) ; detection of actin utilized the a5441 mouse antibody ( 1 : 15,000 , sigma ) . the secondary antibodies , irdye700 anti - mouse ( 1 : 15,000 ) and irdye800 anti - rabbit ( 1 : 15,000 ) , were obtained from licor . we applied our functional screening method to all the genes in the linkage region on chromosome 9 that displays a high likelihood of disease score for ad . this screening is conducted in sh - sy5y cells stably transfected with a luciferase reporter driven by the yeast uas promoter and app fused to gal4 . when app is cleaved by the secretases , the aicd - gal4 domain is released and can activate the transcription of the luciferase reporter . since changes in aicd - mediated luciferase activity can occur through a variety of mechanisms affecting app , this is an effective and general way of identifying regulators of app metabolism . we targeted ntrk2 with 4 different shrna constructs ( see supplementary figures 1 and 2 in the supplementary material available online at doi : 10.4061/2011/729382 ) . three shrnas targeted all the trkb isoforms ( ntrk2.1 - 3 ) and one ( ntrk2.4 ) targeted all the isoforms except trkb t. we also transfected a nonsilencing scrambled shrna ( ns ) that does not target any human gene as a negative control and an shrna targeting app as a positive control . of the four transfected constructs , ntrk2.1 - 3 decreased aicd - mediated luciferase activity to the same extent as the app - targeting shrna when compared to the ns shrna ( figure 2(a ) ) . the fourth construct , ntrk2.4 , targeting all trkb isoforms except trkb t , consistently caused cell death ( data not shown ) . this result suggests that ntrk2 can affect app metabolism and that the isoforms have different roles , since downregulation of all the isoforms except trkb t was lethal . therefore , we investigated the effects of the individual isoforms in the same experimental model . we transiently transfected individual trkb isoform overexpression constructs into the cells and measured aicd - mediated luciferase activity .
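The following is a minimal sketch of how the luciferase read-out of such a knockdown screen might be normalised and summarised: signals are expressed relative to the non-silencing (NS) control, compared to it with a Welch t-test, and flagged when the reduction is at least as strong as that of the APP-targeting positive control. The values, thresholds and function name are placeholders, not the study's analysis code; the results of the isoform transfections follow below.

```python
# Hypothetical normalisation and hit-calling for a luciferase knockdown screen.
import numpy as np
from scipy import stats

def summarise_screen(raw, control="NS", positive="APP_shRNA"):
    """raw: dict {construct: array of replicate-well luciferase counts}."""
    ns_mean = np.mean(raw[control])
    rel = {name: np.asarray(vals, dtype=float) / ns_mean for name, vals in raw.items()}
    pos_effect = np.mean(rel[positive])
    hits = {}
    for name, vals in rel.items():
        if name in (control, positive):
            continue
        _, p = stats.ttest_ind(vals, rel[control], equal_var=False)  # Welch vs NS control
        hits[name] = {"mean_relative_signal": float(np.mean(vals)),
                      "p_vs_NS": float(p),
                      "as_strong_as_positive": bool(np.mean(vals) <= pos_effect)}
    return hits

wells = {"NS": [1.00, 0.95, 1.08, 1.02, 0.97, 0.99],
         "APP_shRNA": [0.35, 0.40, 0.33, 0.38, 0.36, 0.41],
         "NTRK2.1": [0.42, 0.37, 0.45, 0.40, 0.39, 0.44]}
print(summarise_screen(wells))
```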
we found that there was no difference in aicd - mediated luciferase activity between trkb t and the gfp control , while trkb fl significantly increased luciferase activity ( p = .01 ) and trkb shc significantly decreased it ( p = .01 ) ( figure 2(b ) ) . moreover , we show that there is a difference between the isoforms trkb shc and trkb t , even though both isoforms lack the tyrosine kinase domain . trkb t did not alter aicd - mediated luciferase activity compared to the gfp - f control , while trkb shc decreased it and trkb fl increased it . we hypothesized that the intracellular domains of trkb shc and trkb fl are responsible for the effects observed . to determine which domain was responsible for this effect , we generated a mutant of the trkb shc isoform that cannot bind shc ( y515f ) . we transfected this mutant , and the other wild - type trkb isoforms , into sh - sy5y - app - gal4 cells and measured aicd - mediated luciferase activity . we observed that trkb shc y515f ( the shc - binding site mutant ) did not significantly alter luciferase activity compared to trkb t but significantly increased it compared to the wild - type trkb shc isoform ( p < .001 ) ( figure 3(a ) ) . therefore , disrupting the shc - binding site on the trkb shc isoform impairs its ability to decrease aicd - mediated luciferase activity . we mutated the shc - binding site ( y515f ) to generate a mutant that cannot bind shc . we then mutated the atp - binding site ( k571 m ) to generate a tyrosine kinase - inactive trkb fl receptor [ 32 , 33 ] . we also generated a double mutant that is both tyrosine kinase - inactive and unable to bind shc ( trkb y515f / k571 m ) . we then transfected these trkb fl mutant constructs into sh - sy5y - app - gal4 cells . we measured the aicd - mediated luciferase activity and compared it to trkb fl wild - type ( figure 3(b ) ) . the trkb y515f mutant ( preventing shc binding ) did not significantly alter aicd - mediated luciferase activity compared to trkb fl ( figure 3(b ) ) . trkb fl k571 m ( tyrosine kinase - inactive ) significantly decreased luciferase activity compared to trkb fl ( p = .0006 ) . trkb fl y816f ( preventing plc - γ binding ) also significantly decreased luciferase activity compared to trkb fl ( p = .0002 ) . the double mutant trkb y515f / k571 m ( preventing shc binding and tyrosine kinase - inactive ) significantly decreased luciferase activity compared to trkb fl ( p = .002 ) but did not differ from the tyrosine kinase - inactive trkb k571 m ( figure 2(a ) ) . in summary , trkb fl overexpression increases aicd - gal4 - mediated luciferase activity two - fold compared to the trkb t control ( figure 2(b ) ) . the tyrosine kinase - inactive mutant receptor trkb k571 m , the plc - γ - binding site mutant , and the trkb shc isoform mutated on the shc - binding site also caused a 60 - 70% decrease in aicd - mediated luciferase activity compared to trkb fl ( figures 3(a ) and 3(b ) ) . the wild - type trkb shc isoform caused a decrease in aicd - mediated luciferase activity of about 90% ( figure 2(b ) ) . the effects we observe on aicd - mediated luciferase activity can occur through many different mechanisms : decreased app transcription , increased app degradation , decreased app cleavage , destabilization of aicd , and trafficking that affects app localization . the most immediate way of decreasing aicd levels is to decrease app levels . we therefore tested whether ntrk2 knockdown altered app levels .
we transfected the ntrk2 - targeting shrna , a ctrl shrna , and an app - targeting shrna as a positive control . as an additional control we used a shrna targeting the luciferase gene : this construct accounts for overexpression of shrnas that have to be processed by the endogenous rnai machinery . we found that knockdown of all the trkb splice variants caused a significant decrease in app fl levels ( p < .05 ) ( figure 4(b ) ) , and we concluded that decreased app levels might be at least partially responsible for the observed reduction in luciferase activity . based on the previous knockdown results , we then hypothesized that overexpression of trkb fl causes increased aicd - mediated luciferase activity by increasing app fl levels . we transfected the trkb isoforms in the cells , performed western blot analysis and quantified app fl levels in cell lysates . overexpression of trkb fl significantly increased app fl levels compared to trkb t ( p = .03 ) and trkb shc ( p = .008 ) ( figure 5(a ) ) . there was no difference in app levels between trkb t- and trkb shc - transfected cells ( figure 5(a ) ) . we then verified that aicd - gal4 levels in trkb fl - transfected cells correlated with the observed increase in luciferase activity . aicd - gal4 is the intracellular domain of app that is generated by γ-secretase cleavage , translocates to the nucleus , and activates transcription . we found that , as expected , trkb fl displayed increased aicd - gal4 levels compared to trkb t , but this difference was not statistically significant . compared to trkb t , trkb shc overexpression resulted in a decrease in aicd - gal4 levels , as expected , but this difference was not statistically significant ( figure 5(b ) ) . interestingly , we consistently observed lower trkb fl levels compared to trkb t and trkb shc in our western blot analysis ( figure 5(a ) ) . to assess changes in app proteolysis we measured app c - terminal fragments ( ctfs ) and sapp levels upon trkb transfection . c83 and c99 are generated by the cleavage of app by α-secretase and β-secretase , respectively . in our cell line we measure c83-gal4 and c99-gal4 levels , since the overexpressed app is a fusion protein with gal4 . these fragments are the precursors of aicd , which is released in the cytoplasm by γ-secretase cleavage [ 40 , 41 ] . while c83 and c99 are membrane - bound fragments of app , the soluble n - terminal fragment of app , sapp , generated by α/β-secretase cleavage , is released into the extracellular environment . in sh - sy5y cells , therefore , the majority of the luciferase signal observed is due to aicd - gal4 generated from γ-secretase cleavage of c83-gal4 . if the aicd - gal4 levels are increased , as measured by luciferase and western blot , then the levels of its precursor c83-gal4 should also be increased . we then tested the hypothesis that c83-gal4 and sapp levels are increased by trkb fl overexpression and decreased by trkb shc . to aid detection of c83-gal4 we treated cells with the γ-secretase inhibitor l-685,485 . we did not detect a difference in c83-gal4 levels among the cells transfected with the different trkb isoforms ( figure 5(c ) ) . surprisingly , trkb fl decreased sapp levels compared to trkb t ( p = .01 ) . trkb shc showed a nonsignificant difference in sapp levels compared to trkb t ( figure 5(d ) ) . all three trkb isoforms studied here are capable of binding bdnf . moreover , it has been previously shown that trkb fl bdnf - mediated intracellular signaling can alter app metabolism [ 14 - 17 ] .
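to keep the fragment nomenclature used above straight , the toy sketch below simply encodes which secretase produces which app - gal4 fragment in this reporter system . it is didactic bookkeeping rather than a model of the data , and the greek - letter assignments follow the standard app processing scheme rather than anything specific to this study .

```python
# didactic bookkeeping of the fragments named above for the APP-GAL4 fusion;
# not a model of the data, just the standard cleavage scheme made explicit.
def alpha_cleave():
    # alpha-secretase cleavage of APP-GAL4 yields membrane-bound C83-GAL4 plus soluble sAPP
    return "C83-GAL4", "sAPPalpha"

def beta_cleave():
    # beta-secretase cleavage of APP-GAL4 yields membrane-bound C99-GAL4 plus soluble sAPP
    return "C99-GAL4", "sAPPbeta"

def gamma_cleave(ctf):
    # gamma-secretase releases AICD-GAL4 from either C-terminal fragment;
    # AICD-GAL4 then drives the UAS-luciferase reporter measured in this system
    assert ctf in ("C83-GAL4", "C99-GAL4")
    return "AICD-GAL4"

ctf, sapp = alpha_cleave()
print(gamma_cleave(ctf), "->", "uas-luciferase transcription")
```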
we hypothesized that application of exogenous bdnf would stimulate the trkb fl - mediated effects on app fl and proteolytic products levels . we then tested this hypothesis applying bdnf to cells transfected with trkb isoforms and measured the levels of app fl by western blot . we found that short term ( 10 minutes ) bdnf application increases app fl levels in cells transfected with the trkb t or trkb shc isoforms and to a greater degree in cells that had been transfected with trkb fl ( figure 6 ) . twenty - four hour bdnf treatment of trkb fl transfected cells did not further increase app fl levels compared to short - term bdnf treatment . it has been previously shown that trkb t has a dominant negative effect on trkb fl . we hypothesized that cotransfection of trkb t with trkb fl would eliminate the trkb fl effects on app metabolism observed when we transfect trkb fl alone . moreover we hypothesized that cotransfection of the trkb shc with trkb fl would also have dominant negative effect on trkb fl . finally we hypothesized that cotransfection of trkb fl with trkb y515f or trkb y816f would not significantly alter the effects seen on app since they seem to be primarily mediated by the tyrosine kinase domain and not by the shc - binding domain . for this reason we also hypothesized that cotransfection of trkb fl with trkb k571 m ( trkb fl / k571 m ) would have the same effect as the cotransfection of trkb fl and trkb t ( trkb fl / t ) . consistent with our hypothesis , trkb fl / t cotransfection did not increase app fl levels , nor did cotransfection of trkb fl / k571 m , the tyrosine kinase inactive mutant ( figure 7 ) . also , as expected , there was very little difference between the app fl levels in cells transfected with trkb fl / y515f , trkb fl / y816f , and trkb fl / fl . surprisingly , cotransfection of trkb fl / shc increased app fl levels compared to trkb fl / t cotransfection but not compared to trkb fl / fl ( figure 7 ) . we also hypothesized that bdnf treatment of the cotransfected cells would affect the transfected isoforms mediated effects on app . surprisingly bdnf treatment did not significantly alter these effects of the cotransfected trkb receptors . in summary , both truncated isoforms were able to decrease app fl levels compared to trkb fl / fl transfection ; trkb t to a greater extent than trkb shc . the tyrosine kinase inactive receptor decreased app fl levels to the same extent of trkb fl / t cotransfection while trkb fl / y515f and trkb fl / y816f cotransfection did not alter app fl levels compared to trkb fl / fl . we investigated the role of the trkb isoforms on app metabolism in sh - sy5y cells overexpressing an app - gal4 fusion protein that can transactivate a luciferase reporter gene . this system monitors changes in app metabolism that are reflected in altered aicd - mediated transcription of the luciferase gene . we found that knockdown of all trkb isoforms in sh - sy5y - app - gal4 cells caused a decrease in aicd - mediated luciferase activity . this decrease is probably due to a decrease in app levels observed in cells with ntrk2 knock - down . we hypothesize that decreased app levels in this system are mainly due to increased app degradation caused by altered trafficking in absence of trkb . transcriptional downregulation of app might be partially responsible for the decreased signal observed in the western blot but that is only possible for the endogenous app . 
this is because the endogenous app gene is under physiologic transcriptional regulation , while the overexpressed app - gal4 is under cmv promoter regulation . concomitant knockdown of the trkb fl and shc isoforms led to cell death , and this is consistent with the finding that trkb t is one of the causes of neuronal death in a mouse model of trisomy 21 . to discriminate between the effects of the different isoforms , we overexpressed one isoform at a time and measured the resulting aicd - mediated luciferase activity . as a control we employed a gfp expression vector ( gfp - f ) that includes a farnesylation sequence targeting gfp to the cell membrane ; this is a better control for a membrane - bound receptor than a cytoplasmic gfp . trkb fl increased luciferase activity while no difference was observed between trkb t and gfp - f control transfected cells . we hypothesize that the decrease in aicd - mediated luciferase activity induced by trkb shc might be mediated by binding of shc adaptor proteins . binding of adaptor proteins to trkb , and possibly to app , might decrease the endocytosis of app , decreasing its β-secretase cleavage . the luciferase assay described here has been found to be particularly sensitive in detecting decreased secretase processing , and that can be the cause of the decrease in luciferase activity that we observe , at least with cotransfection of the trkb shc isoform . our data demonstrate differential effects of the trkb isoforms on aicd - mediated transcription , showing that trkb shc behaves differently from both trkb fl and trkb t. it has been previously demonstrated that bdnf application does not improve the cognitive function in a trisomy 21 mouse model because trkb t is upregulated [ 43 , 46 ] . therefore , a better understanding of the individual trkb isoforms and their signaling roles will improve the therapeutic potential of bdnf or bdnf agonists . experimentally , we found that the detected protein levels of trkb fl were much lower than trkb t and trkb shc levels . we can exclude effects due to plasmid copy number in the cells since we used equimolar amounts of plasmid dna that account for differences in plasmid size . we can also exclude differences in transcription levels due to plasmid promoters since the trkb shc vector was generated by mutagenesis of the trkb fl vector . the difference in expression levels of the trkb isoforms is highly reproducible , suggesting that there might be a tight regulation of trkb fl expression levels . trkb fl is stored in intracellular vesicles that rapidly fuse to the cell membrane upon bdnf stimulation of the cells . this causes a fast bdnf - mediated phosphorylation of the receptor and initiates intracellular signaling . after this spike of activity , trkb / bdnf complexes are thought to be internalized and downregulated . high trkb fl expression levels increase malignancy in neuroblastomas , reinforcing the idea that regulatory mechanisms of trkb expression and signaling are necessary to maintain homeostasis [ 49 , 50 ] . trkb fl expression is also decreased by chronic bdnf stimulation of h4 neuroblastoma cells while trkb t levels remain constant . we therefore hypothesize that in our model system , trkb fl levels are controlled by mechanisms that can not be overcome by trkb fl overexpression and that bdnf expressed by the cell line might be one of the causes of this downregulation .
to determine which trkb functional domain and signaling pathway was mediating the trkb effects , we overexpressed the mutant trkb isoforms and monitored aicd - mediated luciferase activity . the observed trkb fl - mediated increase in luciferase activity was suppressed by either inactivating the tyrosine kinase activity ( k571 m ) or mutating the plc-γ - binding site ( y816f ) . we hypothesize that the plc-γ effect is due to lack of plc-γ activation , which produces dag ( diacylglycerol ) , an activator of pkc , a protein that mediates adam10 activation . the fact that there is a difference between the trkb k571 m mutant and the trkb y816f plc-γ - binding site mutant suggests both of these functional domains and their associated pathways can regulate app metabolism . the shc - binding site on the trkb fl receptor did not seem to be involved in mediating increased aicd - mediated luciferase activity , since the trkb y515f mutant did not differ from the trkb fl isoform in increasing aicd - mediated luciferase activity . also the aicd - mediated luciferase signal in cells transfected with the double mutant trkb k571m / y515f did not differ from the cells transfected with the trkb k571 m mutant , suggesting that there is no additive effect in eliminating both signaling pathways . in fact , the binding of shc might occur more efficiently when the site is phosphorylated , so that when phosphorylation is prevented the small change in luciferase signal is not detectable in our experimental system . supporting this hypothesis is a nonsignificant decrease of aicd - mediated activity caused by the trkb y515f mutant compared to trkb fl . we observe a significant effect when the trkb shc isoform is mutated to eliminate shc - binding . this mutant induces the same luciferase activity signal as the trkb t. this finding suggests that binding of shc to the trkb shc isoform might mediate signaling pathways independently of phosphorylation . importantly , we demonstrate that there is a difference in signal transduction between the two truncated trkb isoforms and that they act on app - mediated transcription . moreover , we identify the shc - binding domain as responsible for the difference in signaling mechanism between trkb t and shc . mutation of the binding site for shc adaptor proteins on the truncated trkb shc isoform increases the aicd - mediated luciferase signal , while the same shc site on trkb fl is not responsible for the observed changes in luciferase activity . this contrasting result suggests that interaction between the same proteins and specific trkb isoforms mediates different signaling pathways . the cell line used , sh - sy5y , expresses basal levels of trkb receptors and bdnf ; the endogenously expressed bdnf can promote dimerization and activation of the overexpressed receptors . also , bdnf - independent activation of trkb fl receptors has been previously demonstrated , and we hypothesize that both bdnf - independent and - dependent activation coexist in our experimental system . endogenous trkb receptors might also be upregulated or downregulated in response to exogenous trkb expression . to assess the effect of bdnf - dependent activation we added exogenous bdnf to the transfected cells . bdnf is hypothesized to activate the receptors by mediating their dimerization . in our experimental system bdnf treatment did not significantly alter the effects of the trkb isoform overexpression on app fl levels .
the close proximity of the overexpressed receptors on the membrane probably allows dimerization and activation of the receptors independently from bdnf , so that even when bdnf is added to the system any additional effect on trkb activation is not detectable . we mentioned above that sh - sy5y cells express basal levels of the trkb receptors . to investigate the role of trkb isoform interaction on trkb fl - mediated signaling , we coexpressed exogenous trkb fl with truncated isoforms and mutated variants . cotransfection of trkb fl with the truncated t and shc isoforms or the tyrosine kinase inactive mutant abrogated the increase in app fl levels induced by trkb fl . interestingly , trkb fl / shc cotransfection had higher app fl levels than trkb fl / t cotransfection . this points to a possible difference between the two trkb truncated isoforms in the regulation of the trkb fl catalytic receptor . the fact that in the cotransfection experiments trkb fl / shc showed increased app fl levels compared to trkb fl / t also suggests that shc - binding to this isoform might occur more efficiently when trkb fl and trkb shc interact , maybe causing phosphorylation of the shc - binding site . cotransfection of trkb fl / y515f had similar effects on app fl to trkb fl / fl but was less effective in inducing an increase in app fl levels . trkb fl / y816f cotransfection was indistinguishable from trkb fl single transfection , suggesting that plc-γ signaling is not involved in determining app fl levels . bdnf treatment of the cotransfected cells seemed to accentuate the effect of trkb fl on app fl levels . for example , it increased app fl in cells cotransfected with trkb fl / t but not in cells cotransfected with trkb fl / k571 m . on the contrary , trkb fl / y515f cotransfection seemed to cause lower app fl levels when bdnf was applied . it is intriguing to think that when all trkb isoforms are expressed in the cells , as should be the case in our model , bdnf promotes homodimerization versus heterodimerization . the issue of preferential homo- versus heterointeraction of trkb isoforms has not been investigated so far and it would be important to address . this work demonstrates that truncated trkb isoforms affect app processing and transcriptional signaling differently than full - length trkb . not only do the truncated isoforms have different effects when transfected alone , they were also able to modify the trkb fl effects when cotransfected with it . these findings point to the possible roles of the trkb isoforms in the pathogenesis of ad . in fact all the isoforms are present on neurons and other cell types of the cns . the proportion of trkb fl to trkb t and trkb shc is then important in determining the effect on trkb signal transduction and app metabolism . since all the isoforms bind bdnf through the extracellular domain , a therapeutic approach that uses bdnf biomimetic drugs could be complicated by the truncated isoforms : in fact , expression of truncated isoforms could scavenge the drugs , decrease the benefit of engaging trkb fl - triggered pathways , and also inhibit the trkb fl effects . depending on the relative amounts of the trkb receptors on the cells , bdnf - mimetic drugs could cause an overall worsening of the conditions by , for example , increasing the inflammatory response . it will be important in the future to dissect the contributions of the trkb isoforms to bdnf - dependent and - independent signaling pathways in the context of ad to better understand which isoforms and pathways are beneficial and which ones are detrimental .
we report that ntrk2 , the gene encoding the trkb receptor , can regulate app metabolism , specifically aicd levels . using the human neuroblastoma cell line sh - sy5y , we characterized the effect of three trkb isoforms ( fl , shc , t ) on app metabolism by knockdown and overexpression . we found that trkb fl increases aicd - mediated transcription and app levels while it decreases sapp levels . these effects were mainly mediated by the tyrosine kinase activity of the receptor and partially by the plc-γ - and shc - binding sites . the trkb t truncated isoform did not have significant effects on app metabolism when transfected by itself , while trkb shc decreased aicd - mediated transcription . trkb t abolished the trkb fl effects on app metabolism when cotransfected with it , while trkb shc cotransfected with trkb fl still showed increased app levels . in conclusion , we demonstrated that trkb isoforms have differential effects on app metabolism .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion 5. Conclusions
PMC4877561
external ventricular drainage ( evd ) is an important neuro - surgical procedure performing under emergent conditions . the ideal target for ventricular placement is usually within the ipsilateral frontal horn just anterior to the monro foramen2 ) . even though the freehand technique using superficial anatomical landmarks is traditional and generally accepted method for evd , the accuracy rate of evd catheter placement has been reported about 39.9% to 84%3457 ) . evd tips locating in nonventricular space have been reported to be 8.2% to 22.4%457 ) . in 1985 , the ghajar guide was introduced for ventricular catheter placement1 ) , but it was unfamiliar with most neurosurgeon . currently , navigation guided evd may increase the accuracy of placement of evd , but it requires a lot of time and general anesthesia . thus , it is not suitable and reasonable in consideration of an emergent procedure and cost effectiveness of evd . this study was approved by the institutional review board for the medical instrument and its registration number is 2009 - 08 . it was designed to direct a ventricular catheter along a course pointing the inner canthus and the tragus . it was composed of three portions , main body , rectangular pillar and an arm pointing the tragus . main body shaped a large letter t which was composed of horizontal and vertical portions . vertical portion has a role to direct a ventricular catheter toward the right and left inner canthi , respectively , and horizontal portion has a shallow longitudinal opening to connect the rectangular pillar and move it back and forth . rectangular pillar is 228 cm in size and has a longitudinal central hole of 4 mm in diameter to insert an evd catheter into the frontal horn of the lateral ventricle . vertical portion pointing the inner canthus was made to be placed coaxially with the central hole of the pillar . on the lateral surface of this pillar , there is a longitudinal groove running parallel with the central hole of the pillar to insert the arm pointing the tragus ( fig . the rectangular pillar was connected with the horizontal portion of main body through its opening using a screw . the arm pointing the tragus was inserted into the longitudinal groove of the pillar . to position the tip of verticalportion toward the inner canthus , the main body was slightly moved from side to side centering the burr hole . the direction of the arm pointing the tragus was also controlled by back and forth movement and turn of the pillar attached to the main body . evd catheter through the central hole of the pillar was finally inserted into the ventricle in depth of 5.5 cm from the dura mater ( fig . if the thick blood had been pushed out of the ventricle through the evd catheter due to the increased intracranial pressure in a case of intraventricular hemorrhage ( ivh ) , ivh was aspirated using a syringe of 10 cc . between april 2012 and december 2014 , 57 emergency evds were performed in 52 patients using this device in the operating room . admission diagnoses in 52 patients were aneurysmal subarachnoid hemorrhage in 21 , intracerebral hemorrhage ( ich ) associated with ivh in 21 ( thalamus 15 , caudate nucleus 5 , pons 1 ) , pure ivh in 2 , moyamoya disease in 2 , cerebellar infarction in 2 , meningitis in 1 , ventriculitis in 1 , ruptured arteriovenous malformation in 2 . bilateral evds were performed in 5 patients who had thalamic hemorrhage and ruptured avm , aneurysmal subarachnoid hemorrhage , respectively . 
all ventricular punctures were accomplished at one time . the catheter tip was located in the frontal horn in 52 evds , in the 3rd ventricle in 2 , and in the wall of the frontal horn of the lateral ventricle in 3 . small hemorrhage along the catheter tract occurred in 1 evd . even though 3 evd catheters were located in the wall of the frontal horn of the lateral ventricle , csf was well drained . even though freehand insertion of an evd using superficial anatomical landmarks is the most common method practiced by young neurosurgical trainees , the catheter tip locations have been reported to be unsatisfactory . evd catheter placement has been reported to be 39.9 - 84% in the ipsilateral frontal horn4567 ) , 2.7 - 12.4% in the contralateral frontal horn457 ) , 18% in the lateral ventricle body7 ) , 1.8 - 10.4% in the subarachnoid space57 ) , approximately 10% in the brain parenchyme57 ) , 1.8 - 22.4% in the extraventricular space45 ) , and 8.2 - 19.5% in the third ventricle457 ) .
these authors recommended using neuronavigation , ultrasonography , or other guidance techniques to position the catheter tip accurately in the frontal horn of the lateral ventricle . to increase the accuracy of ventriculostomy at kocher 's point , the ghajar guide was designed to direct a catheter along a course that lies at a right angle to the cranial surface . this device is rigid and consists of three equal - length standards that are applied to the patient 's scalp and a central tube at the apex of the formed pyramid for passage of the catheter . currently , navigation - guided evd may increase the accuracy of evd placement , but it requires a lot of time , room for the procedure , and frequent general anesthesia . in consideration of the emergent nature and cost effectiveness of evd , navigation - guided evd is not suitable . in the present study , the evd device could be used very conveniently and quickly . when performing evd using this device , the evd catheter tip may not reach far enough into the frontal horn , due to slight backward movement of the catheter while removing the device after ventricular puncture or due to a slight midline shift . in this study , 3 patients had a catheter tip located in the ventricular wall of the frontal horn . however , in all patients , csf was well drained and the catheter functioned well up to removal of the evd . there was no problem with csf drainage through the evd catheter because the ventricular puncture had already been accomplished . in addition , the accuracy of the direction of the catheter tip toward the frontal horn was 100% . if the surgeon uses this device , an extraventricular location of the catheter tip should not occur . if this device is used in a patient with slight midline shifting , the evd catheter tip can be located in the contralateral frontal horn . the evd catheter seems to be suitably positioned at a depth of 5.5 to 6 cm from the dura mater . accurate placement of the ventricular catheter tip is also very important , particularly from the viewpoint of intraventricular thrombolytic therapy in ivh and the direct conversion of an evd to a ventriculoperitoneal shunt . if neurosurgical residents use this device several times during their training , the accuracy of evd by the freehand technique may also improve because they become familiar with the direction of ventricular puncture . this device for evd provides an accurate position of the catheter tip safely and easily .
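the placement counts reported in the results above ( 52 , 2 and 3 of 57 evds ) convert directly to the percentages quoted later in the abstract ; the snippet below recomputes them , with rounding to one decimal place assumed ( 3/57 rounds to 5.3% here versus the 5.2% quoted ) .

```python
# recompute the catheter tip placement rates from the counts in the results;
# only the rounding convention is assumed.
placements = {"frontal horn": 52, "third ventricle": 2, "wall of the frontal horn": 3}
total = sum(placements.values())  # 57 EVDs performed in 52 patients
for site, n in placements.items():
    print(f"{site}: {n}/{total} = {100 * n / total:.1f}%")
print(f"direction toward the frontal horn: {total}/{total} = 100%")
```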
to introduce a new device for catheter placement of an external ventricular drain ( evd ) of cerebrospinal fluid ( csf ) . this device was composed of three portions , t - shaped main body , rectangular pillar having a central hole to insert a catheter and an arm pointing the tragus . the main body has a role to direct a ventricular catheter toward the right or left inner canthus and has a shallow longitudinal opening to connect the rectangular pillar . the arm pointing the tragus is controlled by back and forth movement and turn of the pillar attached to the main body . between april 2012 and december 2014 , 57 emergency evds were performed in 52 patients using this device in the operating room . catheter tip located in the frontal horn in 52 ( 91.2% ) , 3rd ventricle in 2 ( 3.5% ) and in the wall of the frontal horn of the lateral ventricle in 3 evds ( 5.2% ) . small hemorrhage along to catheter tract occurred in 1 evd . csf was well drained through the all evd catheters . the accuracy of the catheter position and direction using this device were 91% and 100% , respectively . this device for evd guides to provide an accurate position of catheter tip safely and easily .
INTRODUCTION MATERIALS AND METHODS EVD device Patients for EVD RESULTS DISCUSSION CONCLUSION
PMC3099110
the carious process usually progresses as a series of exacerbations and remissions that are characterized by periods of high production of acid that are responsible for the dissolution of the hard tissues of the tooth . if allowed to proceed untreated , it results in the progressive destruction of the tooth and eventual infection of the dental pulp . carious dentin has been identified by two layers of soft dentin , the outer carious layer is infected unremineralizable with irreversible deteriorated collagen fibers , with no odontoblastic processes , insensitive and therefore , should be removed . the inner carious layer is uninfected , remineralizable with reversibly denatured collagen fibers , alive with living odontoblastic processes , sensitive , and so should be preserved . caries detector dyes have been developed to further help the diagnosis and removal of dental caries , by differentiating between infected and affected dentin . the dye stains only the infected outer carious dentin.[24 ] dye usage allows dentists to perform an ideal cavity preparation for adhesive restorations . to ensure that all carious dentin has been removed , use of dye is indicated as the last step in tooth preparation . the bonding mechanism for current adhesive agents is based on the acid removal of the smear layer and demineralization of the underlying dentin , which leaves an exposed collagen network . the application of hydrophilic primers followed by the adhesive , which encapsulate this collagen network and form a resin impregnated layer or hybrid layer . few reports are available in literatures regarding the effect of caries detection dye on the bond strength of sound and carious affected dentin . the present study has been designed to evaluate the influence of caries detection dye on the in - vitro tensile bond strength of adhesive materials to sound and carious affected dentin . materials along with their composition used in the present study are summarized in table 1 . materials and their composition used in the study forty freshly extracted ( both carious and non - carious ) human mandibular molar teeth were selected for this study . the samples were stored in normal saline at room temperature until they were subjected to the experimental procedure . for carious affected dentin , twenty samples with coronal caries extending approximately halfway through the dentin were used in this study . the buccal carious surface was ground parallel to the long axis of the tooth to expose a flat surface of normal dentin surrounding the carious lesion . the buccal enamel was grinded with the help of carborundum disc , made smoothened by the sand paper ( silicon carbide , 220 - 600 grit ) and washed copiously with distilled water . to obtain carious affected dentin , grinding was performed using combined criteria of visual examination and staining with caries detector dye ( kurary , japan ) as described that is the dentin was hard to an explorer and no longer stained bright red with caries detector dye [ figure 1 ] . flow chart showing distribution of samples into groups the samples in group - a ( n=20 , control group ) were without application of caries detection dye on both sound and carious affected dentin surfaces . the samples in group - b ( n=20 , experimental group ) were with application of caries detection dye on sound and carious affected dentin surfaces . 
the control and experimental groups were further divided into two subgroups , a1 and a2 and b1 and b2 , with 10 samples in each as shown in the flowchart . the buccal surface of each sample was exposed from the acrylic resin block . in the control subgroups , dentin surfaces were etched with 37% phosphoric acid gel for 15 sec , while in the experimental subgroups all samples were etched after application of the caries detection dye . after etching , the surface was rinsed with water spray and dried , leaving a moist dentin surface for application of the bonding agent . the adhesive resin ( single bond ) was applied in a single layer on the sound and carious affected dentin surfaces as per the manufacturer 's instructions and photocured for 10 sec using a qth light source ( 3 m curing light 2500 ) . a plastic cylindrical mould with an internal diameter of 3 mm and a length of 4 mm was placed atop the bonded surface , with a flexible orthodontic wire to be used during the testing procedure . the composite resin was built up in increments and each layer was cured for 20 sec . after complete curing , the plastic mould was easily removed with the help of tweezers . the debonding procedure was performed in tension on an instron universal testing machine at a crosshead speed of 0.5 mm / min . after testing , the fracture mode of each specimen was determined visually under 5x magnification .
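the bond strength values reported in mpa in the next section follow from dividing the peak debonding load by the 3 mm diameter bonded area described above . the sketch below shows that conversion ; the example load is a hypothetical number , not a measurement from this study .

```python
# convert a peak debonding load to tensile bond strength in MPa using the 3 mm
# diameter bonded area; the example load is hypothetical, not data from this study.
import math

BOND_DIAMETER_MM = 3.0
bond_area_mm2 = math.pi * (BOND_DIAMETER_MM / 2) ** 2  # about 7.07 mm^2

def bond_strength_mpa(peak_load_n):
    """1 N/mm^2 equals 1 MPa, so load divided by area gives MPa directly."""
    return peak_load_n / bond_area_mm2

print(round(bond_strength_mpa(peak_load_n=50.0), 2))  # hypothetical 50 N -> about 7.07 MPa
```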
tensile bond strength of each group was calculated in mpa . tensile strength was significantly higher in sound dentine than in carious affected dentine in the control subgroups as well as the experimental subgroups [ table 3 ] . ( tables 2 and 3 : the mean value of tensile strength within the control and experimental groups , and the anova table for tensile strength in subgroups . ) since f is highly significant , there are significant differences in tensile strength with and without application of dye in the experimental and control groups . there are significant differences between the control subgroups ( a1 vs a2 ) and experimental subgroups ( b1 vs b2 ) in tensile strength values ( p < .001 ) [ table 4 ] . ( table 4 : comparison of tensile strength between control ( without application of caries detection dye ) and experimental subgroups ( with application of caries detection dye ) . ) caries detector dyes have proven to be useful in the identification and removal of carious dentin . these agents , made from basic fuchsin in a propylene glycol base , reliably stain only the dentin that is infected with bacteria and irreversibly demineralized , without staining the affected dentin . therefore , the presence of stain reliably determines the part of dentin to be removed . early formulations of the caries detector included a 5% basic fuchsin solution in propylene glycol as a solvent ; however , it was replaced with 1% acid red 52 solution in the same solvent as a substitute dye because fuchsin is believed to be carcinogenic . studies demonstrated that dentin containing less than 10000 cfu / mg was normally not disclosed by the fuchsin dye , whereas counts of greater than 550,000 cfu / mg dentin were readily stained . this suggests that the dye can be used to approximate the bacterial load of the dentinal surface , because a certain bacterial mass apparently needs to be present before the dye is absorbed by the dentin . the mechanism by which caries disclosing agents selectively stain only carious , irreversibly demineralized dentin has been determined . both basic fuchsin and acid red stain the collagen fibers exposed by the bacterially caused dentin demineralization process . the results of this study show that the mean value of tensile bond strength of single bond was higher in the control subgroups than the experimental subgroups , which can be explained by the fact that caries - affected dentin contains some substances that interfere with free radical generation or propagation , leading to improper polymerization of resins in such dentin . the peritubular dentin matrix of caries - affected dentin , which takes up much more toluidine blue stain and exhibits more intense metachromasia than normal peritubular dentin , suggests the presence of mucopolysaccharides or glycoproteins . these molecules may interfere with resin wetting of fine porosities within both intertubular and peritubular dentin . fusayama et al observed affected dentin to have turbid , transparent and subtransparent zones .
there is limited information on the structure of these zones as well as conflicting evidences about their properties . general perception is that transparent dentin is sclerotic and hypermineralized due to tubular occlusion which might act as a barrier to penetration of primers and bonding agents . although the adhesive resin may have followed the primer , it may not have copolymerized well with the primer . thus the adhesion of resins to caries - affected dentin may be inferior to that of normal dentin , due to weaker collagen and/or weaker resin even though most of the tubules in such dentin are filled with mineral deposits . these intratubular crystals are not well - packed and are softer than well - packed apatite even through they are more acid - resistant.[1015 ] in relation to the influence of dyes on the adhesion of filtekz250 , the decrease in the bond strength may be due to dye solution remaining in sound and affected dentin as mentioned in other studies . single bond is an adhesive that needs to have close contact with the dentin substrate to produce the desired bond strength . dye remaining trapped in dentin may adversely affect the wetting of dentin by materials , thereby decreasing micromechanical retention of these materials . it was observed that despite the application on sound dentin and carious affected dentin , caries - detecting dyes , even after being rinsed and acid etched , were not completely removed , as evidenced by some samples with sound tissue remaining lightly colored which might have influenced the results . within the limitation of the present study , it is concluded that the tensile bond strength was higher in sound and carious affected dentin in both control and experimental group without application of caries detection dyes than that with the application of caries detection dyes .
objectives : the objective of this study was to evaluate the influence of caries detection dye on the in - vitro tensile bond strength of adhesive materials to sound and carious affected dentin.materials and methods : forty healthy and carious human molars were ground to expose superficial sound dentin and carious affected dentin . caries detector dye was applied to sound and carious affected dentin and rinsed . subsequently the dentin was etched with 37% phosphoric acid and rinsed leaving a moist dentin surface . the adhesive ( single bond ) was applied in single layers and light cured . a posterior composite ( filtek z 250 ) were used to prepare the bond strength specimens with a 3 mm in diameter bonding area . control and experimental groups were made with and without application of dye respectively . each group includes both sound and carious affected dentin . after 24 hour immersion in distilled water , tensile bond strength ( mpa ) was measured using an instron testing machine.results:analysis of variance ( anova ) was used to evaluate the data . the tensile bond strength were significantly less in experimental subgroup than control subgroups.conclusion:the tensile bond strengths were higher in sound and carious affected dentin without application of caries detection dyes .
INTRODUCTION MATERIALS AND METHODS Experimental groups RESULTS DISCUSSION CONCLUSION
PMC3015997
we report a case of left adrenal schwannoma in a 62-year - old man , incidentally discovered on an abdominal computed tomography . on admission , no remarkable findings were recognized in the patient 's blood and urine examination , including adrenal function . macroscopically , the tumor ( 45 mm 30 mm , 60 g ) arose from the medulla of the adrenal gland with a clear border distinguishing it from surrounding tissues . although an increasing number of adrenal incidentaloma have been identified with the recent advances in imaging techniques , only a few cases of schwannoma of the adrenal gland have been reported . we reviewed the cases reported previously in an attempt to reveal the characteristic features of this rare disease . schwannoma is a benign neurogenic tumor originating from schwann cells , which also produce the myelin sheath that covers peripheral nerves . however , schwannoma arising from the adrenal gland is an extremely rare entity ; only 11 cases have been reported in the literature . herein , we report a case of adrenal schwannoma , successfully treated with laparoscopic adrenalectomy and review the cases previously reported in an attempt to clarify the characteristic features of this rare disease . a 62-year - old man was referred to our hospital for further examination and treatment of his incidentaloma of the left adrenal gland . a local physician diagnosed the patient with mild liver dysfunction , and the patient had undergone an abdominal computed tomography as part of his workup . a round mass with a maximum diameter of 40 mm was recognized on the left adrenal gland . on admission , no physical features were present to suggest cushing 's syndrome or von recklinghausen 's disease . magnetic resonance imaging ( mri ) revealed a round mass 40 mm in diameter in the patient 's left adrenal gland . the mass had a clear border setting it off from surrounding structures , and a portion of the mass had a lobular surface . the mass had low - signal intensity on a t1-weighted image and demonstrated slightly heterogeneous high - signal intensity on a t2-weighted image ( figure 1 ) . endocrinological examinations revealed no remarkable findings to suggest a functioning adrenal tumor ( table 1 ) . although acth was slightly elevated , it was thought to be nonspecific , because no alterations in either the morning level or the daily rhythm of cortisol secretion was observed . the diagnosis was made of a nonfunctioning left adrenal tumor . because of the size and partly irregular shape of the mass , a laparoscopic transabdominal left adrenalectomy was performed to rule out a malignant tumor . the mass registered low - signal intensity on the t1-weighted image ( a ) , and a slightly heterogeneously high - signal intensity on the t2-weighted image ( b ) . endocrinological data on admission while under general anesthesia , the patient was placed in a right hemi - lateral position with a pillow under the right lower limb to lift up the left flank . an 11-mm trocar for a laparoscope was introduced by the open method at the level of 5 cm cranial to the navel on the left midclavicular line . after investigating the abdominal cavity to confirm no abnormal tumors , ascites , or adhesions , a 12-mm trocar for a vessel sealing system and 10-mm trocar for forceps were introduced under video vision at the left subcostal level on the anterior axillary line and on the median line , respectively . 
the spleen was mobilized by cutting the lieno - renal ligament , and the mass was easily found between the pancreas tail and the left kidney , surrounded by fatty tissue ( figure 2 ) . the tumor with the adrenal gland was carefully isolated with sharp or blunt dissection , with surrounding fatty tissue , by using a vessel - sealing system . after the left adrenal vein was clipped and cut , preserving the left subphrenic vein , the isolated tumor was enclosed within an endoscopic pouch and was extracted via the extended wound of the 12-mm trocar . a clearly bordered round , firm mass was found within the left adrenal gland . the operation time was 86 minutes , and the blood loss was 100 g. the 45 mm × 30 mm × 30 mm tumor was observed to have developed from the medulla of the adrenal gland and weighed 60 g ( figure 3 ) . histologically , the tumor consisted of spindle cells without atypia or mitosis . a thin , fibrous area indistinctly separated the tumor from the surrounding nonatrophic adrenal cortex . immunohistochemical analysis revealed that the tumor cells were uniformly positive for s-100 , and partly positive for myelin , but were negative for c - kit , smooth muscle actin , cd34 , and desmin . abdominal ultrasonography confirmed no recurrent disease , and the patient is in good health without disease 36 months after surgery . ( figure 2 : the tumor ( arrows ) was removed with the left adrenal gland and surrounding fatty tissues . figure 3 : macroscopic view of the tumor that arose from the adrenal medulla , compressing the normally developed cortex . ) the majority of the tumors originating in the adrenal medulla are pheochromocytoma , neuroblastoma , or ganglioneuroma . schwannoma of the adrenal gland is rarely encountered , although it similarly originates from the cells of the neural crest . the origin of the adrenal schwannoma is considered to be either of 2 myelinized nerve systems innervating the adrenal medulla . one is the sympathetic nerve from the upper lumbar plexus , and the other is the phrenic or vagus nerve . all reported cases demonstrate that the schwannoma originated in the medulla ; no schwannoma has been reported to arise from the cortex . this is likely the consequence of the nerve in the adrenal cortex developing quite poorly compared with that in the medulla , with only a few thin nerves running along the vasculature . the diagnosis of schwannoma of the adrenal gland was difficult to make based on the imaging studies . typical mri findings report solid tumors with low - signal intensity on t1-weighted images and heterogeneously high - signal intensity on t2-weighted images , with occasional cystic components . however , imaging findings were nonspecific for the tumors with a nerve origin , such as neuroblastoma or ganglioneuroma . every reported case has been treated as a nonfunctioning incidentaloma of the adrenal gland , and the diagnosis of schwannoma was determined postoperatively through histological examination . conventionally , schwannomas are divided into 2 distinct subtypes , the hypercellular antoni a type and the hypocellular antoni b type . spindle cells are arranged in fascicles in areas of high cellularity with little stromal matrix in the antoni a type . nuclear - free zones called verocay bodies can also be found in between the regions of nuclear palisading . tumor cells are less densely found , forming a loose meshwork often accompanied by microcysts or myxoid changes in the antoni b type , which is considered to be a degenerated form of the antoni a type .
both subtypes can be found as a mixture in adrenal schwannoma ; 3 of 11 reported cases demonstrated both subtypes in a tumor . the differential diagnosis made using immunostaining should exclude gastrointestinal stromal tumor ( gist ) and leiomyoma . in contrast to schwannoma , gist is positive for c - kit , and leiomyoma is positive for smooth muscle actin . eleven cases of adrenal schwannoma have been previously reported in english and japanese ( table 2 ) as far as we can determine from a search of pubmed and japana centro revuo medicina . three patients experienced abdominal pain . the tumors in these 3 patients were large in size ( 75 mm , 90 mm , and 124 mm ) . in addition , a histological demonstration of a large cystic tumor and 2 small tumors incidentally found in autopsy samples have been reported . excluding these 5 cases , 7 adrenal schwannomas , including our case , were incidentally found by imaging analyses performed as a part of the workup for nonspecific symptoms . the size ranged from 28 mm to 90 mm ( average , 56 mm ) when identified , which was not considered large enough to cause clinical symptoms . they were demonstrated to be large solid masses ( 60 mm and 180 mm ) extending over the adrenal gland , occupying the retroperitoneum , and with unfavorable outcomes . however , no case of catecholamine - secreting benign adrenal schwannoma has been reported . as in our case , slight acth elevations were considered nonspecific because no evidence was present of alteration in cortisol secretion in either case . thus , features of adrenal schwannoma should be considered the same as those of so - called typical incidentaloma . ( table 2 : reported cases of adrenal schwannoma . nd = not described ; us = abdominal ultrasonography ; ugi = upper gastrointestinal series ; ct = computed tomography ; footnotes : died of myocardial infarction ; no abnormal findings with either 123i - metaiodobenzylguanidine nor 131i-6b - iodomethyl-19-nor - cholest-5(10)-en-3b - ol scintigraphy . ) incidentally observed asymptomatic tumors in the adrenal gland , incidentaloma , have been increasing , along with the advance in the field of imaging technology . these incidentalomas are most frequently cortical adenomas , followed by carcinoma , pheochromocytoma , and myelolipoma . when excluding these major tumors by hormonal and imaging analysis , adrenal schwannoma should be expected , although it is extremely rare . however , no well - established recommendation for surgical application has been developed for patients with tumors between 4 cm and 6 cm . the consensus suggests that criteria in addition to size should be considered in making the decision to monitor or proceed to adrenalectomy . relevant variables to perform laparoscopic adrenalectomy for incidentaloma have been proposed that include a low complication rate ( less than 3% ) and informed consent . the present case suggests that application of laparoscopic adrenalectomy to the incidentaloma could be a feasible choice , when performed by an experienced surgical team with appropriate informed consent , not only as a minimally invasive treatment , but also as a whole tumor biopsy to establish a correct diagnosis .
objective : we report a case of left adrenal schwannoma in a 62-year - old man , incidentally discovered on an abdominal computed tomography . it was successfully treated with laparoscopic adrenalectomy.methods:on admission , no remarkable findings were recognized in the patient 's blood and urine examination , including adrenal function . laparoscopic left adrenalectomy was performed with the diagnosis of a nonfunctioning adrenal tumor.results:macroscopically , the tumor ( 45 mm 30 mm , 60 g ) arose from the medulla of the adrenal gland with a clear border distinguishing it from surrounding tissues . histologically , the tumor consisted uniformly of spindle cells that were positive for s-100 . the cortex was compressed but showed no atrophy . the diagnosis of adrenal schwannoma was made.conclusion:although an increasing number of adrenal incidentaloma have been identified with the recent advances in imaging techniques , only a few cases of schwannoma of the adrenal gland have been reported . we reviewed the cases reported previously in an attempt to reveal the characteristic features of this rare disease .
Objective: Methods: Results: Conclusion: INTRODUCTION CASE REPORT DISCUSSION CONCLUSION
PMC4947409
commonly used scores include the visual analog scale ( vas ) ; short form ( sf)-8 , sf-12 , sf-36 ; and oswestry disability index ( odi ) . although these scores have been used extensively to demonstrate the safety and efficacy of surgical interventions , the main drawback of this type of assessment is the inherent subjectivity involved in patient scoring.2 3 indeed , patients ' own perception of their disability and symptoms , as well as variation in each patient 's own environment , can have an unaccounted influence on the subjective patient - based scores.3 4 furthermore , the issue of subjective outcome rating is increasingly important in the compensation arena . the studies that assessed patient outcome and fusion status following anterior cervical decompression fusion for radiculopathy note similar rates of fusion between compensable and noncompensable patients , but with a higher rate of poorer outcome based on subjective scoring analysis in the compensable group.5 although pain scores have been previously used as a surrogate indicator of the level of ambulatory impairment , several recent studies have demonstrated poor correlation between subjective pain scores and ambulatory capacity measured using treadmill and walking test assessments.6 advances in technology have led to the advent of accelerometer devices , which have the ability to accurately record physical activity data in real time , including number of steps and distance traveled . accelerometer devices are able to produce objective measurements of physical activity outcome,7 and thus may potentially give rise to an opportunity to assess postoperative recovery in terms of physical capacity using objective measurements.8 to the best of our knowledge , no study thus far has prospectively investigated objective physical activity measurements after lumbar spine surgery and tested whether these measurements correlate well with subjective functional scores . therefore , the aim of this study was to objectively measure functional outcome in patients who had lumbar spine surgery using quantitative physical activity measurements as derived from the accelerometers . approval was obtained from the south eastern sydney local health district , new south wales , australia ( hrec 13/090 ) . the patients were enrolled between 2013 and 2014 by the senior author ( r.j.m . ) , who performed all the surgical procedures . the inclusion criteria were patients who underwent lumbar spine surgery within this recruitment period , with indications including low back pain , radiculopathy , and claudication . the procedures performed included anterior lumbar interbody fusion ( alif ) , laminectomy , diskectomy , and posterior lumbar interbody fusion . the exclusion criteria included infection , osteoporosis , cancer , and any other comorbid conditions that were thought to substantially limit activity beyond symptoms of back pain , radiculopathy , and neurogenic claudication . patients who were not motivated to pursue the requirements of the study , those with poor memory or mental health issues , and those who would not consent to the study were excluded . the fitbit activity monitor is a small , lightweight , commercially available device that is clipped to the patient 's belt or waistband or can be worn in pant pockets . the fitbit was utilized as the battery life was 6 months , which enhanced compliance . patients were given a unique username and password ( consent obtained ) to access the data . 
the fitbit was synced to the patient 's smartphone , or to a computer if a smartphone was not available . based on its inbuilt algorithms and validation studies,9 the fitbit device is able to estimate the number of steps taken , flights of stairs climbed , distance walked , and calories expended . the accuracy of the fitbit has been verified by testing it while walking and running , and was found to be 98 ± 1% . the fitbit activity monitor was used to record average physical activity data preoperatively and postoperatively . for the present study , follow - up assessments were performed at 1 , 2 , and 3 months after surgery . the parameters recorded included number of steps taken , distance traveled , and calories burned , which were used to calculate the average number of steps per day , distance traveled per day , and calories burned per day at each follow - up . an example of such data recorded by the fitbit activity tracker and synced to a mobile device / computer is demonstrated in fig . 1 : a screenshot of prospective data collection indicating ( a ) average steps per day and ( b ) average distance traveled per day from a patient recovering from a two - level fusion over a 12-month period . the initial month of data shown is the average number of steps per day or distance traveled per day preoperatively . patient clinical outcomes were measured using self - reported scores , including the 10-point vas for back and leg pain , the odi , and the sf-12 , which included the mental component summary ( mcs ) and physical component summary ( pcs ) . the demographic variables , including age and gender , were summarized using descriptive statistics ( mean ± standard deviation or percentage ) . the pre- and postoperative parameters were compared with a two - tailed , paired - sample t test . a p value < 0.05 was considered significant . all statistical analyses were performed using spss software ( version 22.0 , ibm , armonk , new york , united states ) . the pearson correlation test was performed to determine whether there was a significant correlation between changes in physical activity parameters ( steps , distance , calories ) and changes in clinical outcome ( vas , odi , and sf-12 mcs and pcs scores ) . the pearson correlation was presented as an r value and a p value , where r signifies the strength of the correlation . the closer the value of r is to 1 or −1 , the stronger the correlation ; an r value close to 0 signifies almost negligible correlation . 
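these comparisons can be reproduced in outline with standard statistical routines . the following python sketch is illustrative only : the function calls ( scipy 's paired t test and pearson correlation ) are real , but the array values are hypothetical placeholders rather than study data .

```python
# Illustrative sketch of the statistical analysis described above:
# a two-tailed paired-sample t test for pre- vs postoperative values and a
# Pearson correlation between changes in activity and changes in outcome scores.
# The numeric arrays are hypothetical placeholders, not study data.
import numpy as np
from scipy import stats

pre_steps = np.array([5200, 4800, 6100, 3900, 5500], dtype=float)   # steps/day, preoperative
post_steps = np.array([8100, 7400, 9000, 6200, 8600], dtype=float)  # steps/day, 3-month follow-up
pre_vas = np.array([7.0, 8.0, 6.5, 7.5, 6.0])                       # VAS back pain, preoperative
post_vas = np.array([3.0, 2.5, 2.0, 4.0, 1.5])                      # VAS back pain, follow-up

# two-tailed paired-sample t test (p < 0.05 taken as significant)
t_stat, p_val = stats.ttest_rel(post_steps, pre_steps)

# Pearson correlation between change in steps/day and change in VAS back pain
delta_steps = post_steps - pre_steps
delta_vas = post_vas - pre_vas
r, p_corr = stats.pearsonr(delta_steps, delta_vas)

print(f"paired t test: t = {t_stat:.2f}, p = {p_val:.4f}")
print(f"pearson correlation: r = {r:.2f}, p = {p_corr:.4f}")
```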
twenty - eight patients completed the accelerometer physical activity and clinical follow - up period . two patients lost their fitbit ; their objective data were therefore incomplete , and they were excluded . the average age of the cohort was 42.60 ± 10.34 years , and 17 patients were men ( 60.7% ) . alif was performed in 7 patients ( 25% ) , laminectomy in 13 patients ( 46.4% ) , posterior lumbar interbody fusion in 2 patients ( 7.1% ) , and diskectomy in 6 patients ( 21.4% ) . the primary indications included low back pain ( n = 4 ) , radiculopathy ( n = 14 ) , and claudication ( n = 4 ) , with several patients having multiple indications . in the preoperative period , the mean number of steps taken per day was 5,255 ± 2,883 . 
following lumbar spine surgery , the number of steps per day at 1-month follow - up was 4,574 ± 2,186 , compared with 7,135 ± 3,112 at 2-month follow - up and 8,312 ± 4,218 at 3-month follow - up . there was a significant increase in the number of steps compared with preoperative status at 2-month follow - up ( 35.8% , p = 0.002 ) and 3-month follow - up ( 58.2% , p = 0.008 ; fig . 2 , which shows the change in the number of steps per day taken at follow - up ) . the mean distance traveled in the preoperative period was 3.8 ± 2.2 km / d , compared with 3.4 ± 1.7 km / d at 1-month follow - up . there was a significant 39.5% increase in distance traveled per day at 2-month follow - up to 5.3 ± 2.5 km / d ( p = 0.002 ) , and a 63% increase to 6.2 ± 3.6 km / d at 3-month follow - up ( p = 0.0004 ) . there was no difference in the number of steps taken preoperatively versus the early postoperative phase ( 1-month follow - up ) . the mean number of calories burned in the preoperative phase was 2,137 ± 481 per day , compared with 2,089 ± 401 per day at 1-month , 2,228 ± 490 per day at 2-month , and 2,592 ± 1,185 per day at 3-month follow - up ( fig . 4 , which shows the change in calories burned per day at follow - up ) . at the latest follow - up , there was a significant reduction in the vas back pain score from 7.0 ± 2.7 to 2.8 ± 1.9 ( p = 0.0002 ) . vas leg pain scores also decreased significantly postoperatively , from 6.3 ± 3.3 to 1.5 ± 0.7 ( p = 0.01 ) . after surgery and follow - up , there was a significant improvement in odi scores , which decreased from 46.0 ± 19.0 to 26.9 ± 17.7 ( p = 0.005 ) . although there was no change in the mcs component of the sf-12 scores ( p = 0.65 ) , a significant increase in the pcs component of the sf-12 scores from 33.0 ± 8.2 to 44.0 ± 8.7 ( p = 0.02 ) was noted . the preoperative and postoperative clinical outcomes following lumbar spine surgery are summarized in the accompanying figure : ( a ) visual analog scale ( vas ) back pain ; ( b ) vas leg pain ; ( c ) oswestry disability index ( odi ) ; ( d ) short form 12 ( sf-12 ) mental component summary ( mcs ) score ; ( e ) sf-12 physical component summary ( pcs ) score . the pearson correlation test was used to evaluate the contributions of physical performance to clinical and functional outcome . the analysis between improvement in the number of steps per day and change in vas back pain ( r = 0.446 , p = 0.316 ) , vas leg pain ( r = 0.472 , p = 0.285 ) , and pcs ( r = 0.058 ) scores showed no significant correlation . there was also no significant correlation between improved distance traveled per day and change in vas back pain ( r = 0.333 , p = 0.348 ) , vas leg pain ( r = 0.012 , p = 0.975 ) , and pcs ( r = 0.117 , p = 0.747 ) scores . the current prospective series demonstrates that : ( 1 ) accelerometry is a feasible method of measuring objective physical activity parameters with high patient compliance ; ( 2 ) there was a significant improvement in steps per day and distance traveled per day at follow - up following lumbar spine surgery ; and ( 3 ) although both subjective pain / functional scores and physical activity parameters improved with follow - up , the lack of correlation indicates the limited power of subjective scores to predict physical activity levels during recovery and follow - up from lumbar spine surgery . few researchers have studied the role of accelerometers for objective measurements of physical activity in patients with lumbar spinal pathology and in patients undergoing lumbar surgery . 
in one such study in patients with lumbar spinal stenosis by pryce et al,10 accelerometers were used to measure physical activity , including the number of calories consumed per kilogram per day , the intensity and duration of exercise , and ambulation via bout length . linear regression and adjusted models were then used to correlate these data with clinical outcome scores , including the odi and sf-36 , in 33 patients with lumbar spinal stenosis . the authors concluded that subjective measurements for pain and disability had limited ability to predict real - life physical performance in patients with lumbar spinal stenosis . this study is the first to our knowledge that is focused on patients undergoing lumbar spine surgery for pain , radiculopathy , or claudication . the physical activity parameters were very similar preoperatively versus the early postoperative phase ( 1-month follow - up ) , given that the patients were still recovering from their surgery and had reduced ambulation . however , beyond this temporal threshold , significant improvements in the number of steps taken and distance traveled were seen at 2-month and 3-month follow - up . this result was similarly reflected in the significant improvements in vas back and leg pain scores , odi , and the physical component of the sf-12 . similar to the study by pryce et al,10 no correlation was found between objective physical activity measurements and subjective functional clinical scores . thus , these results provide evidence that subjective patient - based scores are not adequate to predict real - life physical activity at follow - up . this trend suggests that pain is not the only factor responsible for physical activity impairment following lumbar surgery , and thus , data based only on patient self - scores should be interpreted with caution.1 rather , a holistic assessment of patient function and recovery following spinal surgery may be achieved with objective measurements , which can be obtained using accelerometers . overall , the present results suggest that the use of accelerometers to measure physical activity parameters over follow - up is feasible in patients having spine surgery . the use of objective physical activity measurements with the accelerometer may potentially help overcome several limitations of self - reported outcomes , including inaccurate documentation , the prevalence of overreporting activity levels , and the lack of standardization across different publications as to which scoring system to use.11 12 in contrast to the relatively labor - intensive surveys and their repetition at different follow - up periods , continuous accelerometer collection of data can be automated with inbuilt algorithms.8 13 14 however , objective physical activity measurements using accelerometers still require validation in a spine surgery population , which should be addressed in future prospective trials . the introduction of accelerometer - based objective physical activity measurements provides a whole new platform for research opportunities . potential future studies may explore differences between surgical approaches in terms of postoperative physical activity recovery . for example , objective physical activity measurements may have a potential role in the assessment of minimally invasive lumbar fusion surgery versus traditional open surgery , whereas the present studies in the literature based on self - scored clinical outcomes have been met with resistance and controversy . 
in addition , accelerometers may potentially be useful in planning and evaluating physical activity interventions as part of follow - up physiotherapy . this study is the first to evaluate physical activity in a spine surgery cohort using accelerometers . the strengths also include the prospective nature of the assessment and the relatively long - term evaluation of physical activity , with prior studies using only a 7-day assessment.10 a high rate of compliance was found , demonstrating that the use of accelerometers for postoperative follow - up after lumbar spine surgery is feasible . this study also has several limitations . first , the prospective study cohort is heterogeneous , including patients with various procedures such as alif , diskectomy , and laminectomy . given the relatively small number of patients in this cohort , subgroup and multivariate analysis to compare outcome differences between these procedures was not feasible.15 in addition , the indications for lumbar spine surgery were varied and included radiculopathy , claudication , and low back pain . because of the small number of patients and the different pathologies surgically treated in multiple ways , we could not assess which pathology and which surgery would be best suited for this activity - measuring technique . recent studies have also suggested differences in accuracy among various physical activity tracking technologies , with smartphone applications as a potential alternative for obtaining objective measurements.16 future prospective studies should investigate the long - term outcomes in terms of objective physical activity measurements and how they compare and correlate with long - term clinical outcome based on subjective rating scores . future studies should also use a larger sample size and stratify outcomes according to surgery and indication . 
this study is the first to examine the role of objective physical activity measurement in a spine surgery cohort and demonstrates high compliance and a statistically significant improvement in physical activity in patients having lumbar decompression and lumbar fusion . however , there was no significant correlation between improvements in subjective clinical outcome scores and changes in physical activity measurements at follow - up . the validity of objective physical activity measurements should be assessed in future larger , prospective studies .
study design : prospective observational study . objective : patient - based subjective ratings of symptoms and function have traditionally been used to gauge the success and extent of recovery following spine surgery . the main drawback of this type of assessment is the inherent subjectivity involved in patient scoring . we aimed to objectively measure functional outcome in patients having lumbar spine surgery using quantitative physical activity measurements derived from accelerometers . methods : a prospective study of 30 patients undergoing spine surgery was conducted with subjective outcome scores ( visual analog scale [ vas ] , oswestry disability index [ odi ] , and short form 12 [ sf-12 ] ) recorded ; patients were given a fitbit accelerometer ( fitbit inc . , san francisco , california , united states ) at least 7 days in advance of surgery to record physical activity ( step count , distance traveled , calories burned ) per day . following surgery , postoperative activity levels were reported at 1- , 2- , and 3-month follow - up . results : of the 28 compliant patients who completed the full trial period , mean steps taken per day increased 58.2% ( p = 0.008 ) and mean distance traveled per day increased 63% ( p = 0.0004 ) at 3-month follow - up . significant improvements were noted for mean changes in vas back pain , vas leg pain , odi , and sf-12 physical component summary ( pcs ) scores . there was no significant correlation between the improvement in steps or distance traveled per day and improvements in vas back or leg pain , odi , or pcs scores at follow - up . conclusions : high compliance and statistically significant improvement in physical activity were demonstrated in patients who had lumbar decompression and lumbar fusion . there was no significant correlation between improvements in subjective clinical outcome scores and changes in physical activity measurements at follow - up . limitations of the present study include its small sample size , and the validity of objective physical activity measurements should be assessed in future larger , prospective studies .
Introduction Methods Ethics Approval Patient Recruitment Accelerometry Self-Reported Outcomes Statistics Results Discussion Strengths of the Study Limitations Conclusions
PMC3983377
it is now widely recognized that many rna molecules are predisposed to forming complexes with proteins by fluctuating spontaneously through an ensemble of structural states . this mode of protein recognition is referred to as conformational capture . a description of the physical principles involved in forming rna - protein complexes via conformational capture requires a complete description of the dynamics of these structurally labile rna molecules , including a characterization of the long- and short - lived conformational states sampled by the rna . however , experimental characterization of transitory states is complicated by the fact that the rate of transition may be too fast to allow for a comprehensive catalogue of all states . thus , although progress has been made recently in experimentally isolating the partially folded states of proteins , the complete elucidation of protein or nucleic acid dynamics requires analytical and computational modeling to complement experimental observations . a common approach toward this goal is to fit a limited set of model parameters to experimental data that are sensitive to dynamics , such as nmr relaxation rates , line shapes , or residual dipolar couplings . this procedure involves guessing and checking , using physical constraints on possible motions of the labeled residue(s ) to guide the model - building process . however , this semianalytic approach becomes complicated when further degrees of freedom and new free parameters are required by the model to fit the data adequately . a purely computational approach to the description of molecular states requires an accurate potential energy function ( pef ) , followed by either molecular dynamics ( md ) or energy - minimization calculations . molecular dynamics simulations are able to generate dynamic trajectories of the molecule and , in principle , explore molecular parameter space if sufficient numbers of trajectories are available ; examples are found using the amber and charmm packages , among others , such as the work of lindorff - larsen et al . however , the extrapolation of dynamics in nucleic acids to time scales of the order of microseconds or longer , where many conformational changes are expected to take place , has only recently begun to be explored . energy - minimization techniques also rely on a well - validated energy function but involve the subsequent alteration of the relative conformations of parts of the molecule in an iterative manner to find the global energy minima . it is then possible to generate multiple sample structures more easily than for a complete md calculation . in the current manuscript , we utilize structures generated by energy - minimization techniques as a complement to md - based analyses of dynamics . the use of energy - minimized structures is facilitated by the availability of structures on the rosetta server . other servers , such as the mc - sym / mc - fold pipeline , are also available that allow the user to obtain all - atom rna models . we use the hiv-1 tar ( trans - activation response ) rna molecule as a model for a dynamics simulation based on a set of 500 low - energy models generated using the program farfar for the 29-nucleotide apical section of the rna ( figure 1 ) . the tar rna binds the viral regulatory protein tat , a critical transcription elongation factor essential for viral replication . the tat binding site surrounds the single - stranded , trinucleotide ( ucu ) bulge and is contained within the 29-nucleotide construct . 
the bulge region interlinking the two helical stems is the primary binding site for the tat protein and exhibits considerable flexibility , allowing the two helical regions to adopt a wide range of relative orientations . this flexibility is consistent with the conformational capture mechanism described above , where the free tar rna exchanges between multiple conformers , one or more of which are amenable to tat binding . tar rna also provides a common rna structural motif , where two helices are connected by a single - stranded bulge on one end and a backbone hinge at the other . other rnas exhibit similar structures , such as k - turns , or the hiv-1 rre ( rev - response element ) . it is therefore worthwhile to characterize the dynamics of such fundamental motifs , especially the large - scale motions ( as opposed to more localized motions ) of one helical domain relative to another . figure 1 shows the 29-nucleotide apical stem - loop of hiv-1 tar rna : ( a ) secondary structure ; ( b ) sample tertiary structure . the different submotifs and the component residues used in the analyses are color - coded : upper helix ( turquoise ) , lower helix ( red ) , and the single - stranded bulge ( green ) . earlier work in our group used solid - state nmr to identify intermediate - rate motions for the residue u38 in the upper helix . we more recently characterized the motions of u40 and u42 in the lower helix ( wei huang , unpublished data ) . these data indicate that the two helices move relative to each other at slow rates ( 10 to 10 s ) relative to the rotational diffusion rate of the entire molecule ( 10 s ) . these results motivated us to look at the distributions of the orientations of the upper helix relative to the lower helix among the set of lowest energy structures , as characterized by the euler angles of an upper helix - attached reference frame relative to a different frame attached to the lower helix . reduction to a set of three angles characterizes many features of the dynamical trajectory of this particular structural motif , while simplifying a very large - dimensional problem to a set of three essential coordinates . local base librations , i.e. , rotations of the bases around a base normal ( representing vibrations of the base around the equilibrium base - paired orientation ) or rotations of the base around the glycosidic bond ( for the single - stranded bases ) , are included in the simulations as well and provide atomistic detail regarding the motions of individual residues . here we extend our prior studies of nonrigid rotation and helix reorientation in tar rna to a full description of the concerted dynamics of the upper and lower helices and the bulged loop . our approach describes a dynamic trajectory based on an ensemble of structures energy - minimized with the rosetta program farfar . because farfar does not provide a boltzmann - weighted distribution of states ( and thus does not provide the entropy and free energy ) , our calculations rely on fitting phenomenological parameters to the data . to reduce the complexity of the problem , a predominant set of conformers was selected . we used experimental residual dipolar couplings ( rdcs ) obtained from partial alignment of the tar rna in one alignment medium to achieve this selection , and other groups have recently used similar filtering techniques . in addition to selecting a set of long - lived conformers along the stochastic trajectory , we acquire information regarding their relative populations . 
we complement the rdc filter with principal component analyses ( pcas ) based on a judiciously chosen set of backbone torsion angles , as has been done to establish conformational clustering and free energy landscapes of rnas and proteins ; such a pca can serve as a useful aid in setting up dynamics calculations . finally , we use the outcome of that analysis to calculate solution spin - lattice relaxation times t1 and rotating - frame spin - lattice relaxation times t1ρ for 13c spins in nucleotides located in the upper and lower helices and in the bulge . the protocol would allow extensions to other observables as well , but the current work focuses on these to provide a clear proof of method . we direct the interested reader to more extensive work on other nmr relaxation parameters and their simulations . two approaches were used to treat the exchange between conformers . the first approach ( slow exchange or se model ) assumes an effectively infinite time scale of exchange ( i.e. , the conformational exchange is much slower than any other relevant motional time scale ) . the second approach ( general rate or gr model ) allows for an arbitrary time scale of exchange ( including one that overlaps with the rotational diffusion time scale ) . by combining complementary experimental and analytical techniques into a single framework , we have been able to construct a viable dynamic trajectory for the tar rna . figure 2 summarizes the protocol used in this manuscript for the simulation of domain dynamics in a molecule . we present here the solid - state nmr ( ssnmr)-derived models that served as motivations for the solution relaxation simulations . ssnmr studies of the dynamics of the uridine bases u38 , u23 , and u25 in tar were carried out using samples selectively deuterated at the 5- and 6-carbon base sites . u38 was chosen to represent the dynamics of the upper helical stem , whereas u23 and u25 are of interest on account of their positions in the single - stranded bulge . more recently , 5,6-2h labels were introduced at the u40 and u42 positions in tar , in correspondence with the lower helix ( u42 ) and an unstable base pair that closes the bulge region ( u40 ) ( wei huang , unpublished data ) . the data included line shapes as well as t1z and t1q relaxation times collected on samples hydrated in all cases to 16 water molecules per nucleotide to reproduce conditions where motions were shown to be solution - like . each of the upper helical and single - stranded sites investigated had characteristically distinct spectral features . within the lower helix , u40 and u42 showed results similar to each other , but distinct from the upper helix or single - stranded sites . motional models generated to fit the data included a slower base motion consisting of a jump between two equally populated sites , superposed on a faster motion occurring around the normal to the base plane for the helical residues and around the glycosidic bond for the bulge residues . the analyses of the spectra recorded for the upper and lower helical sites were done independently , resulting in two different sets of parameters . the u38 base was modeled as undergoing a two - site 4° jump process around the base normal at a rate of 2 10 s in addition to a two - equivalent - site conformational exchange process , where the upper helix underwent a 9° bend and a 15° twist at a rate of 10 s relative to a crystal - fixed frame . a similar model could fit the data for the lower helix bases u40 and u42 as well , resulting in a bending motion of 0° to 18° and a twisting motion of 18° to 25° of the lower helix , but at slower rates on the order of 10 s. 
by analyzing relaxation data , we observed small - amplitude ( 6° to 9° ) local motions of the base at a rate of 10 s for both u40 and u42 . the u23 data were fit by significantly different models , involving a local two - site jump of 11° about the glycosidic bond at a rate of 10 s , and a 24° hop of the base at a rate of 10 s , whereas u25 was modeled as experiencing a 30° jump at 6 10 s in addition to a much slower twisting of the base ( 6 10 s ) with a large amplitude of 40° . we utilized the same two - site jump models for the local base motions of both the helical and bulge residues , under the rationale that solid - state sample conditions should be able to replicate solution conditions , at least for the small - amplitude local motions , at the hydration levels of our studies . we also used the time scale of the local base jumps as a starting point for simulations of the solution conditions . finally , we used the observation that conformational exchange motions in solution occur on a time scale similar to that in solid - state samples . this motivated the use of a slow - exchange formalism for part of the relaxation simulations , which assumes an exchange process much slower than all other motional rates . we did , however , also consider the more general case of an arbitrary rate of conformational exchange . the torsion angles that were altered in the generation of the structures used here were those of the residues in the bulge and the bulge - adjacent base pairs of the two helical domains . the 500 lowest energy structures represented a distribution in energy of about 13 rosetta units , where 1 rosetta unit is approximately 1 kbt ( rhiju das , private communication ) . to reduce the dimensionality of the problem , we assumed that the simple helix - bulge - helix motif can be represented by a reduced set of relative helical orientation parameters . a set of three euler angles transforming between the two helices is taken to be sufficient to discriminate between helical conformations ( figure 3 ) if the helices behave as rigid units . figure 3 illustrates the relation between the euler angles determined from the atomic coordinates and the associated domain motions . we have used experimental residual dipolar couplings ( rdcs ) for several bonds along the molecule . because rdcs are long - time weighted averages over all conformational states of the molecule , they potentially provide a means of extracting both the best - fit conformers and their relative populations . the structures that were selected by this rdc - filtering process , along with the best - fit populations , were subsequently used to simulate the solution relaxation times . we incorporated the local base motions obtained by fitting the ssnmr data into the rdc - selected structures and calculated the solution relaxation times using previously published methods . allowing a small amount of variation in the local amplitudes and rates of motion , and in the rates of conformational exchange between the selected structures , the solution relaxation times for the 17 residues considered ( 7 in the lower helix , 3 in the bulge , and 7 in the upper helix ) could be calculated with this method . the relative orientations of the upper and lower helices were quantified by considering the orientation of the normal vector of the u38 base relative to a frame defined by a z - axis aligned with the lower helical axis . 
the method of evaluating this relative orientation and the subsequent binning of structures within this scheme is as follows ( an illustrative sketch of these steps is given after the list ) :
( a ) define the upper and lower helical axes for all structures using the program 3dna . the upper helical axis is taken to be the average local helical axis of the a27-u38::g28-c37 and g28-c37::c29-g36 dinucleotide steps , where the base pairs flanking the bulge were excluded due to possible distortions from ideal a - form helical structure . the lower helical axis is calculated similarly as the average over the c19-g43::a20-u42 and a20-u42::g21-c41 dinucleotide steps .
( b ) define the lower helix coordinate frame ( lhf ) by choosing the lower helical axis ( calculated above ) as the z - axis and the perpendicular from the z - axis to the g43 c8 atom as the y - axis . this choice of the y - axis was made because the 500 structures did not differ in the orientations of the first few base pairs ( including c19-g43 ) , so the y - axis would be the same across all structures . however , the specific choice of the g43 c8 atom for this purpose was arbitrary .
( c ) calculate the first euler angle , αh , for each structure , defined as the angle between the projection of the upper helical axis onto the xy - plane of the lhf and the x - axis of the lhf .
( d ) calculate the second euler angle , βh , defined as the angle between the upper helical axis and the lower helical axis .
( e ) the third euler angle , γh , is then defined by the orientation of the normal of the u38 base ( the vector perpendicular to the c4c2 and c6c2 bonds ) about the upper helical axis . extracting this information from the structures requires first removing the αh and βh dependence by rotating the original u38 base - normal vector vu38norm about the fixed lhf axes as follows : v′u38norm = rylhf(−βh) rzlhf(−αh) vu38norm . the resultant vectors are distributed around the lhf z - axis as a function of their γh angles . the euler angles described above are related to the domain motions as shown in figure 3 .
( f ) bin the 500 structures as a function of the euler angle set { αh , βh , γh } . the bins were chosen in 10° increments for the αh ( 0 to 360° ) and βh ( 0 to 180° ) angles , resulting in 36 and 18 bins , respectively . instead of binning the γh angle in degree increments , we fixed the number of γh bins to 5 because of an observed correlation between the αh and γh angles , which results in a shift in the values of γh for every αh bin , as well as the restriction of the γh values to only a portion of the full phase space . thus , trying to bin all possible γh angles would have unnecessarily increased the computational time . 
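as a concrete illustration of steps ( c ) , ( d ) , and ( f ) , the following python sketch computes the bend and azimuthal angles from two helix - axis vectors and assigns them to 10° bins . it is a minimal sketch under stated assumptions : the helix axes and the y - axis reference direction are taken to be precomputed ( e.g. , parsed from 3dna output ) , and all function and variable names are ours , not those of the original analysis code .

```python
# Minimal sketch of the interhelical angle calculation and binning from steps (c), (d), (f).
# Helix-axis vectors are assumed to be precomputed (e.g., parsed from 3DNA output);
# all names here are illustrative, not taken from the original analysis.
import numpy as np

def unit(v):
    """Return v normalized to unit length."""
    return v / np.linalg.norm(v)

def interhelical_angles(lower_axis, upper_axis, y_ref):
    """Return (alpha, beta) in degrees for one structure.

    lower_axis : lower-helix axis, used as the z-axis of the lower helix frame (LHF)
    upper_axis : upper-helix axis
    y_ref      : vector defining the LHF y-axis (e.g., the direction from the
                 z-axis toward a reference atom such as G43 C8)
    """
    z = unit(lower_axis)
    y = unit(y_ref - np.dot(y_ref, z) * z)   # component of y_ref perpendicular to z
    x = np.cross(y, z)                       # completes a right-handed LHF

    u = unit(upper_axis)
    beta = np.degrees(np.arccos(np.clip(np.dot(u, z), -1.0, 1.0)))  # interhelical bend

    proj = u - np.dot(u, z) * z              # projection of upper axis onto LHF xy-plane
    alpha = np.degrees(np.arctan2(np.dot(proj, y), np.dot(proj, x))) % 360.0
    return alpha, beta

def bin_indices(alpha, beta, width=10.0):
    """Assign alpha (0-360 deg) and beta (0-180 deg) to bins of the given width."""
    return int(alpha // width), int(beta // width)

# example with made-up axis vectors for a single structure
alpha, beta = interhelical_angles(np.array([0.0, 0.0, 1.0]),
                                  np.array([0.5, 0.2, 0.84]),
                                  np.array([1.0, 1.0, 0.0]))
print(alpha, beta, bin_indices(alpha, beta))
```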
to select a subset of structures for multisite jump simulations from the starting set of 500 energy - minimized structures generated by farfar , we only consider the rdc data from a peg / hexanol mixture , because this uncharged medium aligns the charged rna only through steric hindrance of the overall rotation . this situation does not require a detailed characterization of the rna charge density , as would be required for the simulation of rdcs measured in charged alignment media . extension to charged media data may be considered in the future . the purely steric version of pales was used to calculate the eigenvalues and eigenvectors of the saupe alignment tensor of the molecule by sampling multiple allowed orientations in the presence of a flat ( infinitely large ) obstruction . the alignment tensor eigenvalues are then used in conjunction with directional information for a particular bond relative to the alignment tensor principal axis frame ( pas ) . for the purposes of simulating the experimental rdcs , the only motions to be considered are then the exchanges between distinct conformers . to include dynamic sampling of multiple conformations , we use the population - weighted average of the rdcs of the individual conformers for the rdc of a bond between two spins : rdctotal = ∑i pi rdci , where the sum runs over the nconformers conformers , pi is the population of conformer i , and rdci is the rdc of that bond in conformer i . the residues considered for this study were c19 , a20 , g21 , a22 , u40 , c41 , u42 , and g43 from the lower helical stem , u23 , c24 , and u25 from the bulge , and g26 , a27 , g28 , c29 , g36 , c37 , u38 , and c39 from the upper helical stem . the bond types included the c6h6 bond ( pyrimidines ) , the c8h8 bond ( purines ) , and the c5c6 bond ( pyrimidines ) from the bases , the c1′n1 ( pyrimidines ) and c1′n9 ( purines ) glycosidic bonds , the c1′h1′ and c4′h4′ bonds from the furanose rings , and the c5′h5′ and c5′h5″ bonds from the backbone . furthermore , the rdcs for the bulge residues were considered separately , and not in the same simulations as the helical residues . 
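the ensemble averaging just described can be written out in a few lines of python ; this is a minimal sketch in which the per - conformer rdcs ( which would in practice come from pales calculations on each structure ) and the populations are placeholder values .

```python
# Sketch of the population-weighted ensemble average of RDCs over conformers,
# rdc_total = sum_i p_i * rdc_i, evaluated for every bond included in the fit.
# Per-conformer RDCs would come from PALES runs on the individual structures;
# the numbers below are placeholders for illustration only.
import numpy as np

# rows = conformers, columns = bonds (e.g., C6-H6, C8-H8, C1'-H1', ...)
rdc_per_conformer = np.array([
    [12.3, -4.1, 8.7],
    [10.9, -2.8, 7.5],
    [15.2, -5.6, 9.9],
])
populations = np.array([0.5, 0.3, 0.2])        # relative populations, must sum to 1
assert np.isclose(populations.sum(), 1.0)

rdc_total = populations @ rdc_per_conformer    # weighted average, one value per bond

# unweighted chi-square against experimental values (errors effectively set to 1 Hz)
rdc_exp = np.array([11.8, -3.5, 8.9])
chi2 = float(np.sum((rdc_total - rdc_exp) ** 2))
print(rdc_total, chi2)
```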
the bulge rdcs were treated separately because the bulge is likely to be significantly more mobile than the helices and will sample a significantly larger number of configurations , requiring a different set of simulation conditions . effective bond lengths of the species in question are required in the final pales calculation . the values we chose for the bond lengths are the aliphatic c - h bond length , rch , aliph = 1.1 å , the aromatic c - h bond length , rch , arom = 1.09 å ( average of ying et al . and allen et al . ) , the aromatic c - c bond length , rcc , arom = 1.4 å , the glycosidic c - n bond length for the cytosines , rcn , glyccyt = 1.47 å , and the glycosidic c - n bond length for the remaining base types , rcn , glycrest = 1.48 å . to gain a geometric perspective on the best - fit structures from the rdc comparison , we binned the simulated couplings for each structure according to the { αh , βh , γh } angles determined for the 500 structures . the rdcs within each bin were then uniformly averaged to produce a single bin - rdc for each bond type and residue , rather than retaining the values for individual structures . this was done because , although the structures within each bin have similar helical conformations , they may differ in the orientations of the bonds of certain residues relative to the large - scale conformations . by averaging over these variations in the bonds , we effectively included in the simulations the small - amplitude thermal fluctuations of the atomic bond orientations . to select the best - fit set of structures , we started with an arbitrary initial set of n structures and allowed the choice of the ( n + 1)th angular bin to float while attempting to optimize the total χ2 as well as the pearson 's correlation coefficient between the simulated and experimental rdcs . in addition , we varied the relative weights of the ( n + 1 ) bins . once the best - fit parameters were obtained for this first iteration , the ( n + 1)th bin so obtained replaced the lowest probability structure from the remaining n , and the search was repeated for a second iteration and beyond . to make the final choice of n , we started with n = 2 and , after minimizing the χ2 for a choice of two structures , we added a third and repeated the process . proceeding thus , we found that n = 5 structures ( i.e. , 5 bins , which also happened to have only one structure in each of them ) were sufficient to produce the best fit to the rdc data , with no improvement after the fifth iteration . in addition , as a separate , independent check of the results of the previous procedure and to allow the number of bins to be varied more easily , we generated a markov - chain monte carlo ( mcmc ) simulation to search through the bins and populations for a best fit to the rdcs . the number of parameters for an n - conformer search was 2n − 1 ( n bin choices + n − 1 probabilities to be floated ) . the markov - chain monte carlo method did not yield better results than the iterative technique . to determine the number of exchanging conformers required to describe the dynamics in tar rna , and to corroborate the results of our rdc filter , we performed a principal component analysis ( pca ) , following procedures applied to molecular ensembles of proteins and rnas . the covariance matrix σij = ⟨ ( qi − ⟨qi⟩ ) ( qj − ⟨qj⟩ ) ⟩ was calculated as described by mu et al . , i.e. 
, by proposing the following variable set { q2j−1 , q2j } : q2j−1 = cos( θj ) , q2j = sin( θj ) , j = 1 , ... , ntorsion ( eq 1 ) , where θj is the jth torsion angle of interest and ntorsion is the number of torsion angles used in the analysis . the use of the cosine and sine functions removes complications associated with the periodicity of the torsion angles by helping to uniquely identify particular values of the angles . the covariance matrix of these variables is calculated with averages over the full ensemble of structures and is subsequently diagonalized . the eigenvalues ( and associated eigenvectors ) are arranged in descending order , with the highest values representing modes with the largest contributions to the structural scatter . in our results , we have found that 2 or 3 modes contain a majority ( about 70% ) of the total variance in the data . in addition , histograms of the projections of the ensemble onto each eigenvector are examined for gaussianity . as discussed in the above references , a mode whose histogram consists of a single gaussian - like peak represents only continuous fluctuations about a central structure , whereas multimodal distributions describe discrete conformations separated by free - energy barriers . thus , from the perspective of assessing the conformational transitions of the molecule , only modes with multimodal distributions are considered relevant . our results for pcas with different choices of torsion angle sets ( as described in the following ) indicate that the first 2 or 3 modes were non - unimodal in distribution and therefore of primary significance in describing conformational exchanges . the first set included all torsional suites along the bulge and hinge region ( pca method 1 ) . the sugar - to - sugar suite as defined in the above reference consists of a set of 6 backbone torsion angles as well as the glycosidic torsion angle . in our study , we focused attention only on the 6 backbone torsion angles . the residues included were a22 , u23 , c24 , u25 , g26 , c39 , and u40 . these were meant to encompass the conformationally relevant part of the molecule under the assumption of relatively rigid helices . the number of torsion angles considered was therefore 42 ( 7 residues × 6 torsion angles ) , resulting in an 84-dimensional pca ( because the cosine and sine functions of the angles are treated as separate variables ) . however , the pca of this set resulted in several modes that contributed substantially , with no clear clusters in any of the largest modes . the inclusion of the single - stranded bulge region may have caused the lack of structure in the pca projections calculated in this first method , because single - stranded regions are significantly more flexible and may add a level of disorder to the torsional distribution . to isolate the interhelical motions , we evaluated the pca of only those torsional suites that extend from u38 to c39 and from c39 to u40 ( pca method 2 ) . these suites include only the hinge region of the molecule and resulted in a 24-dimensional pca . to check the robustness of the clustering results obtained from method 2 , we then carried out a series of pcas by successively including an additional sugar - to - sugar backbone suite : u38 through c41 ( pca method 3 ) , u38 through u42 ( pca method 4 ) , c37 through u42 ( pca method 5 ) , and c37 through u40 ( pca method 6 ) . the results of these analyses and their relation to the results from the rdc fit will be discussed below . 
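the dihedral pca just described can be sketched in a few lines of python . this is a minimal illustration under stated assumptions : the torsion angles are random placeholders standing in for values extracted from the 500 energy - minimized structures , and the variable names are ours .

```python
# Sketch of the dihedral PCA: each torsion angle theta_j contributes the pair
# (cos theta_j, sin theta_j), the covariance matrix of these variables is
# diagonalized, and structures are projected onto the leading eigenvectors.
# The random torsion angles below stand in for values extracted from the
# 500 energy-minimized structures.
import numpy as np

rng = np.random.default_rng(0)
n_structures, n_torsions = 500, 12                 # e.g., two sugar-to-sugar suites
torsions = rng.uniform(-180.0, 180.0, size=(n_structures, n_torsions))  # degrees

# build the (cos, sin) feature matrix: shape (n_structures, 2 * n_torsions)
rad = np.radians(torsions)
features = np.concatenate([np.cos(rad), np.sin(rad)], axis=1)

# covariance matrix and its eigendecomposition
centered = features - features.mean(axis=0)
cov = centered.T @ centered / (n_structures - 1)
eigvals, eigvecs = np.linalg.eigh(cov)             # returned in ascending order
order = np.argsort(eigvals)[::-1]                  # re-sort to descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# fraction of the total variance carried by the first few modes
explained = eigvals / eigvals.sum()
print("variance in first 3 modes:", explained[:3].sum())

# projections onto the leading principal components; multimodal histograms of
# these projections indicate discrete conformational clusters
pc_projections = centered @ eigvecs[:, :3]
hist, edges = np.histogram(pc_projections[:, 0], bins=30)
```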
the evaluation of the solution relaxation times is based on two previously explored techniques . both methods involve the calculation of the two - time correlation function for the orientation of an atomic bond located within a nonrigid brownian rotator relative to a fixed laboratory frame . the evaluation proceeds by introducing an intervening reference frame associated with the principal axis system ( pas ) of the rotational diffusion tensor , which is time dependent as a result of exchange between different structural conformers . the correlation function then becomes dependent on wigner rotation matrices whose arguments are the euler angles that orient the rotational diffusion tensor of the molecule relative to the laboratory frame . in turn , the fokker - planck equation in three dimensions allows the evaluation of the transition probabilities from one set of euler angle orientations to another . the fokker - planck equation that accounts for coupling between rotational diffusion and conformational changes is given by eq 2 ; it describes the probability that the molecule will transition from a given diffusion tensor - to - laboratory frame orientation and conformational state at time 0 to a new orientation and conformational state at time t , given the diffusion tensor elements dij of each conformational state and the exchange rates r between pairs of conformational states . this equation can also formally cover the case of continuous transitions to new conformational states by allowing the number of conformers to be infinite . however , in the original references , and here as well , only discrete jumps are considered . if the exchange rate is much slower than the rate of diffusion and all other motional rates , yet faster than the relaxation rates themselves , the rotational diffusion problem is effectively decoupled from conformational exchange . under these conditions , the se model applies , and it is possible to calculate the relaxation rates for the molecule as the weighted average over the relaxation rates of the individual conformational states of the molecule , r = ∑i pi ri ( eq 3 ) , where pi is the population of conformational state i and ri is the corresponding relaxation rate ( e.g. , 1/t1 or 1/t2 ) in that state . expressions for the noes can also be derived by appropriate weighting of the noes of the individual conformational states . the procedures to calculate the relaxation times for each conformer are provided in the original reference . solution of eq 2 for the case of a general exchange rate between conformers ( i.e. , the gr model ) involves considerably more analytical and computational processing . we have solved this problem for the case in which the eigenvectors of the diffusion tensors of the exchanging conformers are coincident at the moment of exchange ( see ryabov et al . for the general case ) , and the resulting correlation functions have been published . this general rate analysis also indicates that the slow exchange regime occurs at time scales longer than about 1 μs . for the carbons considered in this analysis , the rotating - frame relaxation time t1ρ was measured instead of t2 . under the application of a weak spin - lock field and the assumption of lorentzian spectral densities , t1ρ is expected to be nearly equal to t2 ; therefore , for the purposes of this paper we operate under this assumption and simulate the t2 relaxation times . the structure set and the corresponding populations preselected by the rdc filter are the basis for simulations of the relaxation times using eq 2 . eigenvalues and eigenvectors of the rotational diffusion tensor are first calculated using the public - domain program hydronmr . 
then the orientations of the atomic bonds of the residues of interest are calculated with respect to this axis system , i.e. , the principal axis system of the rotational diffusion tensor . the orientational parameters , together with the diffusion tensor eigenvalues , are input into the two algorithms we have derived for simulations of the relaxation times of nonrigidly rotating macromolecules : ( a ) the slow - exchange formalism , describing the case where the conformational jumps occur at a rate much slower than the rate of overall rotational diffusion of the molecule , and ( b ) the general rate formalism , where arbitrary rates of exchange are allowed . the slow - exchange formalism , though merely a limiting case of the general rate theory , has the advantage of being significantly faster and easier to implement and so is considered here . the residues considered were a20 , g21 , a22 , u40 , c41 , u42 , and g43 from the lower helical stem , u23 , c24 , and u25 from the bulge , and g26 , a27 , g28 , c29 , g36 , u38 , and c39 from the upper helical stem . in the current work , we have only simulated the motions of the bases of these residues : the c6h6 bonds for the pyrimidines and the c8h8 bonds for the purines . the parameters used in the simulation include the atomic element radius ( aer ) , i.e. , the radius of the beads used in the hydronmr calculation of the diffusion tensor , and the bond lengths . the aer was chosen to be 2.3 å , and the bond lengths for the carbon - hydrogen bonds of the aromatic bases were chosen to be 1.1 å , both choices having been justified in previously published work . the viscosity was chosen to be 1.096 cp to correspond to the conditions of the solution experiments ( 99.9% d2o at 25 °c ) . we have also incorporated the two - site base motions inspired by simulations of the solid - state nmr ( ssnmr ) data : the so - called base libration occurs around a vector normal to the plane of the base in the case of helical residues , whereas the two - site motion is modeled to be around the glycosidic bond for the bulge residues . we floated the values of the rates and amplitudes of these two - site jumps relative to the ssnmr models , which were found to be on a time scale much shorter than that of the conformational exchange . for the slow exchange simulations , these internal , local motional rates and amplitudes were the only free parameters , whereas in the general rate simulations we also floated the conformational exchange rates between the states . the fitting procedures were carried out using a combination of grid searches and mcmc techniques . five structures provided the best fit to the rdc data , as obtained by iteratively searching through the bins and updating the choices of bins and relative populations . they are shown in figure 4 ( the five structures obtained by the rdc - filtering procedure to represent the ensemble of tar conformations that describes the experimental data , together with their interhelical euler angles and population percentages ) , and key characteristics are summarized in table 1 . for simplicity , the structures will be referred to by their respective bend angles ; thus , the highest population structure will be called the 45° structure , and the remaining structures will likewise be referred to by their bend angles . the χ2 for the best - fit set of structures was 11 460 for a set of 48 rdcs , with 9 fitted parameters ( 5 bin choices and 4 probabilities ) , giving a reduced χr2 = 302 ( = 11 460/(48 − 9 − 1 ) ) , whereas the pearson 's correlation coefficient was 0.72 . 
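the reduced chi - square quoted above follows from simple bookkeeping ; as a minimal worked check ( using only the numbers given in the text ) :

```python
# Reduced chi-square for the RDC fit described above:
# chi2_reduced = chi2 / (n_data - n_params - 1), with n_data = 48 RDCs and
# n_params = 9 (5 bin choices + 4 independent probabilities).
chi2 = 11460.0
n_data, n_params = 48, 9
chi2_reduced = chi2 / (n_data - n_params - 1)
print(round(chi2_reduced))  # ~302, matching the value quoted in the text
```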
it is to be noted that the χ2 values quoted here are unweighted ; i.e. , the discrepancies between the experimental and simulated rdc values are not inversely weighted with the error bars of the rdc values ( which have not been calculated ) . this is equivalent to assuming that the error on all measurements is 1 hz , which is likely to be a considerable underestimation of the true error bars . the comparison between experiment and simulation is shown in figure 5 . we further attempted an mcmc fit procedure with the possibility of 6 , 7 , or 8 structures but were unable to improve upon the fit . it is possible that this fit may be improved by continuing the mcmc procedure for a greater number of iterations , but the pca reported in the following section provides further corroboration that the model found by rdc fitting represents the conformational landscape of the molecule well . figure 5 shows the comparison of the experimental rdcs ( red triangles ) with the rdcs generated by the best - fit simulation parameters ( blue circles ) for the helical residues in hiv-1 tar rna ; the values shown include those for backbone ( c5′h5′ , c5′h5″ ) , furanose ( c1′h1′ , c4′h4′ ) , glycosidic ( c1′n1 for pyrimidines , c1′n9 for purines ) , and base ( c5c6 and c6h6 for pyrimidines , c8h8 for purines ) bonds . plotting the calculated rdcs against the experimental values ( figure 6 ) , we found that the trend , on average , is toward an underestimation of the rdcs by the simulations . the dashed blue line in figure 6 is the best - fit line to the data and has a slope of 0.4 and a y - intercept of 3.9 ; the solid red line is the ideal case where the calculated and experimental values match perfectly . an underestimation of the rdcs may arise from using a smaller degree of alignment in the simulations than in the actual experimental situation . one possible source of this discrepancy may be the current assumption of a simple steric model for the alignment of the molecule by peg / hexanol . recent work has proposed that there are subtleties in the alignment process , including the possible contributions of complex alignment medium topology and electrostatic alignment , that are not incorporated in simulations using only the basic steric version of the pales algorithm . this result is a cautionary note regarding the application of simple steric models in the simulation of potentially complex alignment media . we attempted to fit rdc data collected in glucopone / hexanol mixtures and in pf1 filamentous bacteriophage media and found that the models selected were different ( only two or three of the chosen structures were the same as in the peg / hexanol model ) . one obvious reason for this was the availability of rdc data for different bonds in the different media . however , there is potentially a fundamental difference in the alignment properties of the media as well . for example , the pf1 phage medium is negatively charged , and thus , as may be the case with peg / hexanol as well , the alignment has an electrostatic component . this must be taken into account more carefully in future analyses . 
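the slope , intercept , and correlation quoted for the calculated - versus - experimental comparison correspond to an ordinary least - squares line ; the sketch below shows how such values are obtained , with placeholder arrays in place of the actual rdc data .

```python
# Sketch of the linear regression of calculated against experimental RDCs,
# giving the slope, intercept, and correlation of the comparison plot.
# The arrays are placeholders, not the actual RDC data.
import numpy as np
from scipy import stats

rdc_exp = np.array([20.0, -15.0, 8.0, 30.0, -5.0, 12.0])
rdc_calc = np.array([11.0, -2.5, 7.0, 16.5, 1.5, 9.0])

fit = stats.linregress(rdc_exp, rdc_calc)
print(f"slope = {fit.slope:.2f}, intercept = {fit.intercept:.2f}, r = {fit.rvalue:.2f}")
```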
to further discern the causes of the discrepancy in the fit, we looked at all helical rdc values that contributed to a deviation of magnitude 10 hz or greater and found a set of 18 rdcs: 8 correspond to c1h1 furanose ring bonds, 2 to c4h4 furanose ring bonds, and the remaining 6 to c6h6/c8h8 base bonds, occurring in 12 helical residues in both the upper and lower stems and among all four nucleotide types. the large deviations observed for furanose ring bonds indicate the existence of additional motions localized to the furanose rings, as has been reported for dna, that have not been accounted for using the current set of 500 structures. these motions involve an exchange between the c2′-endo and c3′-endo conformations, and on the time scale of the rdcs these motions may be averaged out to produce an intermediate conformation. it is likely that the furanose ring samples these conformations even for residues stacked in a helical configuration. a similar argument holds for those base bonds that show a large discrepancy in rdc values: there may be additional vibrations in the base orientations that are not adequately sampled by the 500 structures. we also compared the five structures selected above to the set of 20 lowest-energy tar rna structures recently generated on the basis of noe data that have not been constrained by rdc data. upon calculating helical axes and orientations in the same manner as for the 500 farfar structures in this manuscript, we find that 11 out of 20 of the structures, including 3 out of the 5 structures that best fit the noe data, occur within the same 10° bend-angle bin as the highest-population structure from the rdc fit described above. thus, we believe that our approach identifies a predominant conformation set. to test the robustness of the search algorithm, we performed two simulations of fitting a reduced data set upon the random removal of (a) 10 rdcs and (b) 15 rdcs (different rdcs were deleted in each of the two cases). removing the first set of 10 resulted in the selection of the same 5 conformers as from the full set, along with a sixth new structure with a population of 4%. the populations of the 5 full-set best-fit structures were slightly different (maximum change of 12%). removal of 15 rdcs reproduced 4 out of the 5 full-set best-fit structures, together with two new conformers with populations of 7% and 5%. the maximum change in population among the 4 best-fit structures was 7% in this case. these results indicate that the choice of structures is robust to a reduction in the size of the experimental data set. jumps between the five conformers shown in figure 4 require a combination of bending and twisting about either the lower or the upper helix. among the entire set of energy-minimized structures, there was an observed correlation between the interhelical euler angles, as has been reported previously. these correlations may be reflected even in the jumps among this set of five structures, representing a free energy landscape where the exchanges between minima involve coupled shifts in euler angle values. we carried out a series of pcas to (a) identify the choice of torsion angles that best captures interhelical motions, (b) corroborate the rdc-filtered set by overlaying the five chosen structures on the clusters obtained from the pca of choice, and (c) identify jump matrix elements for the exchanges between the five rdc-filtered structures in the dynamics calculation.
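the rdc-removal robustness test described above can be expressed as a simple resampling loop. in the sketch below, fit_ensemble is a hypothetical stand-in for the grid-search/mcmc bin-selection procedure of this work, and the array layout is assumed; nothing here is the authors' actual implementation.

```python
# Hedged sketch of the robustness test: randomly drop a subset of RDCs and
# re-run the conformer/population selection to see whether the same structures
# are recovered with similar populations.
import numpy as np

def robustness_trial(rdc_exp, rdc_table, n_remove, fit_ensemble, seed=0):
    """rdc_exp   : (n_rdcs,) experimental RDCs
    rdc_table : (n_structures, n_rdcs) back-calculated RDCs for the 500 conformers
    fit_ensemble(rdc_subset, table_subset) -> (chosen_bins, populations)  # hypothetical"""
    rdc_exp = np.asarray(rdc_exp, float)
    rdc_table = np.asarray(rdc_table, float)
    rng = np.random.default_rng(seed)
    keep = np.sort(rng.choice(len(rdc_exp), size=len(rdc_exp) - n_remove, replace=False))
    return fit_ensemble(rdc_exp[keep], rdc_table[:, keep])
```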
given the lack of clear clustering results from pca method 1, which incorporates all backbone torsion angles in the bulge and hinge, we examined whether torsion angles on one side of the helical joint would suffice to describe the interhelical reorientation (pca method 2, incorporating the backbone torsion angle suites between u38 and u40). when this was done, only the first two principal components (pcs) contributed significantly, accounting for 75% of the fluctuations in the molecule (figure 7). furthermore, these two pcs were the only ones with a multimodal probability distribution across the 500 structures. this non-single-gaussian distribution signals the presence of conformational clusters in energy minima separated by significant free energy barriers. the map of these two pcs is shown in figure 7a and shows the presence of three to four major conformational clusters. the large red dots superimposed on the 2d plot correspond to the five structures selected from the rdc-filtering procedure. parts b and c of figure 7 show the population distribution histograms for pc 1 and pc 2, respectively. again, the positions of the five structures from the rdc filter are indicated by red arrows. (figure 7 caption: principal component analysis of the torsion angle suites along the hinge (residues u38 through u40). (a) principal components 1 and 2 for the 500 structures, represented by blue dots, with the positions of the five best-fit structures marked explicitly by red dots and further encircled for clarity. (b) histogram of the first principal component (corresponding to the largest eigenvalue of the covariance matrix), with the positions of the five best-fit structures marked explicitly by red arrows. (c) histogram of the second principal component (corresponding to the next-to-largest eigenvalue of the covariance matrix), with the positions of the five best-fit structures marked explicitly by red arrows.) several observations follow. first, the five structures chosen as the best-fit set match up well with the main conformational clusters obtained from this pca, suggesting that our rdc-filtered set has captured the relevant information about the major conformational clusters of the molecule. second, structures with similar interhelical bend angles have similar values of each of the principal components. the principal component values of the two structures with bend angles of 115° and 132° occur in close proximity to each other. the same is true of the pair of structures with bend angles of 61° and 76° (it is true, however, that the 45° structure does not differ greatly in bend angle from the 61° structure; the pca nevertheless suggests that a free energy barrier separates even these two neighboring structures). this is important because other pca methods (described below) separate structures with similar interhelical orientations, possibly due to the presence of additional degrees of freedom that do not contribute significantly to interhelical reorientation. finally, the sums of the probabilities of the best-fit structures within each cluster are similar to each other: the 45° bend structure has a population of 29%, the 61° and 76° structures have a joint population of 40%, and the 115° and 132° structures have a joint population of 31%. because the histogram heights do not correlate well with this nearly uniform probability distribution, we fit the jump rates numerically, as described below.
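a torsion-angle pca of the kind used here can be sketched in a few lines. the code below assumes a (500, n_angles) array of backbone torsions in degrees for the u38–u40 suites and embeds each angle as a (cos, sin) pair to avoid 360° wrap-around; that embedding is our assumption for the sketch, not necessarily the authors' convention.

```python
# Sketch of a torsion-angle PCA (cf. "pca method 2" in the text).
import numpy as np

def torsion_pca(torsions_deg, n_components=2):
    rad = np.deg2rad(np.asarray(torsions_deg, float))
    features = np.hstack([np.cos(rad), np.sin(rad)])       # circular-safe embedding
    features = features - features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)                      # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    evals, evecs = evals[order], evecs[:, order]
    scores = features @ evecs[:, :n_components]             # PC coordinates per structure
    explained = evals[:n_components].sum() / evals.sum()    # fraction of variance captured
    return scores, explained

# scores can then be histogrammed per component to look for multimodality,
# as in figure 7b,c of the text.
```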
subsequent attempts at testing the robustness of our cluster analysis yielded further interesting results. pca method 3 (u38 through c41) showed a marked difference in the clustering of the structures. three pcs contributed significantly, yielding about 70% of the total fluctuations, with all three now being multimodal. however, the five structures do not all seem to fall within major clusters. moreover, the 61° and 76° structures no longer fall within the same cluster, nor do the structures with bend angles of 115° and 132°. this occurs because of the intervention of the torsion angles associated with u40. when the helical parameters of the 500 structures were searched, u40 did not form a canonical watson–crick pair with a22 in about 37% of the structures, indicating considerable conformational variability of this residue, at least within the physical picture generated by the energy-minimization ensemble. to test this hypothesis, we extended the pca up to u42 on the lower stem (pca method 4) and up to c37 on the upper stem (pca method 5). both of these methods gave very similar clustering to method 3, with three significant pcs. we interpret this observation as confirmation that, after the inclusion of u40, the remaining residues behave fairly rigidly and do not change the results of the pca. as a final confirmation of this conclusion, we carried out a pca including only torsion angles from c37 to u40 (pca method 6). the clustering results for this pca proved to be the same as for pca method 2. given the change in clustering associated with the inclusion of the u40 degrees of freedom, we utilized the results of pca method 2 to set up the rate matrix for the relaxation time simulations. in general, for n conformers, the number of pairwise rate constants that need to be fit is the binomial coefficient c(n, 2) = n(n − 1)/2. given the clustering suggested by the pca, we reduce the fit problem from 10 (= c(5, 2) for the five rdc-filter structures) to five parameters in the following manner: for pairs of structures that occur within the same cluster in the two pc distributions, we allow only one distinct exchange rate between both members of the pair in that cluster and any structure in another cluster. the five rates used in the fitting process therefore correspond to the exchange processes shown graphically in figure 8. the assumption in this parameter reduction is that all the structures within a cluster are separated from other clusters by similar free energy barriers. (figure 8 caption: exchange between clusters inferred from the pca, together with the five rates used in the jump matrix for the relaxation time simulations.) the rdc-filtered conformer set of five structures, together with the relative probabilities, was used to calculate the t1, t2, and noe values. we used two different approaches to calculate the c6h6 (pyrimidine) and c8h8 (purine) relaxation times: (a) the slow exchange method, where the assumption is that the exchanges occur at an infinitely slow rate (compared to the rotational correlation time), and (b) the general rate method, where we fit the relaxation times by allowing the exchange rates to vary arbitrarily. the general rate analysis has shown that the slow exchange regime in tar rna effectively holds for time scales longer than about 1 s. thus, the results of this subsection assume that conformational exchanges occur on a scale longer than 1 s.
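the reduced five-rate jump matrix described above can be assembled as a standard master-equation rate matrix. the construction below, in which each shared pair rate is split according to the rdc-derived populations so that detailed balance holds, is one conventional choice and is offered as a sketch rather than the paper's exact parametrization.

```python
# Hedged sketch: build a 5-state jump (rate) matrix from a small set of shared
# pairwise exchange rates, consistent with the equilibrium populations.
import numpy as np

def build_rate_matrix(populations, pair_rates):
    """populations : length-n array of equilibrium populations (sums to 1)
    pair_rates  : dict {(i, j): k} with i < j; clusters may share one k value
    Returns K with K[i, j] = rate of the j -> i jump and columns summing to zero."""
    p = np.asarray(populations, float)
    n = len(p)
    K = np.zeros((n, n))
    for (i, j), k in pair_rates.items():
        K[i, j] = k * p[i]          # j -> i
        K[j, i] = k * p[j]          # i -> j; then K[i,j]*p[j] == K[j,i]*p[i] (detailed balance)
    np.fill_diagonal(K, -K.sum(axis=0))   # conservation of probability
    return K
```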
as a starting point for the fitting process, we used rates and jump amplitudes for the base librations close to those obtained in fitting the solid-state nmr data of the uridines, and changed both the rates and amplitudes in small increments to improve the χ². using the fit to the t1 values as a benchmark, we found that it suffices to fit only two base-libration rates, one for the upper helix residues and one for the lower helix residues. this simulation model was inspired by the results of the two solid-state nmr analyses. the exceptions, however, are the parameters for u40 and u42 in the lower helix: we obtain a significantly better fit by using the rates and jump amplitudes of the upper helix for these two residues, indicating that these residues are more similar in local base motion to the upper helix. we found acceptable windows in the fit for the local motion parameters, with the upper and lower helical parameters behaving independently. the rates in these windows vary between 10 s and 10 s, whereas the amplitudes are less than 20°. we did attempt simulations of additional models of base libration, such as treating the rates of purines and pyrimidines independently, or assuming a constant rate across the entire molecule. there is always the possibility that more complex models of base-libration rates may fit the data better; for example, we could treat the libration rate of each individual residue as an independent parameter, or treat the purines and pyrimidines in the lower helix as independent from the purines and pyrimidines in the upper helix, respectively. however, this would increase the number of free parameters in the problem, and we chose the above model as a balance between an arbitrary increase in free parameters and a physically realistic representation. the following representative values of the local motion parameters simulate the relaxation times well: (a) upper helix, u40 and u42 base-libration rate = 4.6 10 s; (b) upper helix, u40 and u42 base-libration jump amplitude = 13.7°; (c) lower helix (without u40 and u42) base-libration rate = 6.6 10 s; (d) lower helix (without u40 and u42) base-libration jump amplitude = 9.8°. the match between the experimental and simulated t1 and t2 values is shown in figure 9. for quantitative comparison, we calculated the root-mean-square deviation (rmsd) across the 14 helical t1 values and obtained an rmsd of 5.3 ms. (figure 9 caption: relaxation time simulations for the c6h6 (pyrimidine) and c8h8 (purine) bonds using the slow exchange method, and comparisons of residuals (discrepancies relative to experimental values) to statistical error bars: (a) t1 simulations (blue circles) compared to the experimental t1 values (red triangles); (b) t2 simulations (blue circles) compared to the experimental t2 values (red triangles); (c) difference between simulated and experimental t1 values, together with the statistical error bars on the experimental data (red dashed lines) at 3.2 ms; and (d) difference between simulated and experimental t2 values, together with the statistical error bars on the experimental data (red dashed lines) at 0.5 ms.) the error bars shown in figure 9 describe the statistical error in the measurements.
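the incremental search over libration rates and amplitudes described above is, in essence, a grid search scored against the experimental t1 values. the sketch below assumes a hypothetical simulate_T1(rate, amplitude_deg) wrapper around the slow-exchange relaxation calculation; the function names and grid ranges are illustrative only.

```python
# Sketch of a grid search over local base-libration parameters, scored by the
# RMSD between simulated and experimental T1 values.
import numpy as np

def grid_search_libration(t1_exp, simulate_T1, rates, amplitudes_deg):
    """rates, amplitudes_deg : iterables defining the search grid (e.g. log-spaced
    rates and amplitudes of a few degrees up to ~20 degrees)."""
    t1_exp = np.asarray(t1_exp, float)
    best = (None, None, np.inf)
    for k in rates:
        for amp in amplitudes_deg:
            rmsd = np.sqrt(np.mean((np.asarray(simulate_T1(k, amp)) - t1_exp) ** 2))
            if rmsd < best[2]:
                best = (k, amp, rmsd)
    return best        # (best rate, best amplitude, best RMSD in the same units as T1)
```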
however, the potential systematic error, not quantified by the authors, is larger. yet, even at this level of uncertainty there is a consistent difference between the best-fit local jump amplitudes for the upper and lower helices, with the lower helix amplitudes being larger on average (this is not reflected in the parameter set shown above but is true for the best-fit windows in general). thus, the solution relaxation times can be fit to an rmsd close to the statistical error of the experiments by assuming a slow exchange rate (i.e., slower than 10 s) between the five structural configurations of tar rna shown in figure 4, with populations determined by rdc filtering and further corroborated by principal component analysis. the model assumed base-libration rates that range between 10 and 10 s and libration-rate-dependent jump amplitudes less than about 20°. the solid-state nmr parameters for the local base libration fall within the windows described above, consistent with the notion that solid-state experiments are able to capture the solution-state local motions accurately. although the slow exchange method provides an approximate scale for the exchange processes, the general rate method can further resolve the values of the exchange rates between conformers. for ease of comparison, we used the local base motion rates and jump amplitudes from the slow-exchange fit, whose results were shown in figure 9. although several combinations of the 5 pca-inspired rates were found that could fit the 14 helical t1 and 14 helical t2 values with comparable rmsds, in all cases the inter- and intracluster exchange rates were on the order of 1010 s, which confirms the validity of the slow exchange approximation. for example, the set of inter- and intracluster exchange rates in table 2 yields the base relaxation times shown in figure 10, with an rmsd for t1 values of 5.2 ms and an rmsd for t2 values of 1.4 ms. interestingly, there is no clear distinction between the intercluster and intracluster rates, as might be expected if the corresponding free energy barriers differed significantly. (figure 10 caption: relaxation time simulations for the c6h6 (pyrimidine) and c8h8 (purine) bonds using the general rate method with inter- and intracluster exchange rates from table 2, and using the same local base motion parameters as for figure 9: (a) t1 simulations (blue circles) compared to the experimental t1 values (red triangles); (b) t2 simulations (blue circles) compared to the experimental t2 values (red triangles); (c) difference between simulated and experimental t1 values, together with the statistical error bars on the experimental data (red dashed lines) at 3.2 ms; (d) difference between simulated and experimental t2 values, together with the statistical error bars on the experimental data (red dashed lines) at 0.5 ms.) to summarize, if we fix the local motion parameters to the best-fit set obtained using the slow-exchange method but float the conformational exchange rate parameters, we obtain a slight improvement in the quality of the fit. best fits to the experimental relaxation data using the general rate theory are achieved with inter- and intracluster exchange rates on the order of 1010 s, thus justifying the slow exchange approximation of the prior section.
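floating the five cluster exchange rates against the t1 and t2 data is a standard nonlinear least-squares problem. the sketch below assumes a hypothetical simulate_relaxation(rates) wrapper around the general-rate calculation with the local base-motion parameters held fixed, and weights the residuals by the statistical error bars quoted above; it is an assumption-laden illustration, not the authors' code.

```python
# Hedged sketch of floating the five exchange rates in the general-rate fit.
import numpy as np
from scipy.optimize import least_squares

def fit_exchange_rates(t1_exp, t2_exp, simulate_relaxation, k0):
    """k0 : initial guesses for the five (positive) exchange rates."""
    t1_exp = np.asarray(t1_exp, float)
    t2_exp = np.asarray(t2_exp, float)

    def residuals(log_k):
        t1_sim, t2_sim = simulate_relaxation(np.exp(log_k))    # exp() keeps rates positive
        return np.concatenate([(np.asarray(t1_sim) - t1_exp) / 3.2,   # 3.2 ms T1 error bar
                               (np.asarray(t2_sim) - t2_exp) / 0.5])  # 0.5 ms T2 error bar

    fit = least_squares(residuals, np.log(np.asarray(k0, float)))
    return np.exp(fit.x)
```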
we did not float both the local motion parameters and the conformational exchange parameters simultaneously, but it is reasonable to assume that the results would be similar, especially when we constrain the scale of the local base motion parameters to those observed under solid-state conditions. in addition to the helical residues, we also simulated the relaxation times for the bulge residues using the slow exchange and general rate algorithms. in table 3, we present relaxation time simulations assuming exchange between the conformations shown in figure 4 using the slow exchange and general rate algorithms. the base rotation rate was selected as 5 10 s and the amplitude as 15° (to yield good matches to the t1 values). for the general rate simulations, the inter- and intracluster exchange rates shown in table 2 were assumed. we obtain simulated t1 values that are within 2% of the experimental data, but the simulated and experimental t2 values for u23 deviate by about 16%; for c24, the relative deviation is even greater. this is likely because the t2 relaxation time has a spectral density component (the j(0) term) that makes this observable sensitive to motions much slower than those at the larmor frequencies of carbon-13 and protons (which correspond to time scales on the order of nanoseconds). the fact that we have been unable to capture the t2 values may indicate that there are additional slower motions of these relatively underconstrained residues that are missing from the conformer set we have selected. although we have been able to successfully match most of the solution relaxation times to almost within the statistical error bars using the rdc-filtered conformer set, we must address a basic question: are the relaxation times sensitive to motions occurring at rates on the order of microseconds or slower? though the t1 time has no spectral density dependence on time scales slower than about a nanosecond, the t2 times are determined by slower motions as well, and their expressions contain a dependence on the j(0) spectral density. more importantly, the slow exchange and general rate methods rely on the fact that the time scales of rotational diffusion of many molecules (including the tar rna considered here) overlap with the time scales to which both t1 and t2 are sensitive. thus, two different conformers with slightly different diffusion tensors will have different characteristic relaxation times when calculated separately. even the slow exchange averaging process will result in a unique linear combination of relaxation rates that becomes discernible when enough data points are compared. the general rate theory loses some sensitivity to dynamics for rates much slower than a microsecond, but our least-squares simulations of relaxation times have shown that there is still discernible information to be gained at these time scales. in this manuscript, we introduce a methodology based on energy-minimized structures that ties together structural and dynamic data, as well as solid-state and solution nmr, to build a dynamic trajectory for the hiv-1 tar rna at atomic-level detail.
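the j(0) sensitivity invoked above can be made concrete with the textbook isotropic-tumbling dipolar expressions for a 13c-1h pair (csa neglected). this is standard relaxation theory, not the anisotropic slow-exchange/general-rate machinery of the paper; the field strength used below is an assumption for illustration.

```python
# Standard dipolar R1/R2 for a 13C-1H pair under isotropic tumbling, showing
# that R2 (1/T2) carries a 4*J(0) term absent from R1 and so retains
# sensitivity to slower motions.
import numpy as np

MU0 = 4e-7 * np.pi
HBAR = 1.054571817e-34
GAMMA_H, GAMMA_C = 2.675e8, 6.728e7          # gyromagnetic ratios, rad s^-1 T^-1

def dipolar_R1_R2(tau_c, B0=14.1, r=1.1e-10):   # B0 in tesla (assumed), r = 1.1 angstrom
    wH, wC = GAMMA_H * B0, GAMMA_C * B0
    d = MU0 / (4 * np.pi) * HBAR * GAMMA_H * GAMMA_C / r**3
    J = lambda w: 0.4 * tau_c / (1.0 + (w * tau_c) ** 2)     # (2/5) tau_c / (1 + w^2 tau_c^2)
    R1 = (d**2 / 4) * (J(wH - wC) + 3 * J(wC) + 6 * J(wH + wC))
    R2 = (d**2 / 8) * (4 * J(0) + J(wH - wC) + 3 * J(wC) + 6 * J(wH) + 6 * J(wH + wC))
    return R1, R2                                # 1/T1, 1/T2 in s^-1
```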
the protocol uses (a) solid-state nmr data to acquire information about local motions of the bases, (b) solution nmr rdc data to identify conformational states and their relative populations, (c) pca analyses to identify degrees of freedom relevant to the overall reconfiguration of the molecule, as well as to corroborate the clustering of structures and choose parameters for the dynamics analysis, and (d) solution nmr relaxation time simulation techniques previously developed to simulate experimental data and fit the jump rates between the molecular conformers. the experimental data utilized cover a wide spectrum of motional time scales, from the picosecond scale for solution relaxation times and the micro- to nanosecond times derived from solid-state nmr line shapes to the submillisecond time scale investigated with rdcs. the method reproduces relaxation data at multiple helical residues within the molecule using only five structures out of a set of just 500 possible conformers. the advantage of this approach lies in the coverage of multiple time scales, including long time scales that are difficult to sample with md, and the ease with which energy-minimized conformers may be obtained for small- to midsized molecules using public-access software like the rosetta suite of programs. the use of these structures inverts the problem of dynamics relative to md methods: instead of starting with an initial structure and running the clock forward from t = 0, we filter out long-lived structures using experimental data and interconnect them in a stochastic trajectory. the results from such a technique would therefore prove valuable in corroborating md-based simulations, and the protocol could provide a less computationally intensive alternative to extended molecular dynamics simulations. we have been able to simulate the relaxation times of most of the helical residues in the molecule with limited conformational sampling from an already small set of structures. the success in matching the experimental data indicates that, to gain an understanding of the gross motional properties of rna, it is sufficient to sample the limited phase space of the particular structural motifs that constitute the molecule. this protocol was designed to derive a geometric picture of interhelical reorientation based on the limited conformational space available to the torsion angles in and around the bulge and hinge regions. the assumption that the helices are rigid and the requirement to close the loop formed by the single-stranded bulge, the adjacent helical base pairs, and the hinge backbone should restrict the conformational possibilities for the entire molecule. steric hindrances and limitations on the stretchability of the single-stranded region would further constrain molecular reorientation. in practice, we found that the parameter space for the reorientation of one helix relative to the other was expanded by the possibility of one of the bulge-adjacent base pairs opening up. among the energy-minimized structures, a significant number had a missing a22-u40 base pair; the u40 base often forms a stabilizing interaction with u25 instead and, among the full set of 500 structures, sometimes with u23 and c24 as well. we allowed these possibilities to occur in our sample set to reflect fluctuations in the residue orientations, as well as the impacts of these fluctuations on the overall molecular conformation.
thus, we believe that models generated using well-vetted potential energy functions can identify sites where new intramolecular bonding and conformational variability might occur. furthermore, we made a conscious choice to characterize the structural bins by their euler angles. this choice of parameters has been made previously to enhance reproducibility and comparability with other analyses of the molecule. such a parametrization represents the core ingredient of most molecular analyses: reduction of the dimensions of the problem to render it tractable. molecular studies often aim to distill out a few degrees of freedom that are implicated in determining either the structure or the dynamics of the system, and many different techniques (ramachandran plots, phenomenological models, pca analyses) are directed toward identifying a minimal set of relevant coordinates. the methodology relies on the assumption that the energy-minimized structures sufficiently populate the available conformational space, i.e., on the assumption of ergodicity. if ergodic behavior holds, then a sufficiently representative characterization of the energy landscape of the phase space will allow a calculation of the requisite time averages of observables. in the case of the tar rna, the structural motif (helix-bulge-helix) is fairly simple, and it is possible to cover a large region of the interhelical orientation space with a relatively small number of structures. for more complex structural motifs, it would be necessary to generate a sample set that covers both the space of molecular reorientations and the range of conformations of local residues relative to a fixed large-scale molecular orientation. even in the current work, it is possible that we have undersampled the full range of conformations available to the bulged loop. a richer sample of both interhelical orientations and orientations of bulge residues relative to particular interhelical orientations may improve the rdc fit. a second assumption, made by both the slow exchange (se) and general rate (gr) methods, is that the diffusion tensors coincide at the moment of exchange. this is not a significant problem for conformers that do not differ greatly, but it could pose problems for conformers that are widely separated in conformational space. we have not attempted to quantify the actual deviation in relaxation times arising from this assumption. finally, it bears mentioning that there is a 2-fold degeneracy in the choice of the unit eigenvectors of the rotational diffusion tensors, with the negative of a given choice of unit vector being equally acceptable as an eigenvector. the choice of eigenvectors does not change the results of the slow exchange formalism (the expressions are invariant to such changes) but does impact the general rate theory expressions. for example, keeping the z-axis the same, the two choices of a right-handed coordinate system differ by a 180° rotation and would artificially introduce such an extra jump into the calculations. the means of consistently dealing with such a jump is to track the diffusion tensors as a function of changes in the shape of the molecule, either visually or geometrically, and to ensure that there is no additional change in the diffusion tensor orientations due to axis inversions.
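one generic way to perform the bookkeeping just described is to align each conformer's eigenframe against a reference frame by flipping any eigenvector whose overlap with the reference is negative, then restoring right-handedness. the sketch below is a plausible implementation of that idea under our own conventions, not code from the paper.

```python
# Sketch: remove the 2-fold sign degeneracy of diffusion-tensor eigenvectors by
# aligning them with a reference frame (e.g. the previous conformer's frame).
import numpy as np

def align_eigenframe(evecs, evecs_ref):
    """evecs, evecs_ref : 3x3 arrays with unit eigenvectors in the columns."""
    aligned = evecs.copy()
    for k in range(3):
        if np.dot(aligned[:, k], evecs_ref[:, k]) < 0:
            aligned[:, k] *= -1.0                      # flip sign-degenerate axes
    if np.linalg.det(aligned) < 0:                     # restore a right-handed frame
        worst = int(np.argmin([abs(np.dot(aligned[:, k], evecs_ref[:, k]))
                               for k in range(3)]))
        aligned[:, worst] *= -1.0                      # flip the least-constrained axis
    return aligned
```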
in our particular case, we tested our calculations by artificially inverting the orientation of the diffusion tensors of some of the structures and found very small changes in the relaxation times as a result: a change of at most 0.25 ms in t1 and at most 0.02 ms in t2. these changes would not significantly impact the conclusions of this manuscript, and a detailed analysis is omitted here. we examined the five structures selected by the rdc filter in light of unbound tar rna structures generated using noe constraints (but not rdc constraints) by one of the authors. these 20 energy-minimized structures, termed the tar2013 series, have bend angles in the range 38–59°, but the five lowest-energy structures cover a smaller range of 45–53°. this range corresponds well to the bend angle of the highest-population structure we have obtained (i.e., 45°). in fact, the lowest-energy tar2013 structure has interhelical euler angle values of 202° and 225°, similar to those of the 45° structure. we previously published two studies in which the new methods developed to simulate relaxation times were applied to u38 relaxation data and used to select regions of interhelical motional parameter space that fit the data. the models consisted of two-site jumps between the lowest-energy structure of 1anr and structures artificially modified from that structure to reflect changes in interhelical orientation. the closest approximation to a two-site jump in this manuscript is found by considering only exchanges between pairs of conformers among the three most populated structures (the 45° structure, the 61° structure, and the 115° structure, which are almost equal in population). the exchange between the 45° and the 61° structures involves a bend angle modification of 16° and a twist angle about the upper helix of 2°. cross-checking this parameter set against the results of applying the general rate theory, we find that the u38 data were fit by a two-site jump model with a twist of 0° and a bend angle between 5° and 12° (among other possible models). thus, the 45° and 61° structures fit the profiles of two structures selected previously on the basis of the u38 data alone. exchanges between either of these structures and the 115° structure, however, are of a magnitude not simulated in the previous studies. we can also make a few basic comparisons to the results of dayie et al., even though those authors consider the hiv-2 tar rna, as both molecules consist of a helix-bulge-helix motif. given our focus on the c6 and c8 atoms in the current work, we first observe a similarity in the fact that in the relaxation data used in this work (from bardaro et al.) the helical residues have similar 13c (c6 and c8) t1 relaxation times, a fact observed in the work of dayie et al. as well. however, the u23 13c t1 is nearly half the value of the corresponding u25 time in dayie et al., whereas bardaro et al. report values much closer in magnitude. also, the magnitudes of the t1 and t1ρ relaxation times in the two papers are different. we mention these facts to bring up three relevant considerations in interpreting the results of the two different sets of experiments: (a) the obvious difference made by the presence of only two residues in the bulge of hiv-2 tar rna versus three in the hiv-1 rna; (b) the fact that the relaxation times of dayie et al. include a component from the c5c6 dipolar coupling for pyrimidines, whereas this coupling is explicitly suppressed in bardaro et al.
(see shajani and varani); and (c) the use of a model-free analysis in dayie et al. versus the se and gr methods used herein. notwithstanding these caveats, the common results we can extract are that both papers observe significant flexibility in the u23 and u25 residues and rigid, slow motions in the helices. to examine the extent to which our approach matches results obtained by the md approach, we compare our results to those of salmon et al., where the authors describe the selection of conformers from 8.2 μs md simulations of the tar rna using pf1 phage-aligned rdc data sets. it was reported that the best fit to the rdc data is obtained with a set of 20 conformations selected from the full md ensemble. though it is not possible to compare the absolute values of the interhelical bend and twist angles due to differing methods of characterizing the helices and their relative orientations, we can compare the spans of the angles reported for their ensemble to those in ours. the interhelical bend angles in their ensemble of 20 span 88° (from 3° to 91°) whereas those in our ensemble span 87°, the rotations of the upper helix about the lower helical axis span 191° in their ensemble whereas those in our set span 215°, and the rotations of the upper helix about its own symmetry axis span 224° in their ensemble and 210° in ours. thus the spans of angles obtained by the two approaches are in excellent agreement. moreover, the full, prefilter ensembles in both papers show correlations between the bend and twist angles. a more fine-grained comparison relates to the behavior of individual residues. we have already mentioned that the a22-u40 base pair is often found to be open among the full set of 500 structures. we also find that, among our five rdc-filtered structures, four (the 45°, 76°, 115°, and 132° structures) lack an a22-u40 base pair. however, the g26-c39 base pair is maintained in all five of these structures. salmon et al. find a similar asymmetry between the a22-u40 and g26-c39 base pairs in their rdc-selected ensemble, with the former adopting a broader conformational distribution and the latter being in an a-form helix-like conformation. they report the occurrence of three clusters within their rdc-selected ensemble: a 66% population cluster with a22 stacked on u23, a 19% population cluster with u23 flipped out, and a 15% cluster with paired u25-u40 and unpaired u23 and c24. with regard to the third cluster, nearly 30% of the 500 energy-minimized structures used in our analysis show a u25-u40 pair, with three of the rdc-filtered structures (the 45°, 76°, and 132° structures) included in this list as well. the 115° structure simply lacks the a22-u40 pair and does not have any alternative pairings of either residue. salmon et al. also stated that the u25-u40 pair is predicted to be the second most energetically favorable bulge conformation by mc-fold. a visual inspection of our structures shows the following behavior for the bulge: (1) the 45° structure has u23 flipped into the interhelical space but not stacked, c24 flipped in but not stacked, and u25 paired with u40; (2) the 61° structure has u23 stacked on a22 and has c24 and u25 flipped out; (3) the 76° structure has u23 flipped out, c24 stacked with u25, and u25 paired with u40; (4) the 115° structure has u23 flipped out, c24 flipped in and stacked close to u25, and u25 flipped in and stacked only with c24; and (5) the 132° structure has u23 and c24 flipped in but not stacked and has u25 paired with u40.
thus, only one of the structures in our ensemble (the 61° structure, with a 27% population) has a significant a22-u23 stacking interaction, and two have u23 flipped out (39% total population), a clear deviation from the results of salmon et al., suggesting that the conformational variability of this region is greater than can be captured by a small number of sampled structures. a solution to this problem is to generate energy-minimized structures where the interhelical orientation is fixed (or nearly so) and the bulge flexibility is evaluated under the constraint of fixed end points. such an analysis would establish the inherent conformational flexibility of the bulge. to cross-check the results of the program hydronmr, we recalculated the diffusion tensors using the program best, which tessellates the solvent-accessible surface of the molecule and calculates the various diffusion properties using a finite element analysis. the eigenvalues of the rotational diffusion tensors (in ascending order) from the two programs are compared in table 4. we find that the diffusion tensor eigenvalues found by best were uniformly smaller than those found by hydronmr, indicating that hydronmr underestimated the hydration effect relative to best. we subsequently recalculated the relaxation times using the slow exchange formalism and found a t1 shift of at most 18 ms and a t2 shift of at most 1.2 ms, corresponding to a shift of about 4% of the experimental values for both relaxation times. this may modify the choice of parameters described above, but we believe that the impact will not be substantial. we have carried out a characterization of the essential dynamics of the tar rna molecule using techniques with time scale sensitivities ranging from subnanosecond (solid-state and solution relaxation times) to millisecond (rdcs). we have been able to capture the long-time-scale behavior of the conformational exchange processes that characterize this molecule and to fit the experimental relaxation times very well, with exchanges between discrete conformers occurring on time scales longer than 1 s. the similarity of the results of this method to those of extended md simulations provides independent corroboration of our conformational analysis. further computational explorations and sample-size increases will enhance the results obtained by this methodology.
complex rna structures are constructed from helical segments connected by flexible loops that move spontaneously and in response to the binding of small-molecule ligands and proteins. understanding the conformational variability of rna requires the characterization of the coupled time evolution of interconnected flexible domains. to elucidate the collective molecular motions and explore the conformational landscape of the hiv-1 tar rna, we describe a new methodology that utilizes energy-minimized structures generated by the program fragment assembly of rna with full-atom refinement (farfar). we apply structural filters in the form of experimental residual dipolar couplings (rdcs) to select a subset of discrete energy-minimized conformers and carry out principal component analyses (pca) to corroborate the choice of the filtered subset. we use this subset of structures to calculate solution t1 and t1ρ relaxation times for 13c spins in multiple residues in different domains of the molecule using two simulation protocols that we previously published. we match the experimental t1 times to within 2% and the t1ρ times to within less than 10% for helical residues. these results introduce a protocol for constructing viable dynamic trajectories for rna molecules that accord well with experimental nmr data and support the notion that the motions of the helical portions of this small rna can be described by a relatively small number of discrete conformations exchanging over time scales longer than 1 s.
Introduction Theoretical and Computational Methods Results Discussion Conclusions
PMC4373489
additional supporting information may be found in the online version of this article: video s1. this video runs at three frames per second (96 sec of running video time). distal podomeres detach from the tp of a tc, and their presence guides the elongation and migration of tp belonging to other tc, as preliminary stages in the formation of a complex multicellular network. for supplementary materials on tc, please see http://www.telocytes.com.
telocytes (tc) are interstitial cells with telopodes (tp). these prolongations (tp) are quite unique: very long (several tens of micrometres) and very thin (0.5 μm), with a moniliform aspect in which thin segments (podomeres) alternate with dilations (podoms). to avoid any confusion, tc were previously named interstitial cajal-like cells (iclc). myocardial tc have repeatedly been documented by electron microscopy, immunohistochemistry and immunofluorescence. tc form a network through their tp, either in situ or in vitro. cardiac tc are (completely) different from classic fibroblasts or fibrocytes. we hereby present a synopsis of the monitoring, by time-lapse videomicroscopy, of tp network development in cell culture. we used a protocol that favoured interstitial cell selection from adult mouse myocardium. videomicroscopy showed dynamic interactions of neighbouring tc during network formation. during their movement, tc leave behind distal segments (podomeres) of their tp as guiding marks for the neighbouring cells to follow during network rearrangement.
Supporting Information
PMC5138462
iran is located in western asia, with over 77 million inhabitants; more than two-thirds of the population is under the age of 30, with one-quarter being 15 years of age or younger. iran also exhibits one of the steepest urban growth rates in the world, and over 70 percent of its population lives in urban areas. moreover, the indicators for health and education have improved dramatically during recent years. on the other hand, iran is one of afghanistan's neighbors and therefore faces some of the most serious drug problems in asia. the latest rapid situation assessment of substance use, in 2011, estimated the number of dependent substance users in iran at 1,200,000, corresponding to 2.2% of the adult population, and injection drug use shows a worrying ascending trend. a brief report of substance abuse policy in iran shows that government authorities provide special services for treatment, harm reduction, and prevention, implemented with a patient-based approach. despite a significant increase in antisubstance-trafficking efforts and the establishment of several treatment centers in recent decades, the number of substance users has had an ascending trend and the age of onset has decreased. social policymakers have developed a structural understanding of the individual and social factors related to various levels of social health, providing an insight that can be used as the basis of public health policies. however, the application of evidence-based thinking in primary prevention is definitely hampered by the complexity of the causal chain. the knowledge about the first link is uncertain because of social and psychological factors. in addition, to identify effective strategies, evidence-based prevention programs and strategy adaptation need to be better understood, as do the factors associated with the institutionalization of effective prevention programs. thus, the main aim of this study is to investigate the most important social factors affecting substance use and other deviant behaviors in this country, creating a structural discussion on the pathology of this phenomenon. this survey was implemented as a prospective study on 402 high-risk abandoned substance users admitted to shafagh rehabilitation center, a clinical and psychological treatment center affiliated to the ministry of health, in collaboration with the police department and iran's drug control headquarters, in the year 2008 in tehran. a standard questionnaire was designed by researchers and experts to record baseline characteristics, sociodemographic variables, drug users' experiences during rehabilitation treatment, imprisonment period, and causes of substance use. narcotic replacement therapy at shafagh center was based on methadone therapy for 6 months. upon entrance to the rehabilitation center, the aim of the interview was explained to subjects and, after obtaining consent to participate, the questionnaire was completed for each individual via face-to-face, in-depth, semistructured interviews conducted by 3 social workers and 1 clinical psychologist [7, 8]. qualitative data collected through the semistructured interviews, including the subjects' self-reported reasons for substance use, were extracted via the following questions: how did you get involved in substance addiction? and what was your main reason for substance use?
field notes were analyzed by two researchers using thematic analysis with inductive hand coding in order to derive themes. this approach was designed to construct theories that are grounded in the data themselves; indeed, the coding process occurred without trying to fit the data into a preexisting model or frame. the process mainly consisted of reading transcripts, generating initial codes, comparing and contrasting themes, and building theoretical models, and thereafter a mixed methodology was developed by entering the qualitative information into a computer at the first opportunity. the rigor and trustworthiness of the data were ensured through immersion in the subject, peer checking, and data source triangulation, using experts from different fields for collection. with the aim of conducting a peer review, each interview was first coded by the first author and then reviewed by the second author, who modified the manuscript if necessary. moreover, the charts extracted by the first author were checked by the second author in the middle and late stages of the analysis, and the extracted themes, subthemes, and related statements are provided. descriptive statistics such as number and percent for categorical variables and mean ± sd for continuous variables were used for the descriptive tables. also, in the analytic statistics, univariate logistic regression was used to assess the associations between drug use and demographic and etiological variables. the data collected in the present study are part of the data obtained for a research project approved by the health ministry of iran. the study participants submitted their informed consent forms after the study objectives were described to them and after the confidentiality and anonymity of their information were assured. moreover, at the interviews, all ethical principles and the subjects' right to withdraw from the study at any stage were observed. after contrasting the study themes, the theoretical framework was borrowed from the social ecological theory developed by berkowitz and perkins (1986), which explains the causes of substance abuse as lying within the social environment and the social group in which individuals interact. it is hypothesized that, to change a particular behavior, the social context that shapes it must be changed, and therefore the social institutions that shape it must undergo change. prevention efforts using this theory focus on changing the environment, and mainly the socialization process, rather than the person. in addition, this study draws on the social stress model of substance abuse, which theorizes that the probability of engaging in drug use is a function of the stress level and of the extent to which it is moderated by stress moderators: social networks, social competencies, and community resources. moreover, from a sociological view, lópez and scott's idea of the triple concept of social structure (2000), comprising institutional, relational, and embodied structures, provides a useful framework for understanding the combined impact of social structure on deviant behaviors. drawing on the above theories, the fundamental variables influencing deviant behaviors were defined at three structural levels and in four categories, as presented in figure 1 and below.
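the univariate logistic regressions mentioned in the statistical analysis above can be sketched as follows: one binary drug-use outcome is regressed on one predictor at a time, and the coefficient is exponentiated to give an odds ratio with a 95% confidence interval. the column names below are hypothetical and the snippet is an illustration (assuming pandas and statsmodels are available), not the authors' code.

```python
# Illustrative univariate logistic regression: odds ratio and 95% CI for one
# predictor of a binary drug-use outcome.
import numpy as np
import statsmodels.api as sm

def univariate_or(df, outcome, predictor):
    """df : pandas DataFrame with a 0/1 outcome column and a numeric/0-1 predictor."""
    X = sm.add_constant(df[[predictor]].astype(float))
    model = sm.Logit(df[outcome].astype(float), X).fit(disp=0)
    odds_ratio = np.exp(model.params[predictor])
    ci_low, ci_high = np.exp(model.conf_int().loc[predictor])
    return odds_ratio, (ci_low, ci_high)

# example (hypothetical columns): univariate_or(data, "cannabis_use", "imprisonment_history")
```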
etiologic variables of drug use (themes and codes): (1) deviant social networks: having a drug user in the peer group; having a drug user in the family; having an imprisonment history; joining a gang. (2) low social capital: immigration; being away from family; divorce; family disputes and tensions; intragroup conflicts; loss of close relatives and friends. (3) weak social sources: joblessness; severe life conditions; economic hardship; adverse living conditions; unavailability of appropriate medical facilities; inappropriate welfare facilities; risky environment; easy access to drugs. representative statements: "i had never lived with the fear that i had no friends, so i did almost anything to keep the two good friends i still had. when i was 17, i started smoking cigarettes and alcohol because of my friends. at age 18 i started acting out like my peer drug addicts, stealing and sneaking out at night." "both of my parents are active addicts, and they abused me physically and verbally; it was my father who got me into drugs." "my parents were getting a divorce when i was 14. we moved to a new city; i was isolated and they did not care for me. i started using alcohol and drugs with my new friends after school." "i had many problems with my parents." "it helps you fit in or makes you cool." "we had many problems in our life. i heard 'just try crack once and everything's gonna be okay; it will make everything go away.'" "i struggled with depression and hopelessness for a long time. my friends used the substance; they were enjoying themselves and bonding." "i rarely had a friend and i was so alone in school, and i thought i was worthless and boring to others. the fear of loneliness led me to smoke cigarettes in the park as a common action that showed 'i am like you'; then i used alcohol at a party, and step by step i found that i was a drug user and in addiction." as can be seen in table 1, among the 402 reported drug users, 386 (96.5%) were men and 294 (73.4%) were single or divorced. moreover, the majority of drug users were in the age range between 20 and 39 years, with a mean age of 28.78 years and a minimum and maximum of 13 and 62 years, respectively. concerning education level, the majority of drug users, 339 (87.1%), had primary or secondary school education, and 15 (3.9%) were illiterate. the onset of drug use in most participants (57.6%) was at 20 years or younger, with an average age of 21.21 years and a minimum and maximum of 17 and 53 years, respectively (sd = 6.363).
in addition, 213 (57.1%) of the participants had a minimum 5-year period of drug use. however, although the majority of substance users had a history of addiction treatment, just 37% of them had been treated by a physician. in addition, other risky behaviors, including needle sharing, pre- or extramarital sex, and condomless sex, were reported by 21 (5.2%), 43 (10.7%), and 38 (19.5%) of drug users, respectively. on the other hand, the majority of substance users did not report a history of severe physical or mental illness before the onset of drug use (87.1% and 82.6%, resp.). the subjects reported the following substance use: cigarettes, 385 (95.8%); opium, 321 (79.9%); heroin, 259 (66.4%); crack, 227 (56.7%); cannabis, 174 (43.3%); alcohol, 164 (40.5%); and sedatives, 117 (29.1%). in addition, the reported major reasons were categorized into four main themes: deviant social networks (26.2%), low social capital (16.5%), weak social sources (15.2%), and stress (37.1%). according to the results of the univariate logistic regression test in table 2, there are significant associations between the variables of age (or = 1.04; 95% ci: 1.01–1.07), divorce status (or = 2.07; 95% ci: 1.23–3.49), and history of imprisonment (or = 2.12; 95% ci: 1.32–3.40) and the presence of cannabis use. moreover, a diploma or academic educational level (or = 0.26; 95% ci: 0.07–0.94) and a history of imprisonment (or = 0.35; 95% ci: 0.21–0.58) have a significant protective association with the presence of alcohol consumption. also, there is a significant protective association between primary or secondary educational level (or = 0.29; 95% ci: 0.14–0.60) and history of imprisonment (or = 0.27; 95% ci: 0.17–0.44) and the presence of heroin consumption. a diploma or academic educational level is identified as a protective factor (or = 0.29; 95% ci: 0.14–0.60), in comparison to lower education levels, for the presence of cocaine consumption. also, there is a significant protective association between history of imprisonment (or = 0.49; 95% ci: 0.28–0.84) and sedative drug consumption. moreover, table 3 provides the findings regarding the psychosomatic indicators assessed after six months of methadone therapy and an education program for social-emotional skills. these results show that this program was relatively successful in decreasing drug users' stress indicators.
on the other hand , the drug users ' psychotic symptoms assessment after a 6-month period of treatment and educational interventions confirmed a decline of some symptoms such as the feeling of tension , hopelessness , worthlessness , nervousness , and suicidal thoughts . these mental - social disorders may be interpreted as the weakness of emotional and social skills [ 6 , 16 ] . other studies show how high school dropouts are two to three times more likely to begin and maintain injecting drug use , compared to high school graduates [ 17 , 18 ] . it seems that the disruption and dysfunction of leading social institutions such as the family and school may have led to a major deprivation in individuals ' in both emotional and social skills and lead to fundamentally a weak embodied structure to cope with complex issues in the contemporary world . based on this study 's findings , most participants are single and without a high educational level ( 87.1% below high school diploma ) , indicating a high rate of school dropouts and the decline of intergroup social capital as a risk factor in the drug users ' socialization , particularly , because 57.6% of them have started drug abuse at the age below 20 years and , in sum , 16.5% of them are marked by low integration into a family and 57.3% of them suffered from a feeling of loneliness . in addition , the other studies show a low range of social capital within the drug users . indeed , bonds with friends and family are the strongest differentiating factors between substance users and nonusers . confirming the world health organization ( 2001 ) research findings on risk and protective factors from more than 50 countries , the most common risk factor for adolescent substance use in asia is the conflict within the family and friends who use substances . it is also concluded that a positive relationship with parents , parents who provide structure and boundaries , and a positive school environment are the most leading protective factors [ 21 , 22 ] . social and emotional skills learning as a unifying theory with an increasing body of research demonstrates evidence - based interventions which are associated with healthy behaviors [ 2326 ] . on the other hand , the other extracted indictors of this study such as joblessness and severe life conditions , economic hardship , adverse living conditions , unavailability of appropriate medical facilities , and inappropriate welfare facilities can be demonstrated to a large extent as the weakness of social support sources . according to the findings , there is an addiction period of more than 5 years among 57% of the drug users and most of them have given up drugs several times , but approximately only 30% of them had been under treatment by a physician . these findings confirm previous studies regarding the high relapse rate and more common behavioral changes as an active and multidimensional process in which the clients experience a psychological status spectrum from recovery to the relapse complex : a process influenced by the treatment process and individual factors associated with the patients [ 27 , 28 ] . it can be hypothesized that there exists a low coverage level of therapeutic services for substance in iran , since some participants stated running out of medicines or not being able to repurchase them as their main reasons for the noncommitment to the principles of the treatment program resulting in a relapse to addiction . 
there is a need to promote the level of service utilization and the continuation of medical and nonmedical treatment services in primary health care . in addition , low levels of social support , such as joblessness , financial difficulties , and lack of social welfare in a society , are significant indicators that can shift social policies toward the improvement of wellbeing and social health [ 9 , 10 , 12 ] . a second explanation for the association between mental disorders and poor social circumstances is that individuals in socially disadvantaged situations are exposed to more psychosocial stressors ( adverse life events ) than those in more advantageous environments . these stressors act as triggers for the onset of symptoms and the loss of the individual psychological abilities necessary for social functioning [ 29 , 30 ] . according to the results of the regression analysis , being older , being divorced , and having a history of imprisonment were risk factors only for cannabis use , whereas a history of imprisonment played a protective role for users of the other drug types . moreover , diploma or academic educational levels and a history of imprisonment were protective factors associated with alcohol consumption , and primary or secondary educational levels and a history of imprisonment were protective factors for heroin users . the effect of previous imprisonment on drug users is confirmed by the findings of various studies . a relapse to drug and alcohol abuse occurred in a context of poor social support , medical comorbidity , and inadequate economic resources . however , stronger systemic measures regarding access to drugs within prisons and prison - based residential drug and alcohol treatment programs have been suggested . the above findings are consistent with this study 's qualitative findings , especially for drug users who have a higher education level . however , substance use and addiction treatment is not widely available in prisons , and studies indicate that most people with substance abuse issues who are released from prison relapse once back in the community . prevention interventions after prison for drug users may include structured treatment with gradual transition to the community , improved protective factors , and reduction of environmental risk factors . according to the study findings , there is a comorbidity of various risk factors , including weak social capital , deviant social networks , and a low stock of social sources , that together influence the risk of drug use . however , in the regression analysis , some variables did not show strong associations because the small sample size of this study was reduced to even smaller subgroups when participants were distributed among the users of different drug types . in sum , based on these findings , it can be concluded that a major part of drug users commonly use substances as a way to deal with difficulties in stressful situations . from a social point of view , substance use is acknowledged as a deviant behavior which stems from the functional weakness of institutional , relational , and embodied structures . thus , this study highlights the need for policymakers to develop strategies for improving social support sources , strengthening social capital within the family unit and prosocial networks , and enhancing health services .
this study is a sociological analysis of the three dimensions of social structure ( institutional , relational , and embodied structures ) that have an impact on individuals ' deviant behaviors in society . the authors used a mixed - methods approach to analyze the qualitative and quantitative data of 402 high - risk abandoned substance users in 2008 in tehran , the capital city of iran . the leading reasons for substance use were categorized into four fundamental themes : stress , deviant social networks , low social capital , and weak social support sources . in addition , an epidemiological regression analysis provides a brief assessment of the association between the demographic and etiological variables and the drug users ' deviant behaviors . in sum , substance use is discussed as a deviant behavior pattern which stems from a comorbidity of weak social structures .
1. Introduction 2. Materials and Methods 3. Results and Discussion 4. Conclusion
PMC3556547
prostate cancer ( pca ) is the fastest growing cancer in korea . according to statistical data from the national cancer information center , the incidence of pca was 8.5 per 100,000 population in 1999 , and its annual growth rate of 13.5% is the fastest of all cancers in korea . radical prostatectomy ( rp ) is the standard treatment for patients with clinically localized pca ( ct1-t2 ) and a life expectancy of > 10 years . whereas open radical retropubic prostatectomy ( orp ) has been considered the gold standard for surgical treatment , minimally invasive procedures have been introduced with the intention of minimizing peri- and postoperative morbidities . despite the widespread use of robot - assisted laparoscopic prostatectomy ( ralp ) over the past decade , there are ongoing debates regarding the benefits of ralp compared with orp . comparative studies of oncological control have shown that ralp yields results similar to those of orp . one recent study suggested that ralp results in no significant improvement in urological complications such as incontinence and erectile dysfunction . however , not many studies have undertaken well - controlled , single - surgeon , direct comparisons of the outcomes of ralp and orp . there have in fact been some reports on the impact of prostate volume on surgical outcomes . in orp , a large - volume prostate is associated with longer operation time and increased complications . patients with a small - volume prostate , meanwhile , have higher rates of biochemical recurrence ( bcr ) . in ralp , a small - volume prostate is correlated with early return of potency . however , there are few comparative reports on the impact of prostate volume on oncological and functional outcomes between the two types of surgery . the aim of the present study was to investigate differences in oncological and functional outcomes according to prostate volume in patients with localized pca who underwent orp or ralp . between september 2003 and april 2010 , 408 consecutive patients underwent single - surgeon rp for biopsy - confirmed pca at seoul national university hospital . a total of 103 patients ( 25% ) underwent ralp and 305 patients ( 75% ) underwent orp . after approval from our institutional review board , a total of 253 patients were included in this study : 176 consecutive orp and 77 ralp cases for clinically localized pca ( ct1-t2 ) . the first 100 patients who had undergone orp and the first 25 to undergo ralp were excluded from the analysis owing to the learning curve . postsurgery follow - up visits were typically scheduled at 3-month intervals for 1 year , then semiannually for 1 year , and yearly thereafter . patients within each surgical group were divided into two subgroups according to their prostate volume as measured by preoperative transrectal ultrasound : less than 40 g and 40 g or larger . the oncological outcomes were assessed as positive surgical margin ( psm ) and 24-month bcr rates . bcr was defined as two consecutive prostate - specific antigen ( psa ) measurements ≥ 0.2 ng / ml . the functional outcomes were assessed as continence and potency . urinary continence was defined as the absence of any urinary leakage or the use of only one security pad . potency was defined as spontaneous erectile function satisfactory for intercourse or with the use of phosphodiesterase-5 inhibitors on demand .
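the outcome definitions given above ( bcr as two consecutive psa values ≥ 0.2 ng / ml , and the < 40 g versus ≥ 40 g volume subgroups ) translate directly into code . the following python sketch is illustrative only ; the function names and example values are hypothetical and are not taken from the study data .

```python
# Outcome definitions from the methods above, expressed as small helper
# functions. The example PSA series and prostate volume are hypothetical.
from typing import Sequence

def has_biochemical_recurrence(psa_ng_ml: Sequence[float], threshold: float = 0.2) -> bool:
    """BCR: two consecutive postoperative PSA measurements >= 0.2 ng/ml."""
    return any(a >= threshold and b >= threshold for a, b in zip(psa_ng_ml, psa_ng_ml[1:]))

def prostate_volume_subgroup(volume_g: float) -> str:
    """Subgroups by preoperative transrectal ultrasound volume: <40 g vs >=40 g."""
    return "small (<40 g)" if volume_g < 40 else "large (>=40 g)"

print(has_biochemical_recurrence([0.01, 0.05, 0.21, 0.35]))  # True (two consecutive values >= 0.2)
print(prostate_volume_subgroup(35.2))                        # small (<40 g)
```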
preoperative erectile function was assessed by use of a validated questionnaire , the international index of erectile function-5 ( iief-5 ) . we evaluated postoperative potency in all patients who had been potent before surgery , defined as an iief-5 score of 12 or higher , and who had undergone a bilateral or unilateral nerve - sparing procedure . postoperative potency was determined at each follow - up by means of a detailed surgeon - conducted interview . ralp was performed by the transperitoneal antegrade approach with the use of the da vinci robot system ( intuitive surgical inc . ) . the choice of surgical approach accorded with patient preference after discussion of the risks , benefits , and alternatives with the patient 's physician . in both groups , a uni- or bilateral nerve - sparing procedure was performed if clinically indicated by patient age , preoperative erectile function , and oncological parameters . the baseline characteristics of the patients were summarized as the mean ± standard deviation for continuous variables and as frequencies or percentages for categorical variables . the ralp and orp groups were compared by using the student 's t - test ( continuous factors ) and pearson chi - square test ( categorical factors ) . kaplan - meier survival curves were compared across techniques by using the log - rank test for up to 24 months of follow - up . for all tests , statistical analysis was performed with spss ver . 13.0 ( spss inc . ) . the mean patient age was 67.0 ± 6.76 years , and the median body mass index was 24.0 ± 2.67 kg / m² . the mean preoperative psa level was 7.41 ± 7.74 ng / ml . the mean preoperative transrectal ultrasound prostate volume was 42 ± 17.92 ml . as for the patients ' clinical stages , 171 cases ( 68% ) were stage i and 82 cases ( 32% ) were stage ii . the preoperative baseline clinicopathological demographics of the orp and ralp groups were comparable ( table 1 ) . the mean operation time was significantly shorter in the orp group ( 151 minutes vs. 220 minutes , p<0.001 ) , whereas the mean estimated blood loss was significantly less in the ralp group ( 917 ml vs. 642 ml , p<0.001 ) . the proportion of patients who had undergone a nerve - sparing procedure ( unilateral or bilateral ) was significantly higher in the ralp group ( 63% vs. 83% , p=0.004 ) . the pathological stages were very similar in each group , with 53% of patients with organ - confined disease in the orp group compared with 54% in the ralp group . however , there was a significant difference in the distribution of pathologic gleason scores , which was more favorable in the ralp group ( p=0.008 ) . the most common location of a psm in the two groups was at the apex . psms were encountered less often with ralp than with orp , but without statistical significance ( 42% vs. 38% , p=0.394 ) . the 2-year bcr - free survival rates were 88% ( 154 of 176 ) in orp and 94% ( 72 of 77 ) in ralp patients during the follow - up period . a log - rank test showed no statistical difference between the two groups ( p=0.140 ) within 2 years of follow - up . urinary continence had been regained in 55% of orp patients at 1 month , 80% at 3 months , 92% at 6 months , 95% at 9 months , 96% at 12 months , and 98% at 24 months . the corresponding ralp recovery rates were 38% , 71% , 84% , 88% , 94% , and 95% ( fig . 2 ) . after adjustment for age , operation type was not found to significantly affect postoperative urinary continence recovery ( p=0.058 ) .
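a minimal sketch of the kaplan - meier / log - rank comparison described above , written in python with the lifelines package , is shown below . the tiny data set and the column names are hypothetical and serve only to illustrate the analysis .

```python
# Kaplan-Meier curves and a log-rank test over 24 months of follow-up,
# mirroring the comparison of BCR-free survival between ORP and RALP.
# Uses the lifelines package; the small data set is hypothetical.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.DataFrame({
    "months": [24, 18, 24, 9, 24, 24, 15, 24],   # follow-up time, censored at 24 months
    "bcr":    [0,  1,  0,  1, 0,  0,  1,  0],    # 1 = biochemical recurrence observed
    "group":  ["ORP"] * 4 + ["RALP"] * 4,
})

kmf = KaplanMeierFitter()
for name, sub in df.groupby("group"):
    kmf.fit(sub["months"], event_observed=sub["bcr"], label=name)
    print(name, "estimated 24-month BCR-free survival:",
          round(float(kmf.survival_function_.iloc[-1, 0]), 2))

orp, ralp = df[df.group == "ORP"], df[df.group == "RALP"]
result = logrank_test(orp["months"], ralp["months"],
                      event_observed_A=orp["bcr"], event_observed_B=ralp["bcr"])
print("log-rank p-value:", round(result.p_value, 3))
```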
of the 177 patients who underwent orp , 55 ( 31% ) were potent preoperatively , compared with 41 of the 77 patients ( 53% ) who underwent ralp . in the orp group , nerve - sparing status was bilateral in 47 ( 85% ) and unilateral in 8 patients ( 15% ) ; the corresponding numbers in the ralp group were 33 ( 81% ) and 8 ( 19% ) , respectively . in the subset of potent patients , 28 of 55 ( 51% ) treated with orp and 23 of 41 ( 56% ) treated with ralp were potent at the 2-year follow - up . the recovery rates after orp were 2% at 1 month , 6% at 3 months , 15% at 6 months , 22% at 9 months , 40% at 12 months , and 51% at 24 months ; after ralp , they were 0% , 17% , 29% , 29% , 54% , and 56% , respectively . after adjustment for age and nerve - sparing status , the recovery of sexual function was comparable between the orp and ralp groups throughout the follow - up period ( p=0.418 ) . in the subgroup analysis , in which patients were classified according to prostate volume into small ( < 40 g ) and large ( ≥ 40 g ) volume groups , the potency rates in the orp small - volume subgroup were 0% at 1 month , 0% at 3 months , 10% at 6 months , 21% at 9 months , 35% at 12 months , and 55% at 24 months . in the ralp small - volume subgroup , they were 0% , 24% , 36% , 36% , 56% , and 60% , respectively . ralp was associated with quicker potency recovery in the small - volume subgroup ( p=0.020 ) . between the two small - volume subgroups , there was no significant difference in the time to continence or oncological outcomes . within the large - volume subgroups , patients who had undergone ralp were less likely to become continent than were those who had undergone orp ( p=0.048 ) . in the orp large - volume subgroup , the continence rates were 57% at 1 month , 77% at 3 months , 91% at 6 months , 95% at 9 months , 95% at 12 months , and 97% at 24 months . in the ralp large - volume subgroup , they were 31% , 56% , 75% , 81% , 88% , and 91% , respectively . between these large - volume subgroups , there was no significant difference in the recovery of sexual function or oncological outcomes . in the present study we compared the surgical , oncological , and functional outcomes of orp and ralp . although the study was not randomized , its main strengths were the single - center and single - surgeon setting ; the use of a validated questionnaire , the iief-5 , to evaluate preoperative erectile function ; and the inclusion of consecutive patients . the same protocols for preoperative diagnosis and staging evaluation of pca , perioperative treatment , pathological evaluation , and surgical approaches were adopted for both study groups , and the follow - up periods were sufficiently long for evaluation of functional outcomes . another strength of the methodology was the comparability of the baseline clinicopathologic characteristics between the two groups . bcr and psm are the two commonly used indexes for assessment of oncological outcomes following rp . in the present study , the psm rates were similar in the two groups and consistent with other prior series . the reported incidences of psm ranged from 11% to 37% after orp and from 9.6% to 26% after ralp . although more nerve - sparing procedures were performed in the ralp group , the technique did not significantly increase the incidence of adverse outcomes . the difference in the short - term bcr rates between the two groups was not statistically significant .
similar bcr rates for ralp and orp groups at short follow - ups of 1 and 3 years , respectively , have also been reported . sooriakumaran et al . recently reported that biochemical recurrence - free survival after ralp was 84.8% at a median follow - up of 6.3 years . long - term outcome data on psa progression are not yet available for ralp , owing to the relatively short history of the technique 's availability . further follow - up is required to determine long - term oncological outcomes such as disease - specific death and overall survival . in the present study , the recovery of erectile function was more rapid in ralp patients with small - volume prostates . we found a significant advantage of ralp over orp for those who had small - volume prostates at the 3- and 6-month follow - up , particularly for preoperatively potent patients ( iief-5 ≥ 12 ) undergoing unilateral or bilateral neurovascular bundle sparing . our postoperative potency rates were 24% after ralp and 0% after orp at the 3-month follow - up ( p=0.007 ) , and 36% and 10% , respectively , at the 6-month follow - up ( p=0.026 ) . however , there was no difference in potency rates between the ralp and orp small - volume groups at 12 months ( 56% vs. 35% , p=0.095 ) or 24 months ( 60% vs. 55% , p=0.468 ) postoperatively . neither was there any statistical difference in recovery of erectile function between the two large - volume groups . tewari et al . reported that patients after ralp had a more rapid return of erection : 50% at a mean follow - up of 180 days versus 440 days after orp . rocco et al . reported a significant ralp advantage over orp at 3 , 6 , and 12 months postoperatively for patients , particularly younger patients , who had undergone a nerve - sparing procedure . however , krambeck et al . reported no significant difference in potency rate at the 1-year follow - up . this early return of erectile function could be attributed to preservation of potency with minimized damage to the neurovascular bundles , better magnified visualization , precise anatomical dissection , reduced blood loss , or improved anatomical - reconstruction ability by use of robotic assistance . it has been postulated that in larger prostates , the neurovascular bundles are displaced posteriorly , where they are possibly obscured by the prostate , making them prone to injury . also , a large prostate is less mobile in the pelvis , owing to the smaller available space . these factors could offset the advantage of ralp for potency preservation . interestingly , in the present findings , patients with large - volume prostates seemed to recover continence more quickly after orp than after ralp . one study found that 84% of patients were continent at the 12-month follow - up after ralp . hu et al . reported that men undergoing ralp were more likely to be diagnosed as incontinent . malcolm et al . found no difference in health - related quality - of - life " bother scores " related to incontinence , and another study reported no difference in continence after ralp or rrp at the 1-year follow - up . others , meanwhile , have reported 12-month urinary continence rates as high as 97% after ralp . our findings are interesting in light of the fact that ralp is thought to increase the precision of anastomosis and decrease the incidence of traumatic maneuvers on the urethrosphincteric complex . we suspect that these outcomes are related to a combination of extensive apical dissection and overzealous diathermy at the bladder neck with over - tight suturing , though we have no direct evidence for this .
this difference might be attributable to the necessarily lengthy learning curve for making a running anastomosis compared with the interrupted anastomosis used for orp . or , it might also be due to the differences between the retrograde and antegrade approaches . in any case , there is no real explanation for such findings at the present time . the main limitation of this study , a retrospective review of our database , was the lack of randomized allocation of patients into one of the two treatment arms . the choice of surgical approach was based mainly on patient preferences and requests after they were fully informed about both procedures . still , given that these groups were relatively well matched regarding the comparability of the patients ' baseline characteristics , the substantial differences in functional outcomes shown in the present results can be attributed mainly to the two different surgical approaches . another concern is that because the patients underwent rp at a tertiary - care centre , the present study might not fully reflect epidemiological trends and , indeed , might have incorporated a certain degree of selection bias . the other limitations of this study were the relatively small number of patients and the short follow - up period . in this single - surgeon consecutive series , patients after ralp demonstrated early recovery of erectile function , especially those with small - volume prostates . ralp was also associated with lower blood loss but slightly longer operation times when compared with orp . a large - volume prostate was associated with lower rates of postoperative urinary continence recovery , particularly in ralp patients . long - term follow - up and well - designed randomized controlled trials are required before definitive conclusions on oncological and functional outcomes between orp and ralp can be drawn .
purpose : we compared the impact of prostate volume on oncological and functional outcomes 2 years after robot - assisted laparoscopic prostatectomy ( ralp ) and open radical retropubic prostatectomy ( orp ) . materials and methods : between 2003 and 2010 , 253 consecutive patients who had undergone prostatectomy by a single surgeon were serially followed over 2 years postoperatively . ralp was performed on 77 patients and orp on 176 . the patients were divided into two subgroups according to prostate volume as measured by transrectal ultrasound : less than 40 g and 40 g or larger . recoveries of potency and continence were checked serially by interview 1 , 3 , 6 , 9 , 12 , and 24 months postoperatively . results : ralp was associated with less blood loss ( orp vs. ralp : 910 ml vs. 640 ml , p<0.001 ) but a longer operation time ( 150 minutes vs. 220 minutes , p<0.001 ) than was orp . no statistically significant differences were found between the two groups for oncological outcomes , such as positive surgical margin ( 40% vs. 39% , p=0.911 ) or biochemical recurrence ( 12% vs. 7% , p=0.155 ) . the overall functional outcomes showed no statistically significant differences at 2 years of follow - up ( continence : 97% vs. 94% , p=0.103 ; potency : 51% vs. 56% , p=0.614 ) . in the results of an inter - subgroup analysis , potency recovery was more rapid in patients who underwent ralp in a small - volume prostate than in those who underwent orp in a small - volume prostate ( 3 months : 24% vs. 0% , p=0.005 ; 6 months : 36% vs. 10% , p=0.024 ) . however , patients who underwent ralp in a large - volume prostate were less likely to recover continence than were patients who underwent orp in a large - volume prostate ( 97% vs. 88% , p=0.025 ) . conclusions : patients can be expected to recover erectile function more quickly after ralp than after orp , especially in cases of a small prostate volume .
INTRODUCTION MATERIALS AND METHODS RESULTS DISCUSSION CONCLUSIONS
PMC3956292
there currently is considerable interest in developing therapeutic strategies to enhance plasticity of the adult central nervous system ( cns ) . physical exercise , diet , and various forms of environmental / cognitive enrichment have all been proposed to facilitate plasticity [ 1–4 ] . one critical regulator of plasticity in cortical networks is the maturation and strength of gabaergic signaling , with increasing inhibitory tone during postnatal development generally limiting plasticity [ 5–9 ] . consequently , manipulations aimed at reducing the strength of gabaergic transmission are thought to constitute promising candidate strategies to enhance plasticity of mature , less plastic cortical circuits [ 7 , 8 ] . interestingly , recent reports have provided evidence for the notion that chronic treatment with the antidepressant and selective serotonin reuptake inhibitor ( ssri ) fluoxetine can reduce the strength of gabaergic inhibition and promote plasticity of forebrain synapses . for example , chronic ( 4 weeks ) fluoxetine administration restored ocular dominance shifts in the primary visual cortex ( v1 ) of adult rats , a form of developmentally regulated plasticity that is significantly reduced in the mature brain . in addition , fluoxetine treatment allowed v1 synapses to express greater long - term potentiation ( ltp ) , an electrophysiological index of the ability of synapses to undergo an upregulation of synaptic strength . these plasticity - promoting effects of chronic fluoxetine administration appeared to be mediated by a decrease in intracortical inhibition and translated into significant behavioral effects , as assessed by the restoration of visual functions in a rat model of adult amblyopia . thus , chronic ssri treatment may offer significant therapeutic potential for the restoration of plasticity to levels normally present only during the earlier stages of postnatal brain development . the notion that chronic ssri treatment can exert a prominent facilitating effect on plasticity has also been supported by investigations employing structural and neuroanatomical measures . for example , it has been noted that , in rats , 14-day treatment with fluoxetine resulted in an increase in immediate early gene ( c - fos ) expression in the somatosensory cortex , together with an increased spine density of cortical pyramidal cells ; similar results have also been obtained in hippocampal pyramidal cells . finally , it is now well established that fluoxetine administration enhances neurogenesis in the hippocampal formation of adult animals [ 14–16 ] , an effect that appears to be a critical mediator of some of the behavioral effects seen with ssri treatment . the evidence summarized above indicates the potential of fluoxetine to affect plasticity of forebrain synapses . it is important to note , however , that some investigations have failed to detect beneficial effects ( or noted adverse outcomes ) of chronic fluoxetine treatment on plasticity or in animal models of several neurological diseases , some of which clearly involve deficient plasticity mechanisms ( down syndrome , fetal alcohol syndrome , and neurotoxic brain damage ) [ 17–20 ] . consequently , there is a need for further detailed investigations of the effects of ssri treatment on plasticity mechanisms across various forebrain networks .
in the present study , we assessed the effect of chronic fluoxetine treatment on ltp in the thalamocortical auditory system between the medial geniculate nucleus ( mgn ) and primary auditory cortex ( a1 ) of adult rats . ltp in this projection system shows a sharp , age - dependent decline over postnatal life , with high levels of ltp present during the first 5 - 6 weeks of postnatal life , but only modest levels after postnatal day ( pd ) 100 . here , we tested whether chronic fluoxetine treatment of adult rats would restore ltp to levels normally seen only in juveniles , similar to the effects reported for plasticity in v1 of adult rodents . experiments were conducted on adult , male long - evans rats ( obtained from charles river laboratories inc . , saint - constant , qubec , canada ; 200250 g body weight or about 5055 days old at the arrival in the animal colony ; at least 9095 days old at the time of the electrophysiological procedures ) . rats were individually housed ( cage dimensions 40 20 20 cm ) with ad libitum access to food and water . the colony room was maintained under a reversed 12 /12 hour dark / light cycle ( lights on at 7 pm ) . experimental procedures were performed in accordance with the published guidelines of the canadian council on animal care and approved by the queen 's university animal care committee . all efforts were made in order to minimize animal suffering and the number of animals employed for these experiments . all animals were allowed at least 1 week of acclimatization to the animal colony prior to the onset of fluoxetine treatment . the fluoxetine treatment regimen was the same as that described by vetencourt et al . . fluoxetine ( capsules containing 10 mg fluoxetine hydrochloride , obtained from the kingston general hospital pharmacy , kingston , on , canada ) was dissolved in the drinking water at 0.2 mg / ml and was available ad libitum ; control animals received drinking water without drug . drinking bottles were covered with cardboard or tinfoil to prevent photodecomposition of fluoxetine and were refilled every 48 hours . treatment continued for 4 - 5 weeks ( mperiod = 4.5 and 4.7 weeks for fluoxetine and water rats , resp . ) , with fluid intake ( every 48 hours ) and body weight ( every 7 days ) recorded throughout the treatment period . electrophysiological assessments were carried out at the end of the treatment period ( rat age of about 9095 days ) and followed previously established procedures [ 21 , 22 ] . rats were anesthetized using urethane ( sigma - aldrich , oakville , on , canada ; 1.5 g / kg , given as three intraperitoneal ( i.p . ) injections of 0.5 g / kg each , every 20 min , further supplements as required to reach deep , surgical anesthesia ) . in addition , the local anesthetic marcaine ( 0.2 - 0.3 ml ) was administered subcutaneously under the skin covering the skull . throughout the experiment after anesthesia induction , rats were placed in a stereotaxic apparatus , the skull was exposed , and small holes were drilled over the mgn ( ap 5.5 , l + 4.0 , v 5.4 to 6.4 ) and the ipsilateral a1 ( ap 4.5 , l + 7.0 , v 3.2 to 5.4 , all measurements from bregma ) . additional holes over the cerebellum and frontal cortex were used to secure reference and ground connections ( jewelry screws ) , respectively . 
a stimulation electrode ( concentric bipolar electrode , sne-100 , rhodes medical instruments , david kopf , tujunga , ca ) was lowered into the mgn , and a recording electrode ( 125 μm diameter teflon - insulated stainless steel wire ) was placed in a1 . the final ventral placement of both electrodes was optimized to yield the maximum amplitude of field postsynaptic potentials ( fpsps ) in a1 in response to single pulse stimulation of the mgn . signals were amplified ( model 1800 , a - m systems inc . , carlsborg , wa , half - amplitude filter settings at 0.3 hz to 1 khz ) , digitized ( at 10 khz using a powerlab/4s system , running scope software v.4.0.2 , ad instruments , toronto , on , canada ) , and stored for subsequent offline analysis . stimulation ( 0.2 ms pulse duration ) of the mgn was delivered by means of a stimulus isolation unit ( ml 180 stimulus isolator , ad instruments ) . for each rat , a 30 min period following the final electrode adjustments was given to allow for stabilization prior to the onset of data collection . following stabilization , an input - output curve was generated by stimulating the mgn at successively increasing intensities ( 0.1–1.0 ma , in 0.1 ma increments ) . the intensity that elicited fpsps of 50–60% of the maximal fpsp amplitude was chosen for the formal data collection . baseline fpsps were recorded every 30 s until a 30 min period of stable baseline recordings was established . subsequently , theta - burst stimulation ( tbs ) of the mgn was applied as trains of 10 bursts ( delivered at 5 hz ) , with each burst consisting of five pulses at 100 hz . stimulation trains were repeated every 10 s for a total of four trains . this induction protocol has previously been shown to elicit reliable ltp in the thalamocortical auditory system in vivo [ 21 , 22 ] . after the tbs delivery , fpsps were recorded for 60 min ( every 30 s ) , followed by a second tbs episode ( same as above ) and a final 60 min of fpsp recordings . immediately after the experiment , rats were perfused through the heart with 0.9% saline , followed by 10% formalin . brains were extracted and immersed in formalin prior to sectioning ( 40 μm ) using a cryostat . the locations of all electrodes were examined using standard histological techniques , and only animals with accurate placements were included in the analysis of the electrophysiology data . the fpsp amplitude was computed offline with scope software ( v.4.1.1 , ad instruments ) . values for each rat were averaged over 10 min intervals , and these averages were normalized by dividing them by the averaged baseline ( pre - tbs ) amplitude of that animal . data were statistically evaluated by repeated measures analysis of variance ( anova ) and , if statistically appropriate , simple effects tests using the clr anova software package ( v.1.1 , clear lake research inc . , houston , tx ) . note that the results of all statistical analyses are reported in the appropriate figure captions . for a 4-week period , rats were given access to drinking water ( n = 18 ) or drinking water containing fluoxetine ( 0.2 mg / ml ; n = 20 ) . during this period , weight gain was significantly reduced in rats given access to fluoxetine relative to control ( water ) animals ( figure 1(a ) ) .
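the normalization step described above ( fpsps sampled every 30 s , averaged over 10 min intervals , and expressed relative to the mean pre - tbs baseline ) can be sketched as follows . this python example uses synthetic amplitude values and is an illustration of the procedure , not the authors ' analysis code .

```python
# Normalization of fPSP amplitudes as described above: sweeps collected every
# 30 s are averaged over 10-min intervals and expressed as a percentage of the
# mean baseline (pre-TBS) amplitude. Amplitude values here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
sweep_interval_s = 30
sweeps_per_bin = (10 * 60) // sweep_interval_s        # 20 sweeps per 10-min interval

baseline = rng.normal(1.0, 0.1, 60)                    # 30 min of baseline sweeps (mV)
post_tbs = rng.normal(1.2, 0.1, 120)                   # 60 min of post-TBS sweeps (mV)

baseline_mean = baseline.mean()
binned = post_tbs.reshape(-1, sweeps_per_bin).mean(axis=1)   # one value per 10-min interval
normalized_percent = 100.0 * binned / baseline_mean

print("fPSP amplitude per 10-min bin (% of baseline):", np.round(normalized_percent, 1))
```

values normalized in this way for each animal are what enter the repeated measures anova mentioned above .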
the total weight gain from week 0 ( before treatment onset ) to week 4 was 37 ± 3% and 54 ± 3% for fluoxetine and water animals , respectively , findings that are consistent with the substantial literature demonstrating appetite - suppressant effects of fluoxetine treatment [ 19 , 20 , 23 ] . fluoxetine also reduced water intake ( figure 1(b ) ) , with fluid consumption over 48 h averaging 31 ± 1 ml and 51 ± 3 ml in fluoxetine and water animals , respectively . after 4 - 5 treatment weeks ( 4.5 and 4.7 weeks for fluoxetine and water animals , resp . ) , each rat was anesthetized with urethane to allow for the placement of a stimulating and recording electrode in the mgn and a1 , respectively ( figure 2(a ) ) . consistent with previous work [ 21 , 22 ] , extracellular recordings in the middle layers ( iii / iv ) of a1 revealed that single pulse stimulation of the mgn elicited fpsps consisting of two negative - going components with peak latencies of about 6–8 and 14–16 ms , respectively ( figure 2(b ) ) . previous work using current - source density analysis and pharmacological approaches has revealed that these two negative peaks correspond to current sinks associated with the sequential activation of direct thalamocortical synapses ( layer iv ; first fpsp peak ) and subsequent intracortical synapses ( layers ii / iii ; second fpsp peak ) [ 22 , 24 ] . it is important to note that the urethane dose required for deep surgical anesthesia did not differ significantly between the two groups of animals , with water and fluoxetine rats receiving a final dose of 2.12 ± 0.09 and 2.07 ± 0.06 g / kg of urethane , respectively ( figure 2(c ) ) . thus , chronic fluoxetine treatment did not appear to alter the response to urethane anesthesia . initially , fpsps elicited by single - pulse mgn stimulation were recorded for 30 min in order to establish a measure of baseline synaptic strength prior to ltp induction . stimulation intensities used for the two groups of animals did not differ significantly , with mgn stimulation pulses of 0.48 ± 0.04 ma and 0.5 ± 0.02 ma for water ( n = 11 ) and fluoxetine ( n = 14 ) animals , respectively ( figure 2(d ) ) . further , the amplitude of baseline fpsps did not differ between the treatment groups , with the amplitude of the first peak at 1.05 ± 0.15 mv and 1.0 ± 0.15 mv in water and fluoxetine animals , respectively ( figure 2(e ) ) . the amplitude of the second fpsp peak was 0.48 ± 0.07 mv and 0.59 ± 0.08 mv in the water and fluoxetine group , respectively ( figure 2(f ) ) . it is noteworthy that the amplitude difference between the two groups amounts to 23% , even though the statistical analysis did not indicate this effect to approach significance ( p = 0.3 ) . following the completion of baseline recordings ( 30 min ) , two episodes of tbs were delivered to the mgn , each followed by 60 min of fpsp recordings . in water animals , tbs resulted in successful ltp induction , with the first tbs episode resulting in a potentiation of the first and second fpsp peak to 115% and 123% of baseline , respectively ( figure 3 ; all values reported here are mean values for recordings taken between 31 and 60 min after tbs delivery ) . the second tbs episode resulted in further potentiation in water animals , with the two peaks reaching 120% and 135% of baseline ( figure 3 ) . in fluoxetine animals , tbs also resulted in ltp , but the amplitude of the first and second fpsp peak reached only 111% and 110% of baseline , respectively , following the first tbs episode ( figure 3 ) .
further , after the second tbs , both peaks reached only 113% of baseline ( figure 3 ) . thus , fluoxetine animals showed less potentiation than that seen in the water group , an effect that was significant for the second fpsp peak representing intracortical synapses ( see caption for figure 3 ) . as mentioned , the second fpsp peak in fluoxetine rats exhibits a baseline amplitude 23% higher than that seen in water animals . even though this effect did not reach statistical significance , it might nevertheless indicate a minor enhancement of intracortical synaptic strength following chronic fluoxetine exposure . such an enhancement may act to limit ( occlude ) further ltp induction by tbs delivery . in order to assess this possibility , we performed additional analysis by plotting and correlating baseline fpsp amplitude against levels of ltp for the second fpsp peak in all fluoxetine animals ( figure 4 ) . however , correlations using either raw or rank - ordered data ( figure 4 ) both failed to indicate a significant relation between baseline fpsp amplitude and subsequent potentiation induced by tbs in fluoxetine rats . with the present experiments , we examined whether chronic fluoxetine treatment alters ltp of synapses in the mature a1 of adult rats . several recent reports have provided support for the notion that chronic fluoxetine treatment leads to an enhancement of plasticity in the adult cns [ 7 , 10 , 1214 , 25 ] . in contrast to these findings , we found no evidence of an upregulation of plasticity at a1 synapses . in fact , there was a clear suppression of ltp in fluoxetine - treated rats , in particular for the second peak of the cortical fpsp , thought to reflect synaptic currents originating at intracortical synapses in a1 ( see below ) . previous work has shown that , in rats , fpsps elicited by mgn stimulation in vivo typically consist of two distinct , negative peaks that correspond to current sinks associated with the successive activation of thalamocortical and intracortical a1 synapses , respectively [ 22 , 24 , 26 ] . for both sets of synapses , ltp induced by thalamic stimulation shows a clear , age - related decline , with significant potentiation present up to pd 50 , modest ltp around pd 100 ( about the time ltp was assessed in the present study ) , and very little or no ltp after pd 200 [ 21 , 24 ] . as such , ltp in the rat thalamocortical auditory system provides an appropriate model to study the developmental decline of plasticity in a central sensory system . the present data confirm that adult rats show modest levels of plasticity under the present experimental conditions , with thalamocortical and intracortical synapses expressing ltp of about 120% and 135% of baseline , respectively . surprisingly , rats given chronic fluoxetine showed less ltp , with both thalamocortical and intracortical synapses expressing potentiation of only 113% of baseline , respectively , observations that are indicative of an inhibition of ltp induction mechanisms , especially for intracortical synapses in a1 . the results summarized above regarding ltp in control animals are consistent with previous work , which has also shown that intracortical synapses in the adult a1 show higher ltp levels relative to thalamocortical synapses [ 21 , 22 , 24 ] . interestingly , receptive field plasticity of a1 neurons ( i.e. 
, shifts in the optimal response to different sound frequencies ) also occurs by a potentiation of intracortical but not thalamocortical synapses , suggesting that the ltp measured here plays a direct role in receptive field plasticity and associated changes of the tonotopic map present in a1 . consequently , it is possible that chronic fluoxetine exposure may also impair a1 receptive field shifts in rodents , a hypothesis that clearly requires examination . we employed the same fluoxetine dosing regimen ( 0.2 mg of fluoxetine in 1 ml of drinking water ) as that used in previous work showing a restoration of ocular dominance plasticity and enhancement of ltp in v1 slice preparations obtained from adult rats . in the present investigation , rats in the fluoxetine condition consumed an average of about 15.5 ml of fluid every 24 hours , equaling an intake of about 3.1 mg of fluoxetine . this drug amount corresponds to a daily dosage of about 10.7 mg / kg and 7.9 mg / kg of body weight at the beginning ( body weight of about 290 g ) and the end ( 390 g ) of the treatment period , respectively . these dosages are very similar to those used in previous chronic administration studies demonstrating behavioral and/or neurochemical effects following fluoxetine treatment [ 10 , 17 , 19 , 28 ] . in our experiments , fluoxetine reduced body weight gain during the treatment period , a classic effect of ssri treatment [ 17 , 19 , 20 , 23 ] . in addition , we also noted a suppression of water intake , also consistent with the results of previous work . together , these results confirm the bioavailability and bioactivity of fluoxetine , as administered in the present investigation . while fluoxetine has been suggested to enhance plasticity of the mature cns , empirical evidence assessing this contention has been inconsistent . stewart and reid noted that 15-day treatment with fluoxetine reduced levels of ltp in the dentate gyrus of anesthetized rats , and inhibitory effects on hippocampal ( area ca1 ) ltp in rats have also been reported for acute ( single - dose ) fluoxetine treatment . an elegant , recent investigation revealed that a chronic ( 4 week ) fluoxetine regimen resulted in deficits in the induction of ltp in the hippocampal ca1 field of adult rats , while dentate gyrus ltp was intact . interestingly , the same authors also noted a disruption of ltd , which again was specific for the ca1 field . the ltd impairment is of significance since it suggests that any observed reduction in ltp is unlikely to be related exclusively to an upregulation of synaptic strength following long - term fluoxetine exposure ( see ) . typically , such an effect reduces ltp but yields greater ltd , due to the fact that enhanced synaptic connectivity represents a potentiated state , which limits further potentiation but leaves greater room for synaptic weakening [ 3133 ] . it will be important for future work to assess whether chronic fluoxetine impairs or facilitates ltd induction in areas other than the hippocampal formation . the present experiments did not provide reliable evidence for an upregulation of a1 synaptic strength following chronic fluoxetine , with baseline fpsp amplitudes showing no significant differences between the two treatment groups . previous work has found evidence for an enhancement of field potential strength in rats after fluoxetine administration in both the dentate gyrus and ca1 field [ 17 , 30 ] , even though a lack of fpsp facilitation in the rat dentate gyrus has also been reported . 
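the dosage arithmetic reported above can be reproduced with a few lines of python ; the numerical values are taken from the text , and the variable names are illustrative .

```python
# Reproducing the dose estimate above: ~15.5 ml of 0.2 mg/ml fluoxetine solution
# per 24 h gives ~3.1 mg/day, i.e. ~10.7 mg/kg at ~290 g and ~7.9 mg/kg at ~390 g.
concentration_mg_per_ml = 0.2
daily_fluid_intake_ml = 15.5

daily_dose_mg = concentration_mg_per_ml * daily_fluid_intake_ml   # ~3.1 mg/day

for label, body_weight_kg in [("start of treatment (~290 g)", 0.29),
                              ("end of treatment (~390 g)", 0.39)]:
    print(f"{label}: {daily_dose_mg / body_weight_kg:.1f} mg/kg per day")
```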
it is noteworthy , however , that the second fpsp peak was 23% larger in fluoxetine rats relative to water animals , in particular since this value is very similar to the reduction of ltp for the second fpsp peak in the fluoxetine group ( 135% and 113% potentiation in water and fluoxetine rats , resp . ) . we therefore carried out additional analyses to assess whether greater baseline fpsp amplitude was related to lower levels of tbs - induced ltp , suggestive of an occlusion - like effect of chronic fluoxetine on ltp . however , these analyses did not provide any suggestion of an association between baseline fpsp amplitude and subsequent ltp magnitude . we cannot rule out that such a relation , or a significant difference in baseline synaptic strength , may emerge with larger sample sizes or alternative methodologies to study synaptic connectivity and plasticity in a1 ( e.g. , optical imaging or in vitro approaches ) . it is also possible that some cns regions ( e.g. , the hippocampal formation ) exhibit greater sensitivity to the potential neurotrophic effects elicited by fluoxetine than areas such as the primary sensory fields of the neocortical mantle . previous work has shown that local application of an nmda receptor antagonist directly in a1 of rats blocks the induction of ltp elicited by mgn stimulation in vivo [ 22 , 34 ] . it is well documented that the precise subunit composition of the nmda receptor exerts profound effects on ltp induction by altering the level of calcium influx across the postsynaptic membrane . receptors containing the nr2b subunit exhibit prolonged channel opening duration and greater calcium influx relative to nr2a - expressing receptors , effects that lead to enhanced ltp induction [ 21 , 22 , 35–37 ] . interestingly , in rats , chronic fluoxetine exposure can alter nmda subunit composition by increasing the relative expression of nr2a subunits , raising the possibility that this effect contributes to the reduction of ltp noted here and in previous work [ 17 , 30 ] . future work is required to examine this hypothesis and delineate the precise mechanisms that mediate the effects ( facilitating and inhibitory ) of long - term fluoxetine exposure on the induction of different forms ( e.g. , structural and physiological ) of cns plasticity . in recent years , there has been considerable excitement regarding the potential of various ssris to enhance cns plasticity [ 7 , 8 , 10 , 14 , 39 ] . not only have the plasticity - promoting actions been suggested to mediate mood - enhancing effects [ 14–16 , 39 ] , but ssris may also facilitate plasticity and functional recovery in neurological conditions unrelated to mood disorders ( e.g. , [ 7 , 10 , 18–20 ] ) . while some evidence is clearly supportive of this notion [ 10 , 12 , 13 , 25 ] , the present study confirms and extends previous investigations that have failed to detect an enhancement of plasticity following chronic fluoxetine treatment [ 17 , 30 ] . in fact , both electrophysiological ( ltp ) and structural investigations are compatible with the view that fluoxetine exposure can result in a strengthening and stabilization of synaptic connectivity , which reduces the ability of neurons to express further plasticity [ 12 , 13 , 30 , 38 ] . thus , the effects of fluoxetine on plasticity appear to be complex and bidirectional , findings that clearly require consideration when discussing the use of fluoxetine and other ssris for the modulation of plasticity of the mammalian forebrain .
several recent studies have provided evidence that chronic treatment with the selective serotonin reuptake inhibitor ( ssri ) fluoxetine can facilitate synaptic plasticity ( e.g. , ocular dominance shifts ) in the adult central nervous system . here , we assessed whether fluoxetine enhances long - term potentiation ( ltp ) in the thalamocortical auditory system of mature rats , a developmentally regulated form of plasticity that shows a characteristic decline during postnatal life . adult rats were chronically treated with fluoxetine ( administered in the drinking water , 0.2 mg / ml , four weeks of treatment ) . electrophysiological assessments were conducted using an anesthetized ( urethane ) in vivo preparation , with ltp of field potentials in the primary auditory cortex ( a1 ) induced by theta - burst stimulation of the medial geniculate nucleus . we find that , compared to water - treated control animals , fluoxetine - treated rats did not express higher levels of ltp and , in fact , exhibited reduced levels of potentiation at presumed intracortical a1 synapses . bioactivity of fluoxetine was confirmed by a reduction of weight gain and fluid intake during the four - week treatment period . we conclude that chronic fluoxetine treatment fails to enhance ltp in the mature rodent thalamocortical auditory system , results that bring into question the notion that ssris act as general facilitators of synaptic plasticity in the mammalian forebrain .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion
PMC3818849
dental caries is the most common chronic disease of childhood and significantly impacts children 's well - being . among us children aged from 2 to 5 years of age , more than 25% have caries , a prevalence which appears to be on the rise . yet , young children , particularly those who are low - income , encounter substantial barriers accessing dental care for prevention or treatment of dental caries . meanwhile , primary care physicians ( pcp ) who care for children in the us , specifically pediatricians and family physicians ( and in some settings , nurse practitioners and physician assistants ) , have unique opportunities to deliver oral health anticipatory guidance and implement dental caries primary prevention at frequent well - child - care visits early in a child 's life . infants , young children , and their parents will likely see their pcp as many as 13 times before they have ever visited a dentist , and more children have ready access to primary medical care than to dental care , particularly if they are publicly insured . studies indicate that , with training , physicians can effectively deliver preventive oral health services [ 4 , 5 ] . in an effort to encourage pcps to further their involvement in oral health as a means to diminish oral health disparities among children , 44 out of 50 us states now reimburse pcps to provide preventive oral health care services to medicaid - enrolled children . however , until recently , most pediatricians have lacked formal training in oral health [ 6 , 7 ] that would allow them to effectively deliver these services and bill for them . acknowledging the impact of dental disease on children 's health and the unique role that pediatricians can play in addressing oral health beginning in infancy , the american academy of pediatrics ( aap ) added oral health promotion to its strategic plan in 2006 and set about developing plans to educate us pediatricians about oral health using a train - the - trainer model . funding for these efforts was provided by a grant from the american dental association foundation . the result was the chapter oral health advocate ( coha ) program , in which 1 - 2 representative pediatricians were recruited from each aap chapter to become peer - to - peer educators called cohas for fellow pediatricians in their state or aap chapter ( larger states have multiple chapters ) . cohas were trained at the chapter advocacy training on oral health ( catooh ) , a 1.5-day course held 3 times ( 2008 , 2009 , and 2010 ) at aap headquarters in elk grove , illinois , usa . following the catooh , cohas implemented ( or refined ) an oral health preventive program within their own practices and then disseminated the model to their fellow pediatricians and other pediatric providers using strategies and techniques they had learned during the catooh and , subsequently , within their own practices . this study describes participants ' experiences during the catooh and subsequent implementation during activities as cohas . we were specifically interested in roles that cohas assumed and the opportunities and challenges that cohas faced in their efforts to disseminate oral health knowledge and skills to other pediatricians . 
we intend that findings from this project will ( 1 ) allow refinement , expansion , and replication of the coha program ; ( 2 ) increase awareness of pediatric oral health issues that arise in primary care practice ; ( 3 ) describe factors that influence pediatricians ' willingness and abilities to adopt oral health into their routine and practice ; and ( 4 ) inform future models of physician peer training and advocacy that could be applied in other countries and to other areas within health care . fifteen more cohas were trained in 2009 and 36 in 2010 . at the time of the interviews ( march 2011 – february 2012 ) , there were 64 cohas from 50 states and us territories . the coha program is ongoing ; cohas remain active in their roles and continue to expand their knowledge around oral health and increase their level of advocacy for children . cohas are not paid and do not receive any funding from the aap for their activities . a steering committee of pediatricians , dentists , and staff from the aap and the american dental association ( ada ) planned the catooh , which included didactic , interactive small - group , and hands - on sessions . design of the catooh was based on principles of adult learning and evidence that a combination of didactic and interactive cme activities is substantially more effective than didactic sessions alone in promoting behavior change . the 2008 catooh covered basic oral health science , fluoride , oral health risk assessment , prevention and anticipatory guidance , oral health reimbursement , and hands - on practice in oral examination and fluoride varnish application . in 2009 and 2010 , the agenda was supplemented with presentations from previously trained cohas about lessons learned and best practices and with a session on billing ( see table 1 for the 2010 catooh agenda ) . in addition to attending the catooh , cohas completed an online oral health training program called protecting all children 's teeth ( pact ) , which was developed by pediatricians and dentists working together with the aap ( pact is available at : http://www2.aap.org/oralhealth/pact/index-cme.cfm ) . they also received directed readings , supply lists , resources for peer and patient education , and a list of state dental contacts . at the end of each catooh , participants completed a commitment - to - change contract that specified that each coha would conduct at least 4 training sessions per year . additionally , individual coha goals included working with state medicaid programs around pcp oral health reimbursement , educating residents and other trainees about oral health , linking with oral health coalitions , and improving medical / dental relationships . after each catooh , the aap offered technical assistance and organized an electronic listserv for cohas to share ideas , strategies , and support and for research updates and announcements . the aap and the university of washington institutional review boards ( irb ) approved this project . a semistructured script was used with questions in the following categories : motivation to become a coha , previous oral health experience , perceptions about the catooh , activities undertaken as a coha , facilitators and barriers encountered by cohas , and recommendations for the future . the first 8 interviews were conducted in person during a coha advisory council meeting and the remainder by telephone . each audiotape was reviewed , the content was categorized into themes , and representative quotes were selected .
to reduce bias , the interviewer ( cwl ) was not involved with planning or implementation of the catooh or the coha program . findings , including themes and representative quotes , were presented , and feedback about accuracy and completeness was elicited from the catooh steering committee and coha advisory council . forty cohas responded and were scheduled for an interview ( 62% responses after 3 emails ) . participants had graduated from medical school an average of 17 years prior to the interview ( range 4 to 44 ) . approximately one - quarter of subjects were in private practice , one - quarter practiced at a community health center or federally qualified health center , and one - half were academic pediatricians . most cohas practiced in suburban or urban locales while approximately 10% worked in rural settings . approximately two - thirds of interviewees had previous oral health involvement prior to becoming a coha . these individuals volunteered to be cohas because of their interest in oral health , which usually was motivated by their patients ' oral health problems and difficulties accessing dental care . the other third of cohas had no previous oral health experience , but most were involved with their local aap chapter and/or other advocacy activities such as reach out and read . some of those interviewed confessed an initial lack of interest in oral health prior to attending the catooh , as this participant stated , i really was n't that interested ( in oral health ) but when they asked for volunteers to be a coha , no one volunteered so i figured , ok , i 'll go . however , the coha training proved influential and this same individual went on to say , i came out of the 2008 catooh and was really excited about ( oral health ) and i was on fire about how we could do this with pediatricians . towards the goal of optimizing children 's oral health , cohas advocated pediatricians ' role to be that of providing preventive oral health anticipatory guidance , screening for caries risk and dental disease , applying fluoride varnish to children at high risk for dental caries , and facilitating access to a dental home . almost all of the interviewees met their goal of conducting at least 4 oral health training sessions per year and most did more . in general , cohas felt the on - site training that they provided to other pediatricians and their staff was well received . as fellow pediatricians , cohas were uniquely able to relate to those that they were training but cohas also acknowledged that each pediatric practice is different , and thus , an individualized approach to training was necessary . for example , in some practices , the pediatrician applies the fluoride varnish whereas , in other practices , fluoride varnish application is delegated to another health care provider , such as the medical assistant . in addition to academic detailing and on - site training , cohas used other ways to reach out to pediatricians in their state / chapter to provide education , usually by email or presenting grand rounds at their hospital or area medical schools . some cohas were able to make a greater impact by focusing time and energy at a state government level , for example , meeting with state medicaid directors to advocate for pcps ' reimbursement for oral health services and for expanded access to dental care for poor and low - income children . 
academic cohas , meaning those who work at universities and their affiliated medical centers and who typically have both clinical and educator roles , explained that their positions allowed them more time to spend on oral health activities than clinicians in private practice since it was expected that they would be involved in community projects , outreach , and trainee education . most academic cohas provide pediatric medical care for a disproportionate share of children with special health care needs , publicly insured and uninsured children . there were common lessons that academic cohas sought to impart to trainees : ( 1 ) oral health is part of well - child - care ; ( 2 ) oral health prevention is easy to do ; ( 3 ) it is important for pediatricians to partner with dentists in their community . additionally , academic cohas gave resident noon lectures about oral health , developed a continuity clinic oral health curriculum , and incorporated an oral health module into the residents ' community and/or advocacy rotation . these cohas reported that residents had little difficulty incorporating oral health into their visits with patients . interviewees attributed residents ' ease with oral health to a few factors including that residents have additional time to spend with patients and that residents are still in the process of developing their routine . referring to oral health , one coha said , residents just do it if you tell them to . although there was variation in the infrastructure in place to support cohas , most cohas listed 5 factors that enabled their success as cohas : ( 1 ) the catooh ; ( 2 ) support from the aap , fellow cohas , and others ; ( 3 ) personal experience implementing oral health into their practice ; ( 4 ) relationships with dentists ; ( 5 ) reimbursement for oral health services . in addition to knowledge gained at the catooh , cohas learned from other cohas ' successes and failures , were given valuable resources like flip charts to use when educating fellow pediatricians , and developed strategies for developing collaborative relationships with dentists , expanding pediatrician involvement in oral health and optimizing billing for these services . furthermore , the lectures and discussions with dentists at the catooh helped cohas appreciate the expertise of their dental colleagues and made dentists in general seem more approachable . the most valued aspects of the catooh was the hands - on aspect of the training , meaning that cohas were able to examine and apply fluoride varnish to actual children . cohas found their experience at the catooh to be empowering as this coha said , all of a sudden it hit me . cohas , whether they were new to oral health or previously involved , came away from the catooh highly motivated to promote oral health involvement among fellow pediatricians and to improve the oral health of children . it was important , cohas expressed , to maintain this momentum upon return to the coha 's home states and to have a forum for ongoing collaboration and exchange of ideas . to that end , after each catooh , the aap national office maintained regular contact with the cohas . cohas also worked with their local aap chapter and stayed in touch with fellow cohas via the listserv . through these interactions , cohas could avail themselves of expert assistance when problems or questions arose and were able to share resources and ideas . 
most cohas commented positively on the support they received from their local aap chapter and its executive director who often helped with outreach and legislative contacts , as one coha explained , ( the executive director ) did a lot so that i could focus on outreach rather than organizing . some cohas worked in communities and states where there was a preexisting oral health coalition with which they could work and rely on for additional support . in settings with limited resources , a few cohas applied for small grants , usually from foundations , to offset the costs of some of their oral health activities . one coha used americorps volunteers , who helped in developing and maintaining detailed online and printed lists of local dentists ' contact information , accepted insurance plans , and wait times for new appointments . after returning from the catooh , cohas focused their initial efforts on incorporating oral health into their own practice and , in the process , learned a variety of lessons , as this comment reflects : once you have done about 20 to 30 , it becomes part of your routine . you are not clumsy anymore ( you need to ) do it whether you are running behind or not . cohas believed that their insider perspective provided them with ease and credibility in talking to fellow pediatricians and helped them be more positive about the process of integrating oral health into primary care as these quotes reveal : on paper it looks complicated . you need a pediatrician who has done it to make it doable . at first it takes you 3 - 4 minutes , but if you incorporate oral health into the history and the oral screening exam into your physical and then put the fluoride varnish on while you 're examining the child 's mouth , then you 're done and it takes 60 seconds once you are used to it . cohas without prior oral health experience seemed to have a better sense about how the average pediatrician might be resistant to undergoing oral health training and adopting oral health into his / her practice . for example , one coha remarked , if you do n't know anything about ( oral health ) , then you do n't understand the magnitude of the problem and you do n't know how easy it is , so you just think you ca n't add one more thing to your plate . when cohas had overcome such barriers personally , they felt they were more effective in encouraging other pediatricians to become involved . some cohas practiced in areas where dentists were already involved in training physicians about oral health , giving cohas the opportunity to participate and provide the pediatrician perspective . for example , one coha who partnered with a dentist for such presentations noted : the dentist knows the science but he does not really know how a pediatric office works and what are going to be the barriers for pediatricians . 
when our state medicaid program tried to roll this out without pediatrician involvement , none of the pediatricians was really sure they wanted to do it because , ( after the dentist 's presentation ) , the pediatricians did not know what would be involved , could n't see how easy it was ( because there was no hands - on demonstration ) , and that billing would be easy for the ( pediatrician 's ) billing staff . additionally , most cohas met with local dentists to discuss their role as a coha and , in doing so , were also able to explicitly address fears that pcps were going to be practicing dentistry . cohas found that dentists were more supportive than expected once they found out that the pediatricians were focused on caries primary prevention in infants and young children ( whom general dentists are often uncomfortable seeing , cohas said ) . meeting with dentists as a coha allowed the pediatrician - dentist relationship to expand into a more collaborative one in which cohas felt greater ease referring patients to and consulting dentists about specific cases . positive experiences with dentists gave cohas greater confidence in educating fellow pediatricians about the importance of children having a dental home . even in settings in which access to regular dental care was limited , almost all of the cohas had developed and shared strategies with fellow pediatricians for obtaining urgent dental care for their patients with acute dental problems . to that end , cohas each knew a few dentists whom they could call upon for dental emergencies or more urgent treatment needs , as this coha described : when i see rotten teeth , i call the dentist and make the appointment for the family . ( when i ask them personally ) , they will never turn me down . in most states , cohas could rely on the hook that it is a procedure that pediatricians can do and get paid for . the reimbursement was particularly attractive in states such as washington and north carolina , where medicaid payment to pcps for delivering oral health services ranges from $ 50 to $ 70 per encounter . however , the average payment is $ 15 to $ 25 and most state medicaid programs only reimburse for fluoride varnish application . oral screening , risk assessment , and family education are expected but in most states not paid separately . cohas encountered 3 levels of barriers related to oral health dissemination to fellow pediatricians : ( 1 ) personal professional barriers that interfered with achievement of goals they had set for themselves ; ( 2 ) policy and colleague - level barriers , in the form of pediatrician reluctance to undergo training about oral health ; ( 3 ) community-/patient - level barriers , which affected pediatricians ' abilities to optimally address their patients ' oral health needs . 
the most often cited personal barrier faced by interviewees was lack of time to accomplish the activities they envisioned for themselves as cohas . the economic climate may have worsened this situation for some cohas who had acquired more clinical duties or lost administrative time as their respective institutions dealt with budget shortfalls . the majority of cohas had little or no funding and few resources . when asked what would help them do more , cohas typically replied , in one form or another , more time , more money , and more help . fellow pediatricians would sometimes decline cohas ' offers to conduct an office - based training , citing limited time and being overwhelmed with other demands . as one coha put it , they worry about the 1500 other things we have to do for bright futures ( guidelines for health supervision ) . there were other concerns among pediatricians , including some of the cohas themselves , surrounding reimbursement for oral health activities and fluoride varnish application being available only for medicaid - insured children . although low - income children are considered at higher risk for dental disease , some cohas stated that they wanted delivery of oral health services to be based on individual need , regardless of a child 's insurance , and were uncomfortable doing it for one population and not another . logistically , it was often challenging to identify and direct services only to medicaid - insured children in offices that served a mix of privately and publicly insured children ( which is the norm for pediatricians in private practice in the us ) . there were also unique , state - specific barriers that made incorporating oral health into pediatricians ' practices more challenging . for example , some states required that pcps undergo oral health training in person with a dentist in order to qualify for medicaid reimbursement . this requirement imposed burdens of time away from practice , need to travel , and lack of physician perspective . cohas and their fellow pediatricians encountered barriers to educating families about oral health . in some communities , there was limited oral health literacy , which hindered families from seeking regular professional dental care and practicing home oral hygiene . however , the main barrier to optimizing their patients ' oral health was limited access to quality professional dental care , as this coha described : part of the ( coha ) program is to encourage ( pediatrician ) referral to dentists and the standard question ( from the pediatrician ) is , to whom do i refer ? and you do not always have an answer . most cohas described that , within their communities , privately insured children typically went to pediatric dentists in private practice , but such care was unavailable to lower - income children because these dentists did not accept medicaid . more often than not , low - income children received dental care at community health centers or , increasingly , at for - profit dental clinic chains that were geared exclusively towards medicaid - insured children . cohas expressed concerns , which were based upon comments made by their patients ' parents , about what seemed to be lower quality of dental care delivered at some of these chain clinics . 
this is the first paper to describe a national program of peer - to - peer physician education and advocacy about oral health . the previous literature reported that pediatricians perceive preventive oral health as within their purview and that pcps are capable of delivering preventive oral health services , described state 's efforts at oral health integration into primary care [ 4 , 11 , 12 ] , and demonstrated that pcp efforts result in improved oral health among their patients . however , past efforts to train pediatricians about oral health have been limited to single states and have not always included pediatricians in the planning and delivery of educational programs . in this project , cohas from every chapter were recruited for a national training program with the expectation that they would return to their home state and educate / train fellow pediatricians to deliver preventive oral health services , thus allowing for more widespread , standardized , and rapid dissemination . furthermore , pediatricians were involved in developing and revising the catooh , and cohas learned from one another 's experiences . utilizing pediatricians to train other pediatricians was considered essential because cohas uniquely understood how pediatric practices function and how pediatricians could incorporate oral health into their routines . cohas provided at least 4 , and sometimes substantially more , oral health training sessions per year to fellow pediatricians , as well as at grand rounds and local conferences . cohas ' original goals expanded and evolved once they returned to their home state and better understood the needs of their fellow pediatricians and children within their state . for some cohas , this meant focusing more on their efforts on state - level advocacy or on promoting collaborative relationships with dentists . because of this , cohas sometimes had to rely on strategies other than traditional academic detailing to reach fellow pediatricians ( e.g. , emailing pediatricians about web - based oral health training opportunities ) . yet , cohas then lost the advantage of a personal visit during which they could facilitate hands - on training . resources that could allow cohas to expand their efforts might include funding for administrative support and outreach efforts and assistance in developing alternative training strategies to accommodate pcps ' limited time . these cohas formed the eyes and ears for the status of children 's oral health within their communities . collectively , cohas could bring attention to these issues , but they need to know how to quantify and direct their concerns . furthermore , a number of cohas had access to preexisting data , such as medicaid claims and other reports , which , with additional technical assistance , would allow for tracking of oral health outcomes . pediatric residents ' adoption of oral health was viewed as successful by the cohas who supervised them . ideally , having an established oral health routine when one finishes residency means that such routines will be sustained . family medicine is farther along in this process than any other medical specialty ; their residents complete formal , standardized education in oral health as a required part of their training . finally , cohas made important inroads in developing collaborative relationships with dentists in their communities , a process that was encouraged by positive interactions with dentists during the catooh . 
such interdisciplinary relationships enhance professional learning , improve patient care , and ideally , promote improved access to dental care for children . when it was difficult for pediatricians ' patients to access dental care , it was more challenging for pediatricians to fully implement preventive oral health into their practices . this was because pediatricians lacked any place to refer patients for ongoing dental care or when oral health problems were identified on screening examination . partnering with programs such as washington 's abcd ( access to baby and child dentistry ) , which trains dentists to care for young children and provides enhanced medicaid reimbursement to do so , would satisfy the critical link of a professional dental care referral source that pcps need to be truly effective in promoting oral health for all children . it is important to acknowledge the limitations of this work . while employing qualitative methods allowed themes to emerge that otherwise might not have been considered in a traditional survey , the time - consuming nature of the interview may have discouraged some cohas from responding . a response rate of 62% is reasonable for surveys of physicians but those who did not respond may have had different experiences or perspectives that are not reflected in this paper . this paper describes a novel nationwide effort to train pediatricians to be oral health peer educators and advocates . it also provides insight into the varying roles cohas played and the opportunities and challenges that cohas and their fellow pediatricians encountered integrating oral health into well - child - care visits . some barriers identified in these interviews are modifiable , such as by streamlining training requirements for pcps to bill medicaid for oral health services , whereas other barriers are more difficult to overcome , for example , time constraints among pcps . nevertheless , cohas and their fellow pediatricians found that , once initiated , providing oral health services takes less time than anticipated and delivers a valuable service to their patients . difficulty finding a dental home for patients who lack private dental insurance or cash to pay out - of - pocket is a known barrier to promoting preventive oral health within pediatric practice . however , results from this study point to the potential for national- and local - level collaboration between dentists and physicians as a means to expand interdisciplinary education and collegiality , as well as to expand access to professional dental care for all children . though cohas largely felt their efforts in reaching out to and training their peers had been successful , the increasing time pressure in pediatric practice creates a need for the most efficient training strategies ; these could include use of social media , web - based resources , oral health prompts in the electronic medical record , and incentivizing training by pairing maintenance of certification with oral health quality improvement efforts . the relative ease with which pediatric residents adapted to implementing oral health services highlights the need for focused efforts in medical school and residency to ensure that new physicians receive sufficient didactic and clinical training in oral health . this paper , focused on training and implementation , is the first in a series looking at the impact of the coha program . 
results provide insight into factors that bear consideration when asking physicians to incorporate preventive oral health care into medical practice . these include the importance of ( 1 ) pediatrician involvement in designing and delivering oral health educational programs ; ( 2 ) beginning oral health education early in medical training ; ( 3 ) individualizing the approach for each physician practice ; ( 4 ) expanding oral health surveillance and advocacy capacities ; ( 5 ) incorporating dental partnerships into every level of implementation . applying the lessons learned in this study along with ongoing technical and financial support , the coha program holds promise to further improve access to preventive oral health services in pediatric medical practices , diminish oral health disparities , inform oral health policy , coordinate state - level oral health surveillance and quality improvement initiatives , and enhance referrals and collaboration with dental professionals .
objective . ( 1 ) to describe an innovative program training us pediatricians to be chapter oral health advocates ( cohas ) . ( 2 ) to provide insight into cohas ' experiences disseminating oral health knowledge to fellow pediatricians . patients and methods . interviews with 40 cohas who responded to an email request , from a total of 64 ( 62% response ) . transcripts were analyzed for common themes about coha activities , facilitators , and barriers . results . cohas reported positive experiences at the aap oral health training program . a subset of academic cohas focused on legislative activity and another on resident education about oral health . residents had an easier time adopting oral health activities while practicing pediatricians cited time constraints . cohas provided insights into policy , barriers , and facilitators for incorporating oral health into practice . conclusions . this report identifies factors influencing pediatricians ' adoption of oral health care into practice . cohas reported successes in training peers on integrating oral health into pediatric practice , identified opportunities and challenges to oral health implementation in primary care , and reported issues about the state of children 's oral health in their communities . with ongoing support , the coha program has a potential to improve access to preventive oral health services in the medical home and to increase referrals to a dental home .
1. Introduction 2. Methods 3. Results 4. Discussion 5. Conclusions
PMC5174185
neuroimmune interactions are increasingly appreciated as both an important regulator of normal brain development and function and a potential contributor to the pathophysiology of a range of neuropsychiatric illnesses . in particular , there is accumulating evidence that immune dysregulation can contribute to obsessive - compulsive disorder ( ocd ) and tourette syndrome , at least in a subset of cases [ 1 , 2 ] . ocd is characterized by unreasonable or excessive thoughts and fears ( obsessions ) and/or repetitive behaviors ( compulsions ) . tourette syndrome , which is frequently comorbid with ocd , is characterized by tics : repetitive , stereotyped , involuntary movements and vocalizations [ 4 , 5 ] . both ocd and tourette syndrome are accompanied by pathological changes in the corticobasal ganglia circuitry , especially in the striatum [ 6 , 7 ] . a role for dysregulated immune function is particularly clear in the syndrome known as pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections , or pandas . pandas is characterized by the sudden onset of ocd and/or tic symptoms in childhood , following a streptococcal infection [ 9 , 10 ] . the symptoms are usually dramatic and can include motor and vocal tics , obsessions , and compulsions . it has been hypothesized that pandas arises from the development of brain - reactive autoantibodies after infection with group a streptococcus . here we review the evidence for an immunological etiology for ocd , tourette syndrome , and related conditions . we focus on one particular component of the immune system : microglia , the brain - resident immune cells . these enigmatic cells have recently emerged as potential key players in the pathophysiology of neuropsychiatric disorders . their activation in neurological disease has classically been associated with inflammation , neuronal damage , and neurodegeneration . however , over the past decade , novel roles for microglia in brain development , homeostasis , and plasticity have emerged . a groundbreaking study demonstrated that microglia can engulf synapses during normal postnatal development in mice . synaptic pruning by microglia is necessary for the formation of brain circuitry and normal connectivity . disruption of neuron - microglia interactions , for instance by loss of fractalkine / fractalkine receptor signaling , results in a range of neural and behavioral abnormalities . microglial cells are also necessary for adult neurogenesis and provide support for neuronal survival . more recently , in keeping with our growing understanding of their roles in modulation of normal brain function , research has focused on neuropsychiatric conditions that are not characterized by frank neuronal death . microglial contributions to pathophysiology in these disorders may be subtle and may relate to their noninflammatory functions . as new functions of microglia in normal brain development and function are discovered , and disruption of these functions in disease is characterized , new therapeutic strategies will emerge . a number of animal models have been described in recent years in which the primary behavioral pathology is a maladaptive excess of repetitive behaviors , most commonly grooming . these have often been interpreted as modeling ocd [ 19 - 21 ] , but repetitive grooming has also been described in models of tourette syndrome , autism , rett syndrome , trichotillomania , and other conditions . 
we first review several models in which the precise disease correlate is less firmly established but the association with microglial abnormalities in the corticostriatal circuitry is particularly striking . animal studies with clearer etiopathogenic links to particular diseases are reviewed subsequently , together with the relevant clinical literature . an early study reported that knockout of the hoxb8 gene produces compulsive grooming , progressing to hair removal and ultimately to skin lesions . hoxb8 expression in the brain is restricted to microglia ( identified in these experiments by their expression of the cell - surface marker cd11b ) . strikingly , abnormal behavior in hoxb8 knockouts ( kos ) is rescued by transplantation of normal bone marrow , which repopulates the brain with wild - type microglia . conversely , excessive grooming is produced by transplantation of bone marrow from hoxb8 ko mice into wild - type animals . interestingly , not all cd11b cells in the brain express hoxb8 , which suggests that hoxb8 cells might be a subpopulation of microglia , constituting approximately 40% of total microglial cells . however , the total number of microglial cells in the brain of hoxb8 mutant mice is reduced by only 15% . at least two explanations for the behavioral phenotype are possible . first , the hoxb8 subpopulation may be necessary for maintaining normal brain function , and loss of these cells in the ko animals may produce pathological grooming . second , the population of hoxb8-negative microglia may expand in the knockout animals , which may lead to functional abnormalities . in progranulin deficient ( grn ) mice , microglial activation leads to both excessive grooming and neurodegeneration . this led to the suggestion that their repetitive behavior could be more related to classic inflammatory microglial activation and neuronal damage , rather than to the loss of a neuroprotective or neuromodulatory function . notably , autosomal dominant mutations in the human grn gene contribute to a common form of familial frontotemporal lobar degeneration [ 27 , 28 ] . no association of grn mutations with ocd , tourette syndrome , or related conditions has been described . cd11b cells in the brain are all of monocyte lineage , but they can be either resident brain microglia or peripherally derived monocytes . although both resident microglia and peripherally derived monocytes express cx3cr1 [ 29 , 30 ] , only the latter express ccr2 , providing a powerful marker to discriminate between resident microglia and infiltrating monocytes in the brain . cx3cr1 microglia are widely distributed through the brain parenchyma , while ccr2 monocytes are rarely seen in the healthy brain . interestingly , microglia and monocytes play differential roles in neurodegeneration and brain injury [ 31 - 33 ] . distinguishing between these two populations may therefore prove to be quite important in understanding how microglial abnormalities can lead to repetitive behavioral pathology . in hoxb8 ko mice , the fact that abnormal behavior is rescued by transplantation of wild - type bone marrow indicates that hoxb8 cells derived from circulating monocytes can enter the brain and have behavioral effects . this is consistent with evidence indicating that hoxb8 regulates monocyte / macrophage differentiation from hematopoietic precursors [ 34 - 36 ] . 
on the other hand , another recent paper showed that mice lacking cx3cr1 , which is expressed by brain - resident microglia , show excessive grooming , along with other behavioral abnormalities that were interpreted as representing an autism - spectrum disorder - like phenotype . considerable evidence suggests that immune dysregulation may contribute to the pathophysiology of tourette syndrome [ 37 , 38 ] . however , two recent studies , using different methodologies , have suggested abnormalities specifically in microglial activation in patients with tourette syndrome . both studies focused on the basal ganglia . in a recent postmortem analysis of brains from tourette syndrome cases , lennington et al . described an increased number of cd45 microglial cells in the striatum . these cells had morphological changes consistent with neurotoxic activation , concomitant with enriched expression of inflammatory genes [ 39 , 40 ] . importantly , the brain samples were obtained from refractory adult patients ; no comparable postmortem data exist for more typical pediatric and/or fluctuating disease . a second recent study used in vivo positron emission tomography ( pet ) imaging with (11)c-[r]-pk11195 ( pk ) , a ligand that binds to the translocator protein ( tspo ) , which is expressed by activated microglia . increased pk binding , indicative of inflammatory microglial activation , was observed in the caudate nuclei bilaterally in children with tourette syndrome . an important caveat is that children with tourette syndrome were compared to adult healthy controls ( mean age 11.4 years versus mean age 28.7 years ) . nevertheless , as the first study to image microglial activation in vivo in tourette syndrome , this is an important advance . work in animal models of tic disorders may help to elucidate the role of microglia in their pathophysiology . until recently , there were no animal models of tourette syndrome with clear links to its etiopathophysiology in which to do such work . recent work has identified a hypomorphic mutation in l - histidine decarboxylase ( hdc ) , which encodes the rate - limiting enzyme in the biosynthesis of histamine , as a rare but high - penetrance genetic cause of tourette syndrome . knockout of the hdc gene , which recapitulates this molecular abnormality , thus produces an animal model with strong etiologic validity . these mice exhibit behavioral and neurochemical abnormalities seen in patients with tourette syndrome , further confirming the validity of the model [ 44 , 45 ] . we found that microglial cells are not activated under basal conditions in the striatum of hdc - ko mice ; rather , microglia from these mice exhibit reduced arborization and normal expression of inflammatory markers . a similar effect is seen when neuronally derived histamine is specifically disrupted , through targeted virus - mediated ablation of histaminergic neurons in the posterior hypothalamus . the total number of microglia is unchanged in ko animals , but the number of microglia expressing insulin - like growth factor 1 ( igf-1 ) is reduced . igf-1 expressing microglia are necessary for neuronal survival and promote neurogenesis in nonpathological conditions [ 16 , 17 ] ; a specific reduction of these cells suggests impairment in these functions . inflammatory challenge with bacterial lipopolysaccharide ( lps ) dramatically changed this pattern : microglial activation in the striatum was enhanced in hdc - ko mice , compared with wild - type controls . 
this was accompanied by enhanced induction of the proinflammatory cytokines il-1β and tnf-α . taken together , these findings suggest that in this mouse model of tourette syndrome there is a deficit in microglia - mediated neuroprotection , accompanied by overreactivity to environmental challenge . such complex mechanisms cannot be appreciated in human studies , which reinforces the importance of work in animal models to clarify the mechanisms of microglial dysregulation in neuropsychiatric disease . the normal number of microglia in the hdc - ko model and their reduced arborization at baseline contrast with the more numerous and activated microglia seen postmortem in patients . it is possible that the activation of microglia observed in patients with tourette syndrome emerges only after challenge by environmental factors , such as infection , or over the course of aging . our studies [ 44 , 46 ] were performed in young adult mice housed in a pathogen - free environment . after lps challenge , microglial activation in the animal model much more closely resembles that seen postmortem in humans . regardless of this consideration , all the studies point to activation of microglia in the basal ganglia in tourette syndrome [ 39 , 41 , 46 ] , irrespective of the complexity of the underlying mechanisms . immune dysregulation has also been implicated in ocd ; however , this evidence is much weaker than in the case of tourette syndrome , except in the case of pandas , to which we return below . more particularly , the role of microglial cells in ocd has not been clearly elucidated . the hoxb8 knockout mouse , described above [ 19 , 25 ] , has been described as a mouse model of ocd and may thus implicate microglial dysregulation in the disorder ; however , clear clinical data linking abnormalities in the hoxb8 gene , or the consequences of its disruption , to ocd have yet to emerge . to date , there are no imaging or postmortem studies describing microglial abnormalities in patients with ocd . obsessive - compulsive disorder ( ocd ) and tourette syndrome often strike in childhood . in a subset of cases , acute onset temporally coincides with a bout of infectious disease , particularly with streptococcus ; this clinical syndrome is known as pandas , or more generally pediatric acute - onset neuropsychiatric syndrome ( pans ) . by analogy with the better - understood pathophysiology of rheumatic fever and sydenham 's chorea , ocd and tourette syndrome symptoms in these cases have been hypothesized to arise from the development of autoantibodies that cross - react with proteins normally expressed in the brain ; this mechanism is known as molecular mimicry . while many details of pandas as a clinical entity remain unclear , and some are controversial , the association of immune dysregulation with ocd and tourette syndrome symptoms in this subset of pediatric patients is increasingly clear . the tspo / pk pet imaging study of microglial activation described above examined both noninfectious tourette syndrome and pandas . children with pandas had increased pk binding in the striatum with respect to adult healthy controls ; this corresponds with increased striatal volumes previously described during acute illness in pandas patients [ 48 , 49 ] . inflammation was higher and more broadly spread through the bilateral caudate and lentiform nucleus in pandas than in non - pandas tourette syndrome . 
importantly , this comparison with age - matched patients avoids the interpretive difficulties created by comparison to an adult healthy control group ; the observed differences support the notion that pandas is etiologically distinct from non - pandas tourette syndrome . a recent study in mice examined the effects of intranasal group a streptococcal ( gas ) infection and may begin to shed light on the role of microglia in pandas . repeated intranasal gas inoculations result in an increased number of cd68/iba1 activated microglia in the glomerular layer of the olfactory bulb . abnormal synaptic pruning , probably mediated by microglia , was also observed ( see below ) . the majority of activated microglia were found in close proximity to cd4 t cells , suggesting that gas antigens could be presented to th17 cells by local microglia . the role of microglia in neurodegenerative diseases has been understood in terms of classic , inflammatory activation , which may be both a consequence and a cause of neuronal damage . in ocd and tourette syndrome , which are not characterized by frank neural degeneration , the nature of any contribution of microglial dysregulation to pathophysiology is much less clear . naive hdc - ko mice display morphological abnormalities in striatal microglia that suggest quiescence , together with normal expression of inflammatory markers but a reduced number of igf-1 expressing microglia . igf-1 expression by microglia is induced by th2 cytokines such as il-4 , which induce a neuroprotective phenotype , at least in organotypic hippocampal cultures . in vivo , igf-1 expressing microglia support cortical neurons during development and promote neurogenesis in adulthood [ 16 , 17 , 52 ] . thus , deficiency of igf-1 expressing microglia in this animal model of tourette syndrome might lead to impaired neuroprotection and , consequently , to enhanced susceptibility to neuroinflammation after an environmental challenge ( figure 1 ) . consistent with this , lps challenge triggers an exaggerated response in hdc - ko mice , both at the level of morphological activation and the production of the inflammatory cytokine il-1β ( figure 1 ) . more generally , these results suggest that , in some neuropsychiatric disorders in which no marked neurodegeneration occurs , microglial dysregulation may constitute a failure of neuroprotective functions , which may create a vulnerability to neuroinflammation . this mechanism may explain observations in other animal studies , in which loss of microglia - specific genes triggers abnormal grooming behavior [ 15 , 25 ] . for example , cx3cr1 is a key molecule for neuron - glia communication and has a role in neuroprotection [ 55 - 57 ] , which supports the interpretation that knocking it out disrupts neuroprotective or regulatory functions of microglia ( figure 1 ) . in postmortem tourette syndrome samples , loss of certain types of interneurons has been reported in the basal ganglia [ 39 , 58 , 59 ] . we have found reduced igf-1 expressing microglia in the striatum in the hdc - ko model of tourette syndrome pathophysiology . these igf-1 expressing microglial cells are required for neuronal support during postnatal development , at least in the cortex [ 16 , 17 , 52 ] . although these phenomena have not been linked , it is plausible that impaired igf-1 microglial function in the maintenance of striatal neurons might contribute to the neuronal loss observed in tourette syndrome ( figure 2 ) . 
hdc - ko mice have not been reported to develop spontaneous reduction of striatal interneurons ; however , this might happen in aging mice or young mice subjected to immune challenge . in fact , hdc - ko microglia are more susceptible to lps - induced activation than wild - type microglia . recent studies have described a key role for microglia in synaptic pruning during development , with long - lasting consequences in adulthood [ 13 , 14 ] . cx3cr1 knockout mice exhibit deficient synaptic pruning , excessive grooming , and social deficits . whether this function is altered in ocd or tourette syndrome remains unknown . progranulin ko mice , on the other hand , have abnormal microglial activation and increased synaptic pruning , which results in elimination of inhibitory synapses in the ventral thalamus ( figure 2 ) , hyperexcitability in the thalamocortical circuits , and repetitive behavioral pathology . as mentioned above , there is limited but promising evidence from animal studies that synaptic pruning might be altered in pandas . using an animal model of intranasal gas infection , dileepan and coworkers observed microglial activation and loss of vglut2 , a marker of excitatory synapses . these results raise the possibility that synaptic pruning of excitatory connections may be increased in pandas ( figure 2 ) . microglia are dynamic cells that continuously survey the brain parenchyma ; their contacts with neuronal synapses are reduced in frequency by reductions in neuronal activity [ 61 , 62 ] . in the aging retina , for example , this may lead to impairments in their surveillance ability [ 63 , 64 ] . it is possible that microglial abnormalities in the hdc - ko mouse likewise lead to alterations of synaptic pruning . preliminary results suggest that hdc - ko animals may have cx3cr1 deficiencies ( frick et al . ) . cx3cr1 deficiency produces altered microglial morphology , similar to what is seen in hdc - ko mice . investigation of synaptic pruning in hdc - ko mice , and other mouse models of tourette syndrome , is warranted . microglia can also affect neurons more directly , for example through glutamate release ; such interactions have not been described in ocd or tourette syndrome . they have , however , been described in rett syndrome , an autism - spectrum disorder characterized by mutation of the methyl - cpg - binding protein-2 gene ( mecp2 ) . mecp2 - null microglia release dramatically higher levels of glutamate , and microglia - derived glutamate has neurotoxic effects on dendrites and synapses . it is plausible that microglia - derived glutamate might similarly contribute to the pathophysiology of ocd ( figure 2 ) . whether microglial abnormalities in ocd and tourette syndrome , or in any of the animal models described above , are associated with glutamate dysregulation is an important area for future study . 
mechanisms by which microglial abnormalities contribute to disease are likely to be shared across distinct etiologies and traditional diagnoses . for instance , abnormal synaptic pruning was observed both in animals inoculated with gas ( which may capture key elements of the pathophysiology of pandas ) and in mice that develop excessive grooming after inactivation of the progranulin gene . in both cases , increased synaptic pruning co - occurs with microglial activation . in cx3cr1 knockout mice , deficient synaptic pruning accompanies alterations in neuron - microglial communication . this is accompanied by social deficits , a finding that has been interpreted as an autism - like phenotype . results from these models must be interpreted cautiously , as neither progranulin nor cx3cr1 has been clearly associated with any of these conditions in humans . investigation of abnormal synaptic pruning by microglia in the more pathophysiologically grounded hdc - ko model of tourette syndrome is warranted . these studies in animal models are intriguing and allow detailed mechanistic analysis , but their relevance for the understanding of clinical disease remains to be firmly established . information about microglial abnormalities in patients with ocd , tourette syndrome , and pandas remains very limited . this is a recently opened frontier in our understanding of the pathophysiology of these disorders . it is , however , one of great promise , which may lead to the identification of novel therapeutic targets . 
there is accumulating evidence that immune dysregulation contributes to the pathophysiology of obsessive - compulsive disorder ( ocd ) , tourette syndrome , and pediatric autoimmune neuropsychiatric disorders associated with streptococcal infections ( pandas ) . the mechanistic details of this pathophysiology , however , remain unclear . here we focus on one particular component of the immune system : microglia , the brain 's resident immune cells . the role of microglia in neurodegenerative diseases has been understood in terms of classic , inflammatory activation , which may be both a consequence and a cause of neuronal damage . in ocd and tourette syndrome , which are not characterized by frank neural degeneration , the potential role of microglial dysregulation is much less clear . here we review the evidence for a neuroinflammatory etiology and microglial dysregulation in ocd , tourette syndrome , and pandas . we also explore new hypotheses as to the potential contributions of microglial abnormalities to pathophysiology , beyond neuroinflammation , including failures in neuroprotection , lack of support for neuronal survival , and abnormalities in synaptic pruning . recent advances in neuroimaging and animal model work are creating new opportunities to elucidate these issues .
1. Introduction 2. Possible Mechanisms 3. Conclusions and Future Directions
PMC5363313
the envelopes used to pseudotype viral vectors are heterologous glycoproteins whose key role is to mediate vector entry into target cells . thus , their nature , function , and density on the vector surface may deeply influence the transduction ability of the vectors . a powerful strategy to increase the expression of heterologous proteins in eukaryotic cells is codon optimization ( co ) , which is an artificial process through which dna sequences are modified by the introduction of silent mutations , generating synonymous codons . owing to the degeneracy of the genetic code , all amino acids ( aa ) except met and trp are encoded by more than one codon ( synonymous codons ) . genetic code redundancy makes dna triplets tolerant of point mutations that do not result in corresponding aa changes ( silent mutations ) . codon optimization is exploited to overcome species - specific codon usage bias and ultimately improve heterologous protein production . the frequency of codon distribution within the genome ( codon usage bias ) is variable and differs depending on species . it follows that trnas corresponding to synonymous codons are not equally abundant in different cell types and species . therefore , for a certain aa , some synonymous codons are used more often than others , influencing the timing and efficiency of protein translation [ 2 , 3 , 4 ] . the codon adaptation index ( cai ) measures synonymous codon usage bias in a given species . the cai is computed from a reference set of highly expressed genes ( i.e. , ribosomal proteins and elongation factors ) to assess the relative contribution of each codon in a specific organism , yielding a value between 0 and 1 ( where 1 corresponds to maximum translational efficiency ) that allows comparison with the nucleotide sequence of interest . thus , it is possible to increase the expression of a certain gene in a specific organism / cell type by simply replacing rare codons with more frequent ones , resulting in modification of the cai . codon optimization has been extensively used to increase the production of either recombinant proteins or viral vectors [ 6 - 17 ] . rd114-tr is a chimeric mutant deriving from the feline endogenous retrovirus rd114 envelope , in which the tr domain of the gamma retroviral vector ( γ-rv ) moloney murine leukemia virus ( mlv ) amphotropic 4070-a envelope , fused at the c - terminal end of rd114 , increases envelope incorporation into lentiviral vector ( lv ) particles . rd114-tr is first translated into a non - functional precursor ( pr ) that is then processed by the membrane - associated endoprotease furin into the surface ( su ) and transmembrane ( tm ) active subunits . rd114-tr processing occurs either in furin - rich compartments of the trans - golgi network , where the pr accumulates on its way to the plasma membrane , or in the recycling endosomes close to the plasma membrane . the cleavage and post - translational glycosylation of rd114-tr are crucial for trafficking to the plasma membrane and for incorporation into nascent virion coats . the tm subunit mediates plasma membrane anchoring of the su subunit . upon recognition and engagement of specific receptors by the functional subunits , fusion between viral and cell membranes mediates the entry of the vector into target cells . 
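as a concrete illustration of the sequence metrics used in this kind of recoding work ( cai , gc content , and wild - type versus recoded identity ) , the python sketch below computes a sharp - and - li - style cai from a user - supplied reference set of highly expressed genes , together with gc content and percent identity between two codon - aligned orfs . the reference set , the input sequences , and the small pseudo - weight assigned to codons absent from the reference are illustrative assumptions ; the actual values reported below for rd114-tr were obtained with the authors ' own tools and sequences , which are not reproduced here .

# minimal sketch , assuming in - frame coding sequences supplied as plain strings .
from collections import defaultdict
from math import exp, log

# standard genetic code , codon -> amino acid ( '*' marks stop codons ) .
GENETIC_CODE = {
    'TTT': 'F', 'TTC': 'F', 'TTA': 'L', 'TTG': 'L', 'CTT': 'L', 'CTC': 'L',
    'CTA': 'L', 'CTG': 'L', 'ATT': 'I', 'ATC': 'I', 'ATA': 'I', 'ATG': 'M',
    'GTT': 'V', 'GTC': 'V', 'GTA': 'V', 'GTG': 'V', 'TCT': 'S', 'TCC': 'S',
    'TCA': 'S', 'TCG': 'S', 'AGT': 'S', 'AGC': 'S', 'CCT': 'P', 'CCC': 'P',
    'CCA': 'P', 'CCG': 'P', 'ACT': 'T', 'ACC': 'T', 'ACA': 'T', 'ACG': 'T',
    'GCT': 'A', 'GCC': 'A', 'GCA': 'A', 'GCG': 'A', 'TAT': 'Y', 'TAC': 'Y',
    'TAA': '*', 'TAG': '*', 'TGA': '*', 'CAT': 'H', 'CAC': 'H', 'CAA': 'Q',
    'CAG': 'Q', 'AAT': 'N', 'AAC': 'N', 'AAA': 'K', 'AAG': 'K', 'GAT': 'D',
    'GAC': 'D', 'GAA': 'E', 'GAG': 'E', 'TGT': 'C', 'TGC': 'C', 'TGG': 'W',
    'CGT': 'R', 'CGC': 'R', 'CGA': 'R', 'CGG': 'R', 'AGA': 'R', 'AGG': 'R',
    'GGT': 'G', 'GGC': 'G', 'GGA': 'G', 'GGG': 'G',
}

def codons(seq):
    """split an in-frame coding sequence into codons, dropping stop/ambiguous triplets."""
    seq = seq.upper().replace('U', 'T')
    return [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)
            if GENETIC_CODE.get(seq[i:i + 3], '*') != '*']

def gc_content(seq):
    """fraction of g/c bases, returned as a percentage."""
    seq = seq.upper()
    return 100.0 * sum(base in 'GC' for base in seq) / len(seq)

def percent_identity(seq_a, seq_b):
    """nucleotide identity between two equal-length, codon-aligned orfs."""
    assert len(seq_a) == len(seq_b), "recoding keeps the orf length unchanged"
    matches = sum(a == b for a, b in zip(seq_a.upper(), seq_b.upper()))
    return 100.0 * matches / len(seq_a)

def relative_adaptiveness(reference_seqs):
    """w(codon) = count(codon) / count(most used synonymous codon) in the reference set."""
    counts = defaultdict(int)
    for s in reference_seqs:
        for c in codons(s):
            counts[c] += 1
    synonyms = defaultdict(list)
    for c, aa in GENETIC_CODE.items():
        if aa != '*':
            synonyms[aa].append(c)
    w = {}
    for aa, group in synonyms.items():
        best = max(counts[c] for c in group) or 1
        for c in group:
            w[c] = counts[c] / best if counts[c] else 0.01  # pseudo-weight avoids log(0)
    return w

def cai(seq, w):
    """cai = geometric mean of the weights of the codons used in the gene (0 to 1)."""
    cs = codons(seq)
    return exp(sum(log(w[c]) for c in cs) / len(cs))

with a suitable human reference set ( e.g. , ribosomal protein and elongation factor cdnas ) , cai(rd114_tr_wt, w) and cai(rd114_tr_co, w) would be expected to behave like the 0.64 and 0.98 values reported below , although the exact numbers depend on the reference set chosen ; gc_content and percent_identity correspond to the reported 48% / 61% and 73% figures , respectively .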
rd114-tr - pseudotyped retroviral vectors are suitable for both ex vivo and in vivo gene therapy applications because they can be concentrated by centrifugation and are resistant to human serum complement inactivation.20 , 21 , 22 , 23 to improve and simplify the expression of the rd114-tr envelope during development of the rd - molpack packaging technology for stable and constitutive manufacturing of lvs,21 , 23 we codon - optimized the entire rd114-tr open reading frame ( orf ) . this idea stemmed from our previous observation that rd114-tr expression is achieved only when the -globin intron ( bgi ) is inserted between the promoter and the rd114-tr cdna of the expression cassette of many different expression plasmids tested . to explain this constraint , we hypothesized that bgi may attenuate the negative effect of interfering sequences existing in the rd114-tr cdna . to eliminate these sequences and to simplify the vector design , we decided to codon - optimize the entire rd114-tr orf . in fact , the elimination of the interfering sequences would have avoided using the bgi , therefore reducing the size of the vector . unexpectedly , we found that , despite the high level of transcription / translation and cytosol export , rd114-trco is functionally dead . our data strengthen the conclusion , also supported by other studies , that codon optimization may not always lead to functional improvement of the gene of interest . we initially analyzed the expression of the rd114-trwt envelope in rd3-molpack - gfp producer cells and in their derived lvs to confirm previous studies describing proper processing and trafficking to the plasma membrane of the wild - type ( wt ) envelope . rd3-molpack - gfp cells contain 12 copies of the integrated self inactivating ( sin)-rd114-trwt - in - rev - responsive element ( rre ) transfer vector ( tv ) ( figure 1a , scheme 2 ) , and the originated rd114-trwt pseudo - typed lvs are proficient in cell transduction , as reported previously . we used two specific antibodies ( abs ) , each recognizing either the pr and su ( anti - su ) subunits or the pr and tm ( anti - tm ) subunits , respectively ( figure 1b ) . to visualize the expression of rd114-trwt at the rd3-molpack - gfp plasma membrane , we carried out pull - down ( pd ) of biotinylated and de - glycosylated total cell extracts . because , in sds - page , glycosylated pr and su molecules co - migrate , we first pulled down membrane proteins , which were first biotinylated in vivo and then deglycosylated by peptide n - glycosidase f ( pngasef ) treatment in vitro . pngasef cleaves the link between asparagine and n - acetylglucosamine residues ( complex oligosaccharides ) that are added in the endoplasmic reticulum ( er ) and the golgi stack . we here confirmed the results previously reported by sandrin et al . , showing that both tm and su subunits are correctly localized at the plasma membrane , whereas the pr does not reach and/or accumulate on it ( figure 2a , lane 8) . the very low level of pr detected in the pngasef - treated sample likely derives from contamination of the endoplasmic reticulum or other membranes ( figure 2a , anti - su , top right panel , lane 8) . to further characterize rd114-tr glycosylation and trafficking to the plasma membrane , we treated in vitro producer cellular and derived vector extracts not only with pngasef but also with endoglycosidase h ( endoh ) enzyme . 
the latter is active on n - linked high - mannose oligosaccharides ( simplex oligosaccharides ) , added in the er compartment , but not on high - glucose residues attached later during glycosylation in the golgi apparatus . it follows that glycoproteins carrying complex oligosaccharides become resistant to the attack of endoh ( endoh - resistant proteins ) . of note , we observed that , in both cells and derived lvs , pr and tm subunits are endoh - sensitive ( figure 2b , lanes 3 and 6 , anti - tm ) . on the contrary , the su subunit is endoh - resistant because it carries complex oligosaccharides ( figure 2b , lanes 3 and 6 , anti - su ) . the tm contains one putative n - linked glycosylation site ( nxs and nxt , where x is any aa ) , whereas su contains 11 sites ( figure s1 ) . it is possible that this unique n - linked site in tm is glycosylated with simplex and not complex oligosaccharides and that the tm subunit is transported to the plasma membrane linked to the su . furthermore , the average titer of rd3-molpack - gfp lvs tested in this study is 1.6 10 4.7 10 sem transducing units ( tu)/ml ( n = 5 ) , in line with our previous collective data.21 , 23 overall , these findings demonstrate that expression of rd114-trwt in rd3-molpack - gfp producer cells and stemmed lvs is correctly achieved . in an attempt to enhance the transduction efficiency of rd3-molpack - derived lvs by increasing the expression and stability of rd114-tr glycoprotein , we codon - optimized its complete cdna . after recoding , the cai of the rd114-tr orf shifted from 0.64 to 0.98 , and the average gc content increased from 48% ( wt ) to 61% ( co ) , resulting in 73% identity between the wt and co sequences ( figures s2 and s3 ) . to test the function of rd114-trco , the new orf , cloned into the pires - puro3 expression vector , was transiently co - transfected in pk-7 cells together with the sin - gfp tv to produce rd114-trco - expressing lvs . we analyzed the expression of rd114-tr proteins by western blot , treating cell and virion extracts with or without pngasef and endoh ( figure 3 ) . surprisingly , the pattern of rd114-trco subunits greatly differed from that of the wt counterparts . in fact , both cell and lv protein extracts showed very high levels of prco and very low levels or even absence of processed suco and tmco subunits ( figures 3a and 3b ) . in contrast , the expression profile of rd114-trwt in cell and vector extracts was identical to that of rd3-molpack - gfp producer cells and the lvs shown in figure 2 . in agreement with these data , the viral titer of rd114-trwt pseudotyped lvs calculated on cem a3.01 cells was 3.9 10 7.1 10 sem tu / ml ( n = 3 ) , whereas that of rd114-trco - pseudotyped lvs was consistently undetectable . to better understand the difference between prwt and prco processing , we tested whether codon optimization might have somehow compromised furin - mediated cleavage of rd114-trco . to this purpose , we treated cell extracts derived from pk-7 cells transfected with either the rd114-trwt or rd114-trco plasmid with recombinant furin overnight at 16c . untreated and treated extracts were then analyzed by western blot using the anti - tm ab ( figure 4a ) . we observed that , after furin treatment in vitro , the level of tmco subunit clearly increases ( figure 4a , lane 4 ) , even though it is difficult to appreciate the corresponding decrement of prco because of its high level of expression . 
on the contrary , the amount of prwt is clearly decreased , although it is difficult to appreciate the corresponding increase of tmwt because the wild - type protein is already abundantly cleaved before cell protein extraction . overall , these results support the idea that codon optimization does not compromise furin - mediated cleavage of the envelope , at least in vitro . based on this notion , we then tried to understand why the prco is not correctly processed in vivo . one possible explanation was that a large amount of prco could trigger the phenomenon known as excess substrate inhibition . to exclude this possibility , we transfected hek293 t cells with a scalar amount of rd114-trco plasmid and tested the corresponding cell extracts in a western blot to find the lowest possible dose of prco substrate not inhibiting endogenous furin action ( figure 4b ) . we observed that , even at the lowest amount of plasmid generating detectable prco , the tmco subunit was not visible , indicating that in vivo prco is not processed ( figure 4b , lane 3 ) . we next evaluated whether partial recoding of the orf could restore the function of the rd114-trco envelope . to this aim , we generated two cdnas recoded only in the 5 or 3 half of the cdna sequence . we transiently transfected either the rd114-tr5co or rd114-tr3co chimera , cloned into the pires - puro3 plasmid , into pk-7 cells together with the sin - egfp tv . we then tested cellular and lv extracts in a western blot and lv titer in cem a3.01 cells . immunoblot analysis demonstrated that , for both chimeric rd114-tr glycoproteins , prco processing was impaired ( figure s4 ) . furthermore , although we transfected equal amounts of rd114-tr , rd114-tr5co , and rd114-tr3co plasmid dna , the expression of rd114-tr3co was lower than that of rd114-tr5co and rd114-trwt ( figures s4a and s4c , lanes 3 and 4 ) . the tm3co and tm5co subunits were not detectable in the respective lvs , whereas , after pngasef treatment , su3co and su5co were barely visible and visible , respectively . we explain the difference between anti - tm and anti - su staining with an intrinsic difference in the specific affinity of the two abs . in agreement , to see whether rd114-trco differs from rd114-trwt in its subcellular localization , we carried out confocal microscope imaging in cos-7 cells transfected with pires - rd114-tr plasmids . forty - eight hours after transfection , rd114-tr expression was visualized together with that of calnexin and vamp8/endobrevin , which are er and early and late endosomal markers , respectively . as shown previously by sandrin et al . , rd114-trwt is expressed in the cytosol and perinuclear region and is co - localized mostly with calnexin and very poorly with endobrevin / vamp8 . a similar staining pattern and subcellular localization was observed for rd114-trco in either the cos-7 ( figure 5 ) or pk-7 cell experimental setting ( figure s5 ) , indicating that er and early and late endosome trafficking of rd114-tr is not affected by codon optimization . because many groups have demonstrated that silent mutations affect correct pre - mrna splicing by introducing cryptic splice sites or altering splicing control elements ( i.e. , exonic splicing enhancers and silencers),3 , 25 , 26 we and two service provider companies analyzed rd114-trco mrna both in silico and in vitro for the presence of potential cryptic splicing sites . 
the first in silico service - provided analysis identified one consensus ( cryptic ) splice donor site that was nullified by codon optimization , whereas the second service - provided analysis recognized no cryptic sites ( figures s2 , s3 , s6 , and s7 ) . we also examined the rd114-trwt and rd114-trco orfs in silico using the netgene2 server , which calculates the probability of cryptic splicing sites in pre - mrna sequences . we did not pinpoint any differences between wild - type and codon - optimized orfs . to further confirm these results , we assessed rd114-trwt and rd114-trco mrna transcripts derived from pk-7 cells transiently transfected with the sin - rd114-trwt / co - in - rre constructs ( figure 1a , schemes 2 and 3 ) by northern blot ( figure 6a ) . two sequence - specific probes targeting rd114-trwt and rd114-trco , respectively , recognized qualitatively comparable rd114-tr mrna transcript patterns ( figure 6a ) . similar results were obtained by using a probe directed against the packaging signal ( ψ ) , which is a sequence common to both constructs . the overall steady - state level of rd114-trco rna detected by the ψ probe was only slightly reduced compared with the wild - type counterpart , but no extra spliced bands were observed . these results indicate that the two lentiviral vector plasmids were equally transfected and correctly expressed from the 5′ long terminal repeat ( ltr)-cytomegalovirus ( cmv ) vector promoter . these findings indicate that no cryptic splicing sites are present either in the orf or in the vector backbone ( figure 6a ) . to assess whether mrna metabolism differs between rd114-trwt and rd114-trco , we studied mrna nuclear - cytoplasm export in the pk-7 cell setting using the pires - puro3-based expression vectors , which generate only one mrna transcript . northern blot analysis of total , nuclear ( nucl ) , and cytoplasmic ( cyt ) mrnas and quantification by typhoon phosphorimager of the band intensity normalized by cellular equivalents loaded revealed that the unique codon - optimized mrna is exported 1.4-fold more ( wt cyt / nucl band intensity = 1.1 and co cyt / nucl band intensity = 1.6 ; 1.6/1.1 = 1.4 ) than wild - type mrna ( figure 6b ) . qrt - pcr analysis , using the expression of nuclear u6 and cytosolic / total gapdh genes as internal normalizers , revealed that rd114-trco mrna is exported 3.6-fold more than rd114-trwt mrna ( figure 6c ) . overall , these data establish that recoding affects nuclear export but not transcription and splicing processes . we then investigated whether codon optimization could influence mrna secondary structure and , in turn , protein translation , as reported recently by several groups.3 , 4 , 27 thus , we examined rd114-trwt and rd114-trco mrna sequences by mfold software ( figures 7a and 7b ) . this computational analysis predicts the most thermodynamically stable rna configurations ( up to 50 ) based on the free energy ( ΔG ) of the molecules , where a lower ( more negative ) ΔG indicates a higher stability . we retrieved 33 different configurations for rd114-trwt and 37 for rd114-trco ( figure 7e ) . as expected , wild - type structures are very different from codon - optimized ones ; the average ΔG for rd114-trwt mrna is −462.25 ( where ΔG = −468.70 corresponds to the most stable configuration ) , whereas the average ΔG for rd114-trco mrna is −679.39 ( where ΔG = −687.60 corresponds to the most stable configuration ) . this finding indicates that recoded mrna molecules are more stable than their wild - type counterparts ( p < 0.0001 ) ( figure 7e ) .
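the stability comparison above ( p < 0.0001 ) was obtained with a wilcoxon - mann - whitney test ( see the statistical analysis section ) ; a minimal sketch of the same kind of comparison , using scipy as a stand - in for the jmp software named by the authors and placeholder ΔG values instead of the real 33 wt and 37 co mfold energies , is :

```python
# hypothetical ΔG values; the real ensembles are shown in figure 7e of the article
from scipy.stats import mannwhitneyu

dg_wt = [-455.1, -458.3, -460.2, -462.0, -463.5, -465.9, -468.7]   # placeholder
dg_co = [-672.4, -675.0, -678.1, -679.4, -681.2, -684.8, -687.6]   # placeholder

stat, p = mannwhitneyu(dg_wt, dg_co, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4g}")  # a small p supports distinct stability distributions
```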
because 5′ end and 3′ end substructures are fundamentally important for translational dynamics and protein folding , we scanned the 5′ and 3′ ends of all wild - type and recoded mrnas to identify any possible conserved structural domains . among the 33 conformations of rd114-trwt mrna , we identified a conserved domain at both the 5′ end ( nucleotides [ nt ] 13–20 ) and 3′ end ( nt 1,308–1,677 ) ( figure 7c ) . these 5′ end and 3′ end domains are also conserved in the corresponding region of the rd114-trco mrna structure . among the 37 structures calculated by mfold for rd114-trco mrna , we identified nt 13–30 at the 5′ end and nt 1,390–1,677 at the 3′ end ( figure 7d ) . to evaluate the similarity between the identified domains , we studied rd114-trwt and rd114-trco mrna 5′ end and 3′ end substructures with simtree software . this software compares each node complexity ( branch - loop ) of two structures and eventually produces a similarity score ( between 0 and 1 , where 1 indicates maximum similarity ) . the score is normalized by the number of nucleotides of the substructures from the two rna structures showing the lowest ΔG in mfold ( table s1 ) . at the 5′ end of the rd114-trwt mrna substructure , 26 complexities were identified , which corresponded to only ten complexities in rd114-trco ( normalized symmetric similarity [ nss ] = 0.5213 ) ( figure 7a ) . at the 3′ end mrna substructures , 42 complexities were found in rd114-trwt and only 18 in rd114-trco ( nss = 0.6178 ) ( figure 7b ) . these results point out that codon optimization of the rd114-tr gene introduced significant alterations at both the 5′ end and 3′ end of the rd114-trco mrna secondary structure . codon optimization and de - optimization have been used extensively for many different biotechnological applications , primarily in heterologous systems to increase recombinant protein yield and as an adaptive response to environmental conditions and natural host selection in bacteria , yeasts , and viruses.29 , 30 , 31 , 32 , 33 , 34 in some eukaryotes ( e.g. , c. elegans and drosophila melanogaster ) , it has been exploited to control intracellular trna pools and thereby modulate translational efficiency.35 , 36 , 37 , 38 in autologous hosts , such as the mammalian chinese hamster ovary ( cho ) or hek293 cell lines , which are the most widespread systems for manufacturing pharmaceuticals , codon optimization is a valuable strategy to prevent transcriptional silencing , mrna destabilization , or inefficient translation , in addition to being a powerful tool to increase immunogenicity in dna vaccinology applications.9 , 10 , 16 , 17 , 39 finally , hiv - derived lv production has also benefited from codon optimization by enhancing production of structural and functional viral proteins ( i.e. , gag and pol);9 , 11 , 12 neutralizing cis - repressive sequences present in gag / pol genes , thereby making the expression of these genes rev - independent;13 , 14 and eliminating homology between the packaging gag - pol genes and the cis - regulatory packaging ( ψ ) sequence contained in the packaging construct and tv , respectively , therefore reducing the risk of generating a replication - competent lentivirus ( rcl ) . based on these premises , we analyzed dna codon optimization of the rd114-tr gene with the aim of improving envelope translation in rd3-molpack producer cells .
in fact , codon optimization would have neutralized the interfering sequences contained in the rd114-tr orf , thereby sparing the use of the bgi in vector design and , at the same time , increasing the production and density of rd114-tr on the cellular plasma membrane and , consequently , on virion coats . we asked for two recoding analyses by two independent companies , which provided similar results ( figure s8 ) . therefore , we believe that the quality of the analysis could not have affected the final output . the consistently negative results obtained with the chimeric rd114-tr3co and 5co support this idea . in this study , we show that rd114-trwt expressed either stably ( rd3-molpack-24 cells ) or transiently ( pk-7 cells ) naturally traffics from the er through the golgi network to reach the plasma membrane . prwt is processed in suwt and tmwt subunits , which are eventually embedded into nascent lvs . in contrast , rd114-trco reaches the plasma membrane mainly as unprocessed prco , and maturation into suco and tmco functional subunits is drastically reduced or even absent . as a consequence , a high level of prco is erroneously incorporated into budded viral particles , which become defective vectors . because n - linked glycosylation is crucial for maturation of different envelope ( env ) proteins , such as the hepatitis c virus glycoprotein e2 and human t cell leukemia virus type i ( htlv - i ) envelopes,39 , 40 , 41 we studied the glycosylation status of rd114-trwt and rd114-trco . here we confirm the conclusions reached by sandrin et al . , showing that prwt , suwt , and tmwt are deglycosylated by pngasef . furthermore , we also expanded glycosylation studies showing that , in contrast to suwt , tmwt is endoh - sensitive . this result suggests that tmwt either reaches the plasma membrane anchored to su or loses complex oligosaccharides when on the membrane , or , alternatively , does not need complex oligosaccharides for its function . the analysis of prco glycosylation demonstrates sensitivity to both pngasef and endoh enzymes , suggesting defective glycosylation of the recoded protein . further studies will clarify whether a correlation between the observed defective glycosylation and maturation of rd114-trco does exist . although prco is cleaved in vitro by recombinant furin both under reducing and non - reducing conditions ( data not shown ) , it is not cleaved in vivo . to explain this result , we reasoned that cleavage in vivo could be prevented by an excess of substrate : prco is , in fact , much more abundant compared with prwt . alternatively , the deficit of prco processing could be secondary to a deficit of retrograde transport of prco from the cell membrane to the endosomes , where the active form of furin is accumulated . we ruled out the first hypothesis because furin is not active in vivo , even with a very low amount of prco substrate , whereas the second hypothesis requires further analysis to be formally accepted . synonymous mutations have been considered ineffective for a long time , and , for this reason , they are also named silent mutations . however , their nature has been recently re - evaluated because evidence has shown that these mutations have a great effect on pre - mrna splicing and mrna secondary structure formation , therefore affecting protein translational efficiency and folding.4 , 43 even a single synonymous codon substitution can have a significant effect on protein folding and function . 
protein dysfunction can be caused either by disruption ( or introduction ) of splicing enhancers , by altering mrna stability at the global and local level , or by altering the kinetics of protein production , the ribosomal pausing sites , and co - translational folding.27 , 44 our results exclude the possibility that codon optimization introduced aberrant pre - mrna splicing sites . rather , they establish that rd114-trco mrna is exported more efficiently into the cytosol than rd114-trwt mrna , and they support the theory that some alterations occurred at the level of mrna secondary structure , thereby influencing protein translation . we focused our study on the mrna 5′ end and 3′ end because previous findings from others demonstrated that these two domains crucially influence translation dynamics , such as translation initiation and rna global and local stability.45 , 46 , 47 the in silico mfold and simtree software analyses highlighted that the secondary structures of rd114-trwt and rd114-trco mrna are significantly different . in particular , some conserved domains at the 5′ end and the 3′ end of rd114-trwt mrna are lost in the rd114-trco isoform . interestingly , the generation of the chimeric rd114-tr5co and rd114-tr3co led to even worse functional impairment . these findings suggest that rd114-trco inactivity is not due to single mutations clustered at the 5′ end or 3′ end but , more likely , due to conformational modifications distributed along the mrna molecule that affect global mrna stability and , thereby , protein folding and processing . sandrin et al . demonstrated that modifications in the cytoplasmic tail of rd114 and rd114-tr alter pr subunit transport from the cell membrane to the trans - golgi network . in particular , transport of the envelopes associated with core protein ( i.e. , gag ) to the endosomal compartment , where active furin accumulates , is important because it affects cleavage efficiency . we observed , by confocal microscope imaging , that both rd114-trwt and rd114-trco are localized mostly in the er compartment when assessed either in the presence ( pk-7 setting ) or in the absence ( cos-7 ) of gag protein . to this extent , secondary structure modifications identified in rd114-trco mrna might result in alteration of protein folding , which , in turn , is responsible for protein dysfunction . altogether , this study suggests that rd114-tr is not suitable for codon optimization and that this strategy cannot be applied to improve its performance . the phcmv - rd114-tr plasmid , kindly provided by f .- l . cosset ( inserm ) , encodes the chimeric rd114-tr envelope that derives from fusion of the extracellular and transmembrane domains of the feline endogenous retrovirus rd114 envelope and the cytoplasmic tail ( tr ) of the amphotropic ( a)-mlv 4070 envelope ( figure 1a , scheme 1 ) . the pires - rd114-trwt - internal ribosome entry site ( ires)-puro - woodchuck hepatitis post - transcriptional regulatory element ( wpre ) plasmid was obtained by excising the cmv - rd114-trwt cassette from the phcmv - rd114-tr plasmid and cloning it into the pires - puro 3 plasmid ( clontech laboratories , a takara bio company ) ( figure 1a , scheme 7 ) . the generation of the sin - rd114-trwt and sin - rd114-trco vectors ( figure 1a , schemes 2 and 3 ) as well as the constructs encoding the hiv gag , pol , and rev genes ( figure 1a , schemes 4 and 5 , respectively ) have been described previously . the sin - gfp tv encoding the egfp gene was kindly provided by l. naldini ( tiget , osr ) ( figure 1a , scheme 6 ) .
the rd114-tr orf was codon - optimized , synthesized , and cloned in either the pmk - rq or pms - rq plasmid by geneart . we further cloned the rd114-trco orf into either the pires - puro3 or sin - lv plasmid . four different molecules were generated : pires - cmv - rd114-tr - flco , in which full - length ( fl ) cdna was codon - optimized and cloned into the ecorv and nsii sites of the pires - puro3 plasmid by excising the orf from the pmk - rq - rd114-tr - flco plasmid ; pires - cmv - rd114-tr-5co , obtained by recoding only the 5-half sequence ( 789 bp ) of the rd114-trwt orf ( rd114-tr5co was modified by adding eco47iii and nsii restriction sites at the 5 end and 3 end of the gene sequence , respectively , and the orf was then cloned into the eco47iii and nsii sites of pires - puro3 plasmid ) ; pires - cmv - rd114-tr-3co , obtained by recoding only the 3-half sequence ( 789 bp ) and cloning into the eco47iii and sphi restriction sites of the pires - puro3 plasmid by excising the orf from pmk - rq - rd114-tr-3co ( figure 1a , scheme 7 ) ; and sin - rd114-trco fl ( figure 1a , scheme 3 ) , generated by inserting rd114-trco into a sin - lv through a three - step cloning strategy . first , the rd114-trwt - ires - puro - wpre fragment was excised from the sin - rd114-trwt - in - rre vector ( figure 1a , scheme 2 ) and cloned into the ecori site of a pgem - t plasmid , generating the pgem - rd114-trwt - ires - puro - wpre plasmid . then , rd114-trwt - ires - puro was excised from pgem - rd114-trwt - ires - puro - wpre ( obtaining the pgem - wpre intermediate ) using bamhi , and the rd114-trco - ires - puro orf was excised from the pires - cmv - rd114-tr - fl - co using ecorv / xbai enzymes . rd114-trco - ires - puro was then cloned into the pgem - wpre intermediate through blunt ligation , obtaining the pgem - rd114-trco - ires - puro - wpre plasmid . finally , the rd114-trco - ires - wpre orf was cut out from pgem - rd114-trco - ires - wpre and cloned into the ecori site of sin - rd114-trwt - in - rre , generating the sin - rd114-trco - in - rre vector . t cells and their derivative pk-7 clone , which constitutively expresses the hiv gag - pol - rev genes , were propagated in iscove s modified dulbecco s medium ( imdm ) ( biowhittaker , lonza group ) supplemented with 10% australian fetal calf serum ( fcs ) ( biowhittaker ) and a combination of 1% penicillin - streptomycin and glutamine ( psg ) ( lonza ) . the cem a3.01 t cell line was grown in rpmi 1640 medium ( biowhittaker ) supplemented with 10% fcs and 1% psg . cos-7 cells were grown in dmem ( biowhittaker ) supplemented with 10% fcs and 1% psg . briefly , 35 10 cells were plated on 100-mm tissue culture dishes ( becton dickinson ) . after 24 hr of culture , the egfp tv , rev , packaging , and envelope constructs were co - transfected at a 4:1:0.88:0.48 ratio using either profection mammalian transfection system calcium phosphate ( promega ) or fugene 6 transfection reagent ( roche diagnostics ) according to the manufacturer s instructions . transfection efficiency was calculated 48 hr later by analyzing the percentage of egfp - positive cells by fluorescence - activated cell sorting ( facs ) analysis . the cem a3.01 cell line was transduced by spinoculation at 1,024 g for 2 hr at 37c in the presence of polybrene ( 8 g / ml ) ( sigma - aldrich ) . 
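for the cloning scheme above , a quick way to sanity - check where the named restriction sites fall in a recoded fragment is a simple motif scan ; the sketch below uses the standard recognition sequences of the enzymes mentioned and a hypothetical sequence ( only the given strand is scanned ; a real check would also scan the reverse complement and would use the actual rd114-tr sequence ) :

```python
# recognition sequences of the enzymes named in the cloning scheme
SITES = {
    "EcoRV":    "GATATC",
    "NsiI":     "ATGCAT",
    "Eco47III": "AGCGCT",
    "SphI":     "GCATGC",
    "EcoRI":    "GAATTC",
    "BamHI":    "GGATCC",
}

def find_sites(seq: str, site: str):
    """Return 0-based positions of a recognition sequence on the given strand."""
    seq, site = seq.upper(), site.upper()
    return [i for i in range(len(seq) - len(site) + 1) if seq[i:i + len(site)] == site]

orf = "ATGGATATCAAAGAATTCACCATGCATGGATCCTGA"   # hypothetical recoded fragment
for enzyme, site in SITES.items():
    hits = find_sites(orf, site)
    print(f"{enzyme:9s} {site}: {hits if hits else 'absent'}")
```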
physical titer was evaluated by measuring the level of p24gag released in the culture supernatant with the alliance hiv-1 p24 antigen elisa kit ( perkinelmer ) according to the manufacturer's instructions . pk-7 cells were transfected with the pires - rd114-tr or sin - rd114-tr tv plasmid encoding either rd114-trwt or rd114-trco . forty - eight hours after transfection , total , nuclear , and cytoplasmic rnas were extracted by trizol reagent ( life technologies ) following the manufacturer's instructions and analyzed by northern blot assay . five micrograms of rna / sample was run on a 0.8% agarose - formaldehyde gel , transferred onto a hybond - n membrane by capillary transfer , and finally probed with 1 10 dpm / ml of a ³²p - labeled 550-bp rd114-trwt or rd114-trco probe in perfecthyb plus hybridization buffer ( sigma - aldrich ) . membranes were extensively washed and then exposed to x - ray films at −80 °c or to a typhoon phosphorimager 9000 ( ge healthcare ) for direct quantification of the radioactive signal . after stripping , membranes were re - hybridized with an internal control probe encompassing the packaging sequence ( ψ ) to detect full - length mrnas . total , nuclear , and cytoplasmic rnas , obtained as described above , were retrotranscribed with the superscript first - strand synthesis system kit for rt - pcr ( invitrogen ) . the cdna ( 1.25 ng ) was quantified by qpcr sybr green technology with the following specific primers : rd114-trwt ( for 5′ aac ggg tca gtc ttc ctc tg ; rev 5′ atc aat ggc agg aat ggg ga ) , rd114-trco ( for 5′ ccg tgc agt tca ttc ctc tg ; rev 5′ ctc agc ttg gtg tac tgg gt ) , u6 ( for 5′ ctc gct tcg gca gca ca ; rev 5′ aac gct tca cga att tgc gt ) , and gapdh ( for 5′ tgc acc aca act gct tag c ; rev 5′ ggc atg gac tgt ggt cat gag ) . normalization was calculated using gapdh for total and cytosolic mrna and u6 for nuclear mrna . cellular extracts and viral proteins derived from isolated cell - free virus - like particles ( vlps ) or lvs were prepared as described previously.21 , 49 briefly , lv supernatants were concentrated by centrifugation at 15,000 × g for 90 min at 4 °c . then , the liquid phase was gently removed , and pelleted virions were directly lysed by adding 5 μl of pbs/0.5% np-40 ( calbiochem , merck - millipore , # 492016 ) . proteins were size - fractionated on 8% , 12% , or 4%–15% gradient sds - page ( mini - protean tgx gels , # 456 - 1084 , bio - rad ) . then , proteins were electroblotted on either hybond enhanced chemiluminescence ( ecl ) nitrocellulose membranes ( ge healthcare ) or transblot turbo transfer pack membranes ( bio - rad , # 170 - 4159 ) . membranes were blocked in 5% low - fat dry milk in tris - buffered saline ( tbs ) with 1% tween 20 ( tbs - t ) and then incubated with the appropriate primary antibody diluted in 5% bsa in tbs - t . the anti - tm rd114-tr rabbit serum was kindly provided by f .- l . cosset ( inserm ) . the anti - su rd114-tr mab , generated by areta international , was diluted 1:50 . the anti - extracellular signal - regulated kinase-1 ( erk ) rabbit ab was diluted 1:1,000 ( cell signaling technology , # 16 ) . the anti - calnexin rabbit ab was diluted 1:2,000 ( santa cruz biotechnology , g1910 ) . the extravidin horseradish peroxidase ( hrp ) ab was diluted 1:2,000 ( sigma - aldrich , # e2886 ) . the anti - hiv human serum , obtained from an aids patient , was kindly donated by g. poli ( osr ) and diluted 1:1,000 . the secondary hrp - linked abs anti - human ( # na933v ) and anti - rabbit ( na934v ) ( ge healthcare ) were diluted 1:5,000 .
the anti - mouse ab ( # a2066 , sigma - aldrich ) was diluted 1:10,000 . ecl western blotting detection reagent ( ge healthcare , rpmn2106 ) was used for the chemiluminescence reaction . pk-7 and cos-7 cells were transfected with pires - rd114-tr plasmids encoding either rd114-trwt or rd114-trco and seeded on poly - l - lysine - coated glass slides ( thermo fisher scientific ) . forty - eight hours after transfection , cells were fixed with pbs and 3% paraformaldehyde ( pfa)/0.1 mm cacl2/0.1 mm mgcl2 , permeabilized with pbs and 0.1% triton x-100 , and then stained with the following abs : rabbit anti - vamp8 ( endobrevin ; synaptic systems , catalog no . 1047 302 ) at 1:200 dilution and rabbit anti - calnexin ( santa cruz , h-70 ) . the secondary abs were alexa fluor goat anti - rabbit a488 ( invitrogen , catalog no . a11034 ) and alexa fluor goat anti - mouse a568 ( invitrogen ) . slides were mounted with fluorescence mounting medium ( dako ) , and images were captured with a laser - scanning confocal microscope ( leica tcs sp5 ) with an hcx pl apo λ blue 63× ( na 1.4 ) oil - immersion objective . images were acquired using the leica application suite ( las ) advanced fluorescence ( af ) software ( leica microsystems ) and processed with the public domain imagej software ( http://rsb.info.nih.gov/ij/ ) ( image processing and analysis in java ) . protein extracts from either cells or virions were treated with pngasef and endoh enzymes according to the manufacturer's instructions ( new england biolabs , # p0704s and # p0702s , respectively ) . briefly , proteins were first denatured for 10 min at 99 °c and then digested for 1 hr at 37 °c with 250–500 u of pngasef or endoh enzyme . 4× loading buffer containing β-mercaptoethanol was added to the samples , which were then boiled for 5 min at 99 °c and finally loaded onto 4%–15% sds - page precast gels ( mini - protean tgx gels , # 456 - 1084 , bio - rad ) for western blot analysis . in vitro furin digestion was carried out by treating 35 μg of cellular extracts with 4 u of recombinant furin ( neb , # p8077s ) for 16 hr at 16 °c following the manufacturer's instructions . rd3-molpack - sin - gfp producer cells were plated in 60-mm culture dishes at a density of 1 10 cells / cm² . forty - eight hours after cell seeding , rd3-molpack - sin - gfp cells reached about 90% confluency . cellular monolayers were gently washed in pbs supplemented with 1 mm mgcl2 and 0.1 mm cacl2 to keep epithelial junctions tight and impermeable to molecules . cells were then incubated on ice for 30 min with 0.5 mg / ml ez - link sulfo - nhs - lc - biotin ( thermo scientific ) in pbs/1 mm mgcl2/0.1 mm cacl2 with gentle shaking . after biotinylation , cell monolayers were washed for 5 min with pbs/100 mm glycine/1 mm mgcl2/0.1 mm cacl2 to quench the excess biotin . cells were finally lysed as described previously,21 , 49 and protein extracts were quantified by protein assay ( bio - rad , # 500 - 0006 ) . one milligram of protein was incubated with 50–60 μl of biotin binder magnetic beads ( dynabeads myone streptavidin t1 , # 65602 , invitrogen ) for 1 hr at room temperature with gentle rocking . beads were washed four to five times with 1 ml of pbs/0.1% bsa , and then the protein / bead complexes were processed with pngasef . after addition of 4× loading dye and boiling for 5 min at 99 °c , proteins were separated by sds - page on 4%–15% precast gels ( mini - protean tgx gels , # 456 - 1084 , bio - rad ) and analyzed by western blot assay .
matched p24gag equivalents of vector particles were incubated with 5 l of anti - su ab ( 0.9 mg / ml ) for 3 hr at 4c under rocking conditions . the pd was performed by washing three times 100 l of dynabeads ( sheep anti - mouse immunoglobulin g [ igg ] dynabeads , invitrogen , # 422.01 ) with pbs/0.5% bsa/2 mm edta and then rocking them for 30 min at room temperature in the presence of lv particles and the anti - su ab . after virion pd , the dynabeads were washed several times , 4 loading dye was added , and proteins were separated by sds - page on 4%15% precast gels for western blot analysis . prediction of rna splice sites was generated by the software made available by the netgene2 server ( http://www.cbs.dtu.dk/services/netgene2/ ) and prediction of mrna structure by the software mfold ( http://mfold.rit.albany.edu/?q=mfold/rna-folding-form ) using the following parameters : linear rna sequence ; 37c folding temperature ; 1 m nacl ionic condition : number of calculated folding ; differences between the calculated foldings = default parameters ; maximum extension of the calculated loops = 30 ; maximum asymmetry between the calculated loops = 30 ; and no limit in base pairing distance . statistical analysis was performed using jmp statistical software and by running the wilcoxon - mann - whitney ranked - sum non - parametric test . contributed to the conception , acquisition , analysis , and interpretation of data and drafting the article . contributed to the final approval of the version to be published . c. bovolenta contributed to the conception , design , analysis , and interpretation of the data , drafting and approving the final version to be published
lentiviral vectors ( lvs ) are a highly valuable tool for gene transfer currently exploited in basic , applied , and clinical studies . their optimization is therefore very important for the field of vectorology and gene therapy . a key molecule for lv function is the envelope because it guides cell entry . the most commonly used in transiently produced lvs is the vesicular stomatitis virus glycoprotein ( vsv - g ) envelope , whose continuous expression is , however , toxic for stable lv producer cells . in contrast , the feline endogenous retroviral rd114-tr envelope is suitable for stable lv manufacturing , being well tolerated by producer cells under constitutive expression . we have previously reported successful , transient and stable production of lvs pseudotyped with rd114-tr for good transduction of t lymphocytes and cd34 + cells . to further improve rd114-tr - pseudotyped lv cell entry by increasing envelope expression , we codon - optimized the rd114-tr open reading frame ( orf ) . here we show that , despite the rd114-trco precursor being produced at a higher level than the wild - type counterpart , it is unexpectedly not duly glycosylated , exported to the cytosol , and processed . correct cleavage of the precursor in the functional surface and transmembrane subunits is prevented in vivo , and , consequently , the unprocessed precursor is incorporated into lvs , making them inactive .
Introduction Results Discussion Materials and Methods Author Contributions Conflicts of Interest
PMC4677664
denitrification is an important process in biology that involves the sequential reduction of nitrate ( no3⁻ ) to nitrite ( no2⁻ ) , nitric oxide ( no ) , nitrous oxide ( n2o ) , and finally dinitrogen ( n2 ) , carried out by several different metalloenzymes . the reduction of no to n2o ( 2no + 2h⁺ + 2e⁻ → n2o + h2o ) is a key step of this process and is catalyzed by nitric oxide reductases ( nors ) . no is an important molecule in biology because it impacts events ranging from blood pressure regulation , neurotransmission , and immune response in mammalian cells to transcriptional regulation and biofilm formation in bacteria . the presence of nors in pathogenic bacteria such as pseudomonas aeruginosa helps to detoxify no and allows the bacteria to survive . furthermore , an increase in n2o production caused by the use of artificial fertilizers generated from artificial nitrogen fixation has disrupted the global nitrogen cycle , as well as highlighted n2o's potent ability to deplete ozone . despite the biochemical , biomedical , and environmental significance of nors , the structural features responsible for their activity and a clear mechanistic understanding of their reaction , particularly for the membrane - bound nors from bacteria , are not well established . bacterial nor is a complex enzyme consisting of a c - type heme , a heme b , and a heme b3/nonheme iron ( feb ) center . electrons are delivered from heme c to heme b and then to the heme b3/feb active site , where no is reduced to n2o ( figure 1 ) . the active site of nor consists of a high - spin ( hs ) heme b3 and a feb coordinated by three histidine residues and one glutamate residue ( figure 1 ) . three mechanisms of no reduction by nors have been proposed ( scheme 1 ) . briefly , the trans mechanism suggests that the heme b3 and feb sites each bind one no before n–n bond formation , while in the cis heme b3 mechanism , a second no electrophilically attacks a heme - bound no . in the cis feb mechanism , both no molecules are proposed to bind and be activated at the nonheme feb site . ( figure 1 : representation of the electron - transfer pathway of cnor ( left ) and the active site structure including feb ( orange sphere ) , pdb 3o0r ( right ) . ) enzymatic and mechanistic studies of native bacterial nors are complicated by the presence of several metal sites ( three hemes and a nonheme iron ; see figure 1 ) , which makes spectroscopic studies difficult , as well as by difficulties in purifying the protein in high yield and homogeneity because nors are membrane proteins . synthetic models of nor have been used to complement the study of the native enzyme with great success . the recent progress made in synthetic modeling has been summarized by other articles in this special forum and will not be duplicated here . to complement both native enzyme and synthetic modeling approaches , we have used small , stable , easy - to - purify , and well - characterized proteins such as myoglobin ( mb ) as scaffolds to make biosynthetic models of more complex metalloenzymes . while a great deal of effort has been put forth to understand both the structure and function of native enzymes and their variants using biochemical and biophysical methods , an ultimate test of our knowledge of this class of enzymes is creating functional models that mimic both the structure and function of the native enzymes . in contrast to studying native proteins in a top - down approach , which can identify structural features necessary for function , biosynthetic modeling is a bottom - up approach that elucidates structural features sufficient for activity .
furthermore , the biosynthetic models may be amenable to investigation in ways that have not yet been developed for the native enzymes ( e.g. , replacement of the heme in the active site with a non - native cofactor , such as zinc(ii ) protoporphyrin ix ( ppix ) ) . on the other hand , thanks to recent progress in molecular biology and protein biochemistry , protein models can now be prepared more readily than by the chemical synthesis of models of complex metalloenzymes such as nor , because the latter method requires rigorous synthetic skills . for example , it normally takes about 1 week to construct , express , and purify protein models with a yield of 100 mg / l of escherichia coli culture ; it would take much longer , with lower yield , to prepare heme–feb models chemically , because the synthesis of porphyrin - containing models is quite challenging and requires multistep synthesis . despite the challenges associated with preparing synthetic analogues of complex enzymes , remarkable progress has been made in understanding the structure / function of complex enzymes using synthetic models . furthermore , it is becoming clear that noncovalent , secondary coordination sphere interactions around the primary coordination sphere , such as hydrophobicity and hydrogen - bonding interactions , often involving structurally well - defined water , can play a key role in enzymatic function . addressing these issues in modeling requires a rigid framework that allows the introduction of these elements at specific locations . biosynthetic modeling is an ideal choice for addressing this issue because such secondary coordination sphere interactions can be conveniently introduced at a specific location of the rigid protein scaffold , without elaborate synthesis of model compounds that may be more flexible . therefore , although biosynthetically designed protein models are intermediate in complexity between the complex native proteins and synthetic analogues , they contain many features of both the proteins and the small - molecule models , providing us with unique constructs to understand the structure , function , and mechanism of complex metalloenzymes . in this article we will use the following nomenclature convention to designate the various biosynthetic models and their corresponding metallated and nitrosylated derivatives . febmb1 and febmb2 represent the first and second generations of biosynthetic model proteins , respectively . the corresponding metallated derivatives are represented as m - febmb1(fe ) , where m represents the metal ion , with its designated oxidation state ( e.g. , ii ) , that occupies the nonheme feb center , and fe represents fe - protoporphyrin ix ( heme ) in the heme - binding site . when the feb center is empty , it will be represented as e - febmb1(fe ) . when the fe - protoporphyrin ix ( heme ) is replaced with zn - protoporphyrin ix , it will be designated as m - febmb1(zn ) . based on this convention , fe - febmb1(fe ) and fe - febmb2(fe ) denote the fully iron - loaded first- and second - generation models , and the corresponding nitrosyl derivatives , feno - febmb1(zn ) and fe - febmb1(feno ) , indicate no binding to the nonheme feb center and to the heme fe center , respectively . in order to complement studies of native nors and their synthetic models , our group utilizes small , easy - to - purify , and well - characterized proteins like mb as the scaffold to prepare biosynthetic models of nors .
this endeavor was built upon our initial success in using mb to prepare structural and functional models of heme copper oxidases ( hcos ) by introducing a cub center in the distal pocket of sperm whale myoglobin ( swmb ) through l29h / f43h mutations ( called cubmb ) . because nors and hcos belong to the same superfamily with similar overall structural folds , and some hcos have been shown to exhibit nor activity , we first investigated the cross reactivity of this hco mimic cubmb to reduce no and found that the presence of cu at the cub site in cubmb indeed displayed nor activity with consumption of 2 mol of no / mol of cubmb / min , similar to that of hco from thermus thermophilus ( 3 mol of no / mol of hco / min ) . encouraged by the above success , we turned our attention to mimic both the structure and function of native nors . at the time of our pursuit , there was no crystal structure of nor available to guide the rational design of a nor model using mb . however , biochemical studies and sequence homology analysis have indicated that , in addition to the presence of fe in the feb center ( instead of cu in the cub center ) , nors contain at least two conserved glu residues in the active site that are absent in the hcos . because cubmb did not bind fe and thus did not show any nor activity in the absence of a metal ion in the nonheme metal center , we decided to introduce a glu to the cubmb . after evaluation of several positions to introduce the glu through computer modeling and energy minimization , we found the best candidate to be v68e , called e - febmb1(fe ) ( l29h / f43h / v68e swmb ) . this protein binds fe readily ( figure 2a ) and the metal bound fe - febmb1(fe ) displays nor activity , making it the first structural and functional model of nor . overlays of cnor ( yellow ) with ( a ) fe - febmb1(fe ) ( cyan ) , ( b ) fe - febmb2(fe ) ( green ) , and ( c ) fe - febmb1(zn ) ( magenta ) . feb sites are shown as brown spheres and amino acid residues as sticks , and the water molecule involved in hydrogen bonding is shown as a cyan sphere in part b. because there are at least two conserved glu in the active site of nors , we decided to investigate the role of the second glu in the biosynthetic models . while there is no room to introduce the second glu in the primary coordination sphere of the feb center , we evaluated introducing the second glu in the secondary coordination sphere of the feb center . we found i107e to be at an ideal location to provide an extended hydrogen - bonding network around the feb center ; thus , a second - generation model ( i107e - febmb1(fe ) , called e - febmb2(fe ) was prepared , which also bound fe(ii ) in the feb site and the metallated derivative fe - febmb2(fe ) improved the nor activity over fe - febmb1(fe ) by nearly 100% . both fe - febmb1(fe ) and fe - febmb2(fe ) were prepared before the publication of the first crystal structure of cytochrome c dependent nor ( cnor ) . after the crystal structure of cnor became available , we overlaid the crystal structures of fe - febmb1(fe ) and fe - febmb2(fe ) with that of native nor ( figures 2a , b ) and were pleased to see that , in addition to displaying nor activity , both biosynthetic models mimic native cnor structurally . 
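the structural overlays described above ( figure 2 ) can be reproduced in outline with a least - squares superposition of matched active - site atoms ; a minimal sketch with biopython , in which the model coordinate file name , the chain ids , and the cnor residue numbers are hypothetical placeholders ( only pdb 3o0r and the mb positions 29 , 43 , 68 , and 107 come from the text ) , is :

```python
# minimal sketch: superimpose matched active-site C-alpha atoms of a biosynthetic
# model onto cNOR (PDB 3O0R, cited in the text); selections are placeholders
from Bio.PDB import PDBParser, Superimposer

parser = PDBParser(QUIET=True)
cnor  = parser.get_structure("cnor", "3o0r.pdb")          # reference structure
model = parser.get_structure("model", "febmb_model.pdb")  # hypothetical file name

def ca_atoms(structure, chain_id, residue_ids):
    """Collect C-alpha atoms for the listed residue numbers in one chain."""
    chain = structure[0][chain_id]
    return [chain[resid]["CA"] for resid in residue_ids]

# matched lists must have the same length and pairing order
fixed  = ca_atoms(cnor,  "B", [207, 258, 259, 280])   # hypothetical cNOR numbering
moving = ca_atoms(model, "A", [29, 43, 68, 107])      # FeBMb mutation sites

sup = Superimposer()
sup.set_atoms(fixed, moving)   # least-squares fit of moving onto fixed
print(f"active-site C-alpha RMSD: {sup.rms:.2f} A")
```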
this accomplishment , achieved through computational modeling guided by homology modeling with structurally related proteins and by activity that mimics those of native enzymes , demonstrated the immense potential of biosynthetic approach in making close structural and functional models of native enzymes . spectroscopic studies of fe - febmb1(fe ) and fe - febmb2(fe ) using fourier transform infrared ( ftir ) and resonance raman ( rr ) have shown that heme - bound no adopts a strong nitroxyl character through interactions with the nonheme iron , and time - resolved rapid - mixing experiments provided evidence for both heme and nonheme nitrosyl complexes , supporting the trans mechanism . additionally , electron paramagnetic resonance ( epr ) studies of fe - febmb2(fe ) reacted with excess no showed the formation of a five - coordinate low - spin ( 5cls ) ferrous heme species due to cleavage of the proximal histidine bond . epr measurements taken below 30 k of fe - febmb1(fe ) and fe - febmb2(fe ) upon the addition of 1 equiv of no show signals at g = 6.1 , which likely arise from exchange coupling of an s = /2 6cls { feno } heme and s = 2 nonheme fe . heme nitrosyl species have been spectroscopically probed , but a more complete understanding of the nonheme nitrosyl was limited to only a few studies because the spectroscopic signals of the heme often dominate the spectra of nors over the nonheme iron center , hampering our understanding of the role of nonheme iron in nor reactivity . recently , we have replaced the native heme ( iron protoporphyrin ix , feppix ) in fe - febmb1(fe ) with znppix , which also bound fe(ii ) , and the fe - febmb1(zn ) derivative allowed for a thorough spectroscopic and computational investigation into the feb nitrosyl complex selectively , without interference from the heme nitrosyl . by using uv vis absorbance , epr , and mssbauer spectroscopies , as well as x - ray crystallography ( figure 2c ) and density functional theory ( dft ) calculations , the nonheme nitrosyl was characterized as an s = /2 { feno } complex , using the enemark feltham notation , best described as a hs ferrous iron antiferromagnetically coupled to an no radical . to further probe the interaction of our nor model with no , we report here uv vis and nuclear resonance vibrational spectroscopy ( nrvs ) measurements . using this information , we can systematically characterize the intermediates illustrated in scheme 1 because they are likely to have distinctly different spectroscopic signatures . nrvs gives a complete and quantitative vibrational frequency spectrum for fe - enriched nuclei . it offers a selectivity similar to that of rr spectroscopy but is not bound by the optical selection rules of rr or ir spectroscopy . this is especially important in the study of nors , given that the fe no stretching mode vibrations are ir - silent and decompose upon laser irradiation . unfortunately , performing nrvs requires extremely concentrated protein samples ( > 5 mm ) that are impractical when working with native enzymes such as nor . however , studies have already been performed on mb and its mutants , thus offering our nor mimics a unique opportunity to utilize this advanced spectroscopic technique that would otherwise be inaccessible to native proteins . vis spectroscopy was used to monitor no binding to fe - febmb1(fe ) and zn - febmb1(fe ) during nrvs sample preparation . representative spectra are shown in figure 3 . 
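the soret - band analysis of figure 3 , described in detail below , resolves the nitrosylated samples into overlapping 6cls and 5cls components ; a minimal curve - fitting sketch of such a two - band deconvolution , using synthetic data and gaussian band shapes as an assumption ( real analyses may use other line shapes ) , is :

```python
# synthetic absorbance trace built from two overlapping bands, then re-fit;
# the 419/398 nm centres echo the 6cLS/5cLS components described below
import numpy as np
from scipy.optimize import curve_fit

def gauss(x, amp, centre, width):
    return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

def two_bands(x, a1, c1, w1, a2, c2, w2):
    return gauss(x, a1, c1, w1) + gauss(x, a2, c2, w2)

wavelength = np.linspace(370, 450, 400)
absorbance = two_bands(wavelength, 0.8, 419, 6, 0.5, 398, 10)
absorbance += np.random.default_rng(0).normal(0, 0.005, wavelength.size)

p0 = [1.0, 420, 5, 0.4, 400, 8]   # rough starting guesses for the two components
popt, _ = curve_fit(two_bands, wavelength, absorbance, p0=p0)
print("fitted band centres (nm):", round(popt[1], 1), round(popt[4], 1))
```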
upon the addition of 1 equiv of no to fe - febmb1(fe ) , where the nonheme site was reconstituted with fe , the soret peak at 433 nm ( figure 3 , black curve ) underwent a blue shift to 419 nm ( figure 3 , red curve ) , corresponding to the formation of a 6cls { feno } species and another broad peak at 398 nm corresponding to a 5cls { feno } species ( figure 3 , red curve and inset ) , and this assignment is confirmed by nrvs results ( vide infra ) . the addition of 1 equiv of no to the zn - febmb1(fe ) sample also caused a blue shift of the soret peak from 434 nm ( figure 3 , cyan curve ) to 403 nm ( figure 3 , purple curve ) , corresponding to the formation of a 5cls { feno } species . vis spectra of fe - febmb1(fe ) ( black curve ) , zn - febmb1(fe ) ( cyan curve ) , and the corresponding mononitrosyl derivatives fe - febmb1(feno ) ( red curve ) and zn - febmb1(feno ) ( purple curve ) . the inset shows deconvolution of the soret region of fe - febmb1(feno ) demonstrating two components : one peak at 419 nm corresponding to the 6cls { feno } species and a second broader peak at 398 nm corresponding to the 5cls { feno } species . the presence of both 6cls and 5cls { feno } species is consistent with the nrvs results ( vide infra ) . in the case of the zn - febmb1(feno ) sample , the presence of the soret peak at 403 nm is also consistent with the presence of 5cls { feno } also observed by nrvs ( vide infra ) . nrvs exploits technology developed at third - generation synchrotron light sources to monitor the vibrational properties of mssbauer nuclei , including fe . tuning of a monochromatic x - ray beam in the vicinity of the nuclear resonance reveals vibrational sidebands displaced from the recoilless resonance observed in conventional mssbauer spectroscopy . a growing number of nrvs applications exploit its exclusive and quantitative sensitivity to vibrational motions of the probe nucleus . specifically , each vibrational mode contributes to the measured signal in direct proportion to the mean - squared displacement of the probe nucleus along the beam direction , and well - established data analysis methods directly extract a partial vibrational density of states ( vdos ) for that measurement . for a randomly oriented ensemble of molecules containing fe , each vibrational normal mode contributes to the fe vdos an area equal to the fraction efe2 of the mode s kinetic energy associated with motion of the fe nucleus . the information content of the vdos is quantitative , allowing direct comparison with vibrational predictions on an absolute scale . the vdos is comprehensive because all vibrations involving fe contribute , without the artificial restrictions imposed by selection rules in more familiar vibrational spectroscopies ( ir and raman ) . finally , the vdos is uniquely site - selective because only motion of the fe will contribute , even in a macromolecule containing thousands of other atoms . on the basis of these characteristics , nrvs is a uniquely valuable probe of protein active sites containing iron . several investigations have reported the vibrational dynamics of iron in heme proteins and iron porphyrins using nrvs . nrvs measurements on oriented single crystals of iron porphyrins exploit the sensitivity to motion along the direction of the x - ray beam to provide additional insights into the interpretation of the results on heme proteins . 
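the statement above that each normal mode contributes to the ⁵⁷fe vdos in proportion to the fraction e²Fe of its kinetic energy carried by the iron is commonly written as follows ( a standard form of the nrvs mode - composition factor , not a transcription of this article's own equations ) , where r_{j,α} is the displacement of atom j in mode α :

```latex
% standard NRVS mode-composition factor (assumed form)
e_{\mathrm{Fe},\alpha}^{2}
  \;=\; \frac{m_{\mathrm{Fe}}\,\lvert \mathbf{r}_{\mathrm{Fe},\alpha}\rvert^{2}}
             {\sum_{j} m_{j}\,\lvert \mathbf{r}_{j,\alpha}\rvert^{2}} ,
\qquad
D_{\mathrm{Fe}}(\bar{\nu}) \;=\; \sum_{\alpha} e_{\mathrm{Fe},\alpha}^{2}\,
      \delta\!\left(\bar{\nu}-\bar{\nu}_{\alpha}\right)
```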
proteins with nonheme iron sites are equally amenable to nrvs investigations , which have informed the structural characterization of reaction intermediates in nonheme iron enzymes . vibrational spectra resulting from previous nrvs measurements on proteins containing multiple iron atoms contain superposed contributions from all sites . here , we demonstrate for the first time that we can specifically label either the heme or nonheme iron sites of fe - febmb1(fe ) with fe , allowing us to independently monitor vibrations of iron at each site . partial unfolding of the protein at low ph allows removal of the heme and reconstitution of the protein with fe - enriched ppix , following the same procedure as that used for previous nrvs investigations on native mb . heme vibrations will dominate the nrvs signal of the reconstituted protein , even after the incorporation of natural abundance iron ( or another metal ) into the nonheme site because the natural abundance of fe is only 2% . similarly , the incorporation of fe into the nonheme site of e - febmb1(fe ) reconstituted with natural abundance heme should allow us to specifically monitor the vibrations of the nonheme iron . the vdos determined from nrvs measurements on reduced fe - febmb1(fe ) ( figure 4 ) demonstrate that specific labeling of the heme and nonheme iron sites with fe allows us to distinguish vibrations at distinct sites within the same protein . the vibrational signal from the heme iron strongly resembles that reported for native mb . the dominant feature of the vdos includes contributions from vibrations of the axial fe nhis bond to his 93 and of the equatorial fe npyr bonds to the four heme pyrrole nitrogen atoms at approximately 230 and 250 cm , respectively . the fe nhis frequency is well - known from rr measurements on heme proteins , where this vibration is strongly enhanced upon excitation into the soret band . the nrvs signal is determined by the relative amplitude of iron motion and also includes the fe npyr vibrations , which are not easily observable using other spectroscopies . site - selective enrichment of fe - febmb1(fe ) with fe allows independent monitoring of iron vibrations at either the heme or nonheme site ( fe - febmb1(fe ) ) . the upper and lower traces present the partial vdos of the heme and nonheme iron of the reduced proteins , respectively , derived from such measurements and reflect the distinct coordination of iron at each site . the heme vdos is nearly identical with that reported for reduced native mb from horse heart , where contributions from the fe nhis vibration perpendicular to the heme and vibrations of the in - plane fe npyr bonds to the heme pyrrole nitrogen atoms were identified . although individual vibrations are not resolved for the less symmetric nonheme site , the stiffness derived from the vdos ( table 1 ) , nevertheless , reflects the lower coordination of iron in this environment . here and in subsequent figures , error bars reflect experimental uncertainties determined from counting statistics , while solid traces represent a five - point running average of the experimental vdos . the nonheme iron of reduced fe - febmb1(fe ) displays a clearly distinct vibrational signal dominated by a broad feature with a peak near 230 cm . the crystallographic model includes his 29 , his 43 , his 64 , glu 68 , and a water molecule as ligands to the nonheme iron . 
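to make the stiffness "derived from the vdos" quoted above concrete , the sketch below estimates an effective force constant as the mass - weighted second moment of a coarse , purely illustrative fe vdos ; the normalization convention and the line shape are assumptions , and the exact definition used by the authors ( their eq 1 ) may differ in detail :

```python
import numpy as np

AMU = 1.66054e-27            # kg
C_CM = 2.99792458e10         # speed of light in cm/s (wavenumbers are in cm^-1)
M_FE57 = 57 * AMU            # mass of the 57Fe probe nucleus

# illustrative Fe partial VDOS: one broad band centred near 230 cm^-1,
# normalized to unit area per degree of freedom (an assumed convention)
nu = np.linspace(1.0, 600.0, 600)                 # wavenumber grid, cm^-1
dnu = nu[1] - nu[0]
dos = np.exp(-0.5 * ((nu - 230.0) / 40.0) ** 2)
dos /= dos.sum() * dnu

omega = 2.0 * np.pi * C_CM * nu                   # angular frequency, rad/s
k_eff = M_FE57 * np.sum(omega ** 2 * dos) * dnu   # mass-weighted second moment, N/m

# 1 N/m equals 1 pN/pm, so this prints directly in the units used in table 1
print(f"effective force constant ~ {k_eff:.0f} pN/pm")
```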
the relatively featureless signal observed for the nonheme iron apparently masks the vibrational structure that one might expect in light of this diverse ligand field , and the reduced symmetry in comparison with the heme site may further increase the vibrational complexity . we identified multiple unresolved vibrational modes contributing to a similar broad vibrational feature in reduced cytochrome c , based on a quantitative comparison with ⁵⁴fe/⁵⁷fe isotope shifts observed in rr measurements , and attributed this complexity to the reduced symmetry of iron coordination in the distorted heme . conformational heterogeneity is well - documented for native mb and may also broaden vibrational features . regardless of the reasons , the unresolved vibrational complexity may hinder the identification of well - defined iron–ligand vibrations . fortunately , the vibrational information revealed by the nrvs measurement yields a quantitative measure of the coordination strength even in the absence of detailed vibrational frequency assignments . the vdos d(ν̄) determines the stiffness ( eq 1 ) , an effective force constant that directly measures the force required to displace the iron with its coordination environment held fixed . the stiffness for both sites is much lower than that for the ls heme iron in reduced cytochrome c , where the stiffness was more than 300 pn / pm . the stiffness is consistent with the presence of a hs iron at both sites in reduced fe - febmb1(fe ) and fe - febmb1(fe ) . the stiffness of the heme iron in reduced fe - febmb1(fe ) is the same as that in native mb ( table 1 ) , confirming the expectation that the introduction of the nonheme metal site does not significantly affect the coordination strength of the heme iron . however , the stiffness of the nonheme iron environment is significantly lower than that determined for the heme iron . the slightly lower force restraining the iron in the nonheme site presumably reflects its reduced coordination . similar conclusions follow from data recorded on oxidized fe - febmb1(fe ) ( figure 5 ) . a feature near 270 cm⁻¹ dominates the heme iron vdos , which strongly resembles that previously reported for native swmb . because iron–ligand vibrations are undoubtedly the primary contribution to this feature , this indicates that the ligation of the heme iron , to his 93 and to a neutral water molecule , is the same in fe - febmb1(fe ) as it is in native mb . in particular , these data provide no indication of an oxo group bridging the two iron sites , as observed for the oxidized state of nor ( scheme 1 ) , and the hs nrvs signal from the heme iron contrasts with the ls heme iron reported for cnor from ps . aeruginosa . the iron vdos of oxidized fe - febmb1(fe ) , shown in the upper trace , strongly resembles that reported for native mb from sperm whale , indicating that coordination of the heme iron is unaffected by the presence of the additional nonheme iron engineered in the distal pocket . in spite of the limited signal , the nonheme iron vdos in fe - febmb1(fe ) ( lower trace ) clearly reports vibrations from a distinct iron site characterized by a reduced coordination strength . as seen above for reduced fe - febmb1(fe ) , the vdos for the nonheme iron is clearly distinct from that for the heme iron , in spite of a relatively low ⁵⁷fe concentration and a consequently reduced signal in the fe - febmb1(fe ) sample .
this dual confirmation of successful site - specific labeling of each site illustrates the opportunity to probe the reactivity of each iron independently . the iron vdos determines an additional averaged force constant , the resilience ( eq 2 ) , which provides information distinct from the stiffness . as defined more generally by zaccai ( eq 3 ) , the resilience measures the rate at which the mean - squared displacement ⟨xFe²⟩ of the probe atom ( here , iron ) increases with temperature . nrvs lacks the energy resolution to capture highly anharmonic motions that contribute to temperature - dependent measurements of ⟨xFe²⟩ using techniques such as inelastic neutron scattering or mössbauer spectroscopy above a dynamical transition near 200 k. on the other hand , at temperatures below 200 k , we have shown quantitative agreement between determinations of ⟨xFe²⟩ from mössbauer measurements on oxidized cytochrome c at a series of temperatures and the values expected on the basis of the iron vdos determined using nrvs at a single temperature . the vibrational contribution to the resilience ( eq 2 ) captures this temperature variation in a single parameter , with lower values of the resilience characterizing environments with large fluctuations of the fe probe atom . the resilience spectrum ( eq 4 ) suppresses contributions from localized iron–ligand vibrations and highlights low - frequency oscillations of the protein that drive translational motion of both iron sites in reduced fe - febmb1(fe ) and fe - febmb1(fe ) . quantitative agreement between the areas determined for both sites yields values for the resilience that are identical , within experimental uncertainty ( table 1 ) . nevertheless , comparison as a function of frequency reveals subtle differences in the coupling of long - range protein fluctuations to these two sites . low - frequency vibrations play the primary role in determining the resilience , as we illustrate by directly plotting the integrand of eq 2 ( eq 4 ) as a resilience spectrum in figure 6 . the resilience is equal to the inverse of the area of this spectrum and is primarily determined by vibrations below 100 cm⁻¹ . molecular dynamics simulations on mb and cytochrome c show similar mean - squared displacements for all heme atoms , including iron , supporting our suggestion that translation of the heme in response to fluctuations of the embedding protein matrix makes the primary contribution to the nrvs signal below 100 cm⁻¹ . as a result , we interpret the resilience as a measure of the elastic properties of the protein environment . previously , we found a significant increase of the resilience in cytochrome c in comparison with mb . here , we find that the resiliences of reduced fe - febmb1(fe ) and native mb are the same , within experimental uncertainty , suggesting that the introduction of the additional nonheme metal site does not seriously perturb the elastic properties of the protein . moreover , small differences in the coupling of the protein fluctuations to the heme and nonheme iron sites , apparent from examination of figure 6 , average out to yield values for the resilience that agree quantitatively for the two distinct sites , within the experimental uncertainty . this further supports the notion that the resilience quantifies global properties of the embedding protein and contrasts with the sensitivity of the stiffness to differences in the coordination of the two iron sites .
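for reference , the quantities discussed above are usually defined along the following lines ( the article's own eqs 1–4 did not survive extraction and are not reproduced verbatim ; these are the standard harmonic - approximation forms , with d(ν̄) the fe partial vdos normalized per degree of freedom and ω = 2πc ν̄ ) :

```latex
% commonly used harmonic-approximation forms (assumed; stand-ins for eqs 1-4)
k_{\mathrm{stiff}} \;=\; m_{\mathrm{Fe}} \int \omega^{2}\, D(\bar{\nu})\, d\bar{\nu}
\qquad\text{(stiffness: force to displace Fe with its environment fixed)}

\langle x_{\mathrm{Fe}}^{2}\rangle(T) \;=\;
   \int \frac{\hbar}{2\, m_{\mathrm{Fe}}\,\omega}\,
   \coth\!\left(\frac{\hbar\omega}{2 k_{B} T}\right) D(\bar{\nu})\, d\bar{\nu}

k_{r} \;=\; k_{B}\!\left(\frac{d\langle x_{\mathrm{Fe}}^{2}\rangle}{dT}\right)^{-1}
\;\;\xrightarrow{\;\text{classical limit}\;}\;\;
   m_{\mathrm{Fe}}\left[\int \frac{D(\bar{\nu})}{\omega^{2}}\, d\bar{\nu}\right]^{-1}
\qquad\text{(resilience, after Zaccai)}
```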
in short , the resilience is an outer - sphere force constant that probes the elasticity of the embedding protein , in contrast with the stiffness , which probes the immediate coordination environment . the vibrational dynamics of the heme iron undergo noticeable changes upon exposure to no . the vibrational signal from fe - febmb1(feno ) containing ⁵⁷fe - enriched hemes covers a wider frequency range than that in the absence of no , with significant features resolved beyond 500 cm⁻¹ ( figure 7 ) . the experimentally determined stiffness for zn - febmb1(feno ) exceeds 300 pn / pm ( table 1 ) , indicating a substantial increase in the coordination forces exerted on the iron . we observed stiffnesses exceeding 300 pn / pm for the ls heme iron in reduced cytochrome c. the heme iron vdos reveals that the presence of a second metal in the nonheme site influences no binding to fe - febmb1(fe ) and zn - febmb1(fe ) . fe–no stretching vibrations , clearly resolved above 400 cm⁻¹ , probe the axial ligation . for reference , dashed lines indicate fe–no stretching frequencies reported for native horse heart mbno ( 452 cm⁻¹ ) , characteristic of a six - coordinate complex with no coordinated trans to a histidine ligand , and for fe(dpix)(no ) ( 528 cm⁻¹ ) , a typical five - coordinate heme–no complex . a substantial fraction of hemes exhibits an fe–no stretching frequency characteristic of five - coordinate heme nitrosyls when either zn or fe is present in the nonheme site . this contrasts with previous measurements on native mbno , which revealed an nrvs signal consistent with six - coordinate heme–no . the presence of well - resolved features yields more specific information on individual fe–ligand bonds . in particular , previous nrvs measurements have identified an fe–no stretching mode in the 520–530 cm⁻¹ range in five - coordinate heme–no complexes ; binding of an imidazole ligand trans to no weakens the fe–no bond , and we observe this mode at lower frequencies , in the 450–460 cm⁻¹ range , for six - coordinate heme–no complexes . one well - characterized six - coordinate heme–no complex is native mbno , where this fe–no vibration appears at 452 cm⁻¹ and contributes to both the rr and nrvs signals . unlike native mbno , fe–no stretching frequencies near 530 cm⁻¹ characteristic of five - coordinate heme–no contribute to the experimental vdos of fe - febmb1(feno ) and zn - febmb1(feno ) upon the introduction of a divalent metal in the nonheme site ( figure 7 ) . this result supports previous evidence for the formation of a five - coordinate heme–no complex in fe - febmb1(feno ) exposed to excess no , which was based on observation of the same fe–no vibration at 522 cm⁻¹ using rr spectroscopy as well as the presence of a 1660 cm⁻¹ n–o stretching frequency in ir measurements . the altered heme ligation in response to the neighboring nonheme metal contrasts with the insensitivity of the unligated heme to the nonheme metal noted above ( figures 4 and 5 ) and demonstrates that the nonheme metal specifically influences the structure of the heme–ligand complex . in the presence of 1 equiv of no , the fe - febmb1(feno ) vdos ( figure 7 ) also has a feature at 480 cm⁻¹ that we attribute to six - coordinate heme–no . because the nrvs signal depends only on the mean - squared vibrational amplitude of the iron and on the relative population of contributing species , the fe - febmb1(feno ) vdos suggests comparable amounts of five- and six - coordinate heme–no ( figure 7 ) .
interestingly , the fe no frequency is significantly increased with respect to that observed for native mbno , providing additional information on how the nonheme metal influences the electronic structure of the neighboring heme no complex . the contribution of a vibrational signal attributable to six - coordinate heme no is significantly reduced for zn - febmb1(feno ) in the presence of 1 equiv of no . an additional vibration of the feno unit appears near 560 cm1 in six - coordinate heme no complexes . experimental characterization of its kinetic energy distribution based on isotope shifts indicates that this vibrational mode primarily involves motion of the central nitrogen atom of the feno unit . on this basis , this n - centered vibration can be qualitatively described as an feno bending mode to distinguish it from the fe no stretching mode that contributes more strongly to the nrvs signal . however , it must be emphasized that neither mode can exhibit pure feno bending or fe no stretching character for the nonlinear feno unit . both modes exhibit rather modest soret enhancement in raman scattering from six - coordinate heme no complexes , but the feno bending frequency is more reliably detected in heme proteins because of its large sensitivity to 15n/14n substitution and is thus more often reported . although the iron amplitude and thus the nrvs signal is necessarily smaller for the feno bending vibration , the fe - febmb1(feno ) vdos includes minor features near 380 and 580 cm1 consistent with contributions from the feno bending vibration of five- and six - coordinate heme no complexes , respectively , supporting conclusions based on the stronger fe no stretching frequency discussed above . raman and ir measurements on fe - febmb1(feno ) resulting from reaction with stoichiometric no also identify feno bending and n o stretching frequencies that are 1520 cm1 higher and 7080 cm1 lower , respectively , than those typically observed for six - coordinate heme no . together , the unusual values for all three vibrations of the feno fragment suggest that the nonheme fe strongly perturbs the electronic structure of heme no . in particular , it is conceivable that the fe cation electrostatically predisposes the heme - bound no to the electron transfer that will ultimately be required for reactivity . as found above , the nonheme iron influences the coordination of the heme , strengthening the fe no bond and weakening the fe his bond . in contrast , the vibrational dynamics of the nonheme iron in fe - febmb1(fe ) exposed to 1 equiv of no do not differ significantly from those observed in the absence of no ( figure 8 ) . this indicates that the ligation and electronic structure of the nonheme iron are insensitive to the structural and electronic changes that take place upon no binding to the heme iron . moreover , it indicates that the nonheme iron has a much lower affinity for no than the heme iron does . the vdos of the nonheme iron atom reveals perturbations in fe - febmb1(zn ) when the nonheme site is saturated with excess no ( see materials and methods for details of sample preparation ) . when fe is present in the heme site , on the other hand , the nonheme iron vdos exhibits no significant change upon reaction with stoichiometric no , in contrast with the clear signatures for no binding to the heme iron seen in figure 7 . replacement of the heme iron with znppix eliminates the possibility of no binding to the heme and allows the investigation of no binding to the nonheme iron selectively .
the vdos of the nonheme iron in the resulting fe - febmb1(zn ) ( figure 8) in the absence of no strongly resembles that observed for fe - febmb1(fe ) under the same conditions ( figure 4 ) , indicating that the structure of the nonheme iron site is insensitive to the substitution of the heme metal . however , noticeable changes in the nonheme vdos of fe - febmb1(zn ) take place in the presence of excess no ( figure 9 ) , which we attribute to the binding of no to the nonheme iron forming the feno - febmb1(zn ) under these conditions . computational models for the no - ligated nonheme feb site in feno - febmb1(zn ) using differing functionals yield quantitative predictions for the iron vdos . comparison with the experimental vdos for feno - febmb1(zn ) in the presence of excess no is consistent with a substantial contribution from no - ligated iron . the red trace indicates the contribution from iron motion along the fe no bond direction and highlights the variability of the predicted fe no stretching frequency , which shifts from 376 cm using b3lyp to 454 cm using m06l . one significant advantage of the nrvs method is the relative ease of quantitative comparison with dft predictions . overall , such comparisons provide useful guidance for interpreting experimental results , but we have found that predicted vibrational frequencies for the feno fragment exhibit significant dependence on the functional used for dft calculations . for five - coordinate heme no complexes , predicted fe no stretching frequencies vary by nearly 200 cm . dft investigations of nonheme feno complexes also reveal significant variability of the electronic structure predicted using different functionals . it remains to be established whether any currently available functional adequately accounts for electron correlation in iron nitrosyl complexes . examination of a wide variety of functionals found that m06l gave the best overall account of the iron vdos for the five - coordinate nitrosyl heme complex fe(oep)(no ) . the vdos predicted using this functional is presented both as the lower trace in figure 9 and , for comparison , as a filled area behind the experimental vdos in the upper trace of figure 9 . the m06l prediction does exhibit significant correspondence with the experimental signal , supporting the conclusion that direct no ligation accounts for the observed vibrational changes . unfortunately , because of the limited fe concentration , the relatively low signal level from this sample precludes an experimental identification of the fe no stretching frequency . the biosynthetic models have allowed us to provide insight into nors that is otherwise difficult to obtain in studying native enzymes . for instance , to the best of our knowledge , the nonheme feb in native nors has not been replaced or removed , making it difficult to assess the role of feb in the activity of nors . in contrast , because the biosynthetic models are purified without a nonheme metal ion , investigations into the role of iron or other nonheme metals is greatly simplified by changing the nonheme metal source that is used ( e.g. , fecl2 vs zncl2 ) . therefore , our biosynthetic model allowed us to answer the previously unaddressed question of what would happen if feb was replaced with cub . given the structural homology between hcos and nors and their known cross reactivity , a fascinating issue arises as to the role of each class of enzymes different nonheme metal . 
activity assays using either iron or copper as the nonheme metal ion demonstrated nor activity , while controls using redox - inactive zinc did not . this study demonstrated the critical insight that a redox - active metal ion is needed to confer nor activity , an insight that would not be possible to obtain by studying the native enzymes . in addition , we demonstrated that the glutamate ligand to feb is essential for both iron binding at the nonheme site and nor activity . finally , an extended hydrogen - bonding network was shown to be a critical component of improved nor activity in our biosynthetic models when a glutamate residue ( i107e ) was introduced into the secondary coordination sphere to facilitate proton transfer to the active site . the ultimate goal of studying native enzymes and their models is to unravel the details of how they work and apply that understanding to other related enzymes as well as biomedical and biotechnological applications . this goal can best be achieved by thorough mechanistic characterization , which has been carried out with great success in our biosynthetic models of nors . for example , although progress has been made in elucidating the structural aspects of nors , aided by recent success in solving the x - ray structure of cnor , understanding the mechanism of the enzyme continues to be problematic due to several technical barriers . to illustrate , even though the isolated enzyme cnor is reactive to no with a moderate turnover rate under steady - state conditions , in the reduced form the enzyme shows very slow turnover under pre - steady - state conditions because of the presence of an obscure structural form of the enzyme . flash - flow experiments with the carbonyl complex of cnor can result in fast reaction kinetics , but such experiments monitor only uv vis changes of the protein , which are dominated by the signals from the high - affinity heme site and do not provide any information on the events occurring at the feb site . apart from these experimental challenges , the presence of multiple configurations of the oxidized cnor further hinders our understanding of the mechanistic aspects . in one such configuration , where the enzyme exists as a -oxo diferric complex , strong magnetic coupling between the five - coordinate heme fe ( his is dissociated from the heme iron in this form ) and nonheme feb is observed , while in other cases only weak magnetic coupling was observed . in addition , there are no experimentally accessible methods to selectively probe the no complex of the feb site because the high - affinity heme site dominates spectroscopic signatures including uv vis , epr , mssbauer , and nrvs . owing to these practical problems , three mechanisms have been proposed for n n bond formation and cleavage of the n o bond ( scheme 1 ) . in the first route , the trans mechanism , one no molecule binds to each of the heme iron and nonheme feb sites in a trans configuration , where both iron centers are present as { feno } complexes . this step is followed by the reductive activation of the dinitrosyl moiety , leading to the formation of a hyponitrite dianion intermediate , where both iron centers are now oxidized to the ferric state . in the cis heme b3 mechanism , supported by theoretical studies , the first no binds to the heme fe , followed by reductive activation of the no complex , which is stabilized by electrostatic interactions with feb . next , a second no electrophilically attacks the first heme - bound no , leading to the formation of a hyponitrite dianion , which is electrostatically stabilized by feb .
finally , in the cis feb mechanism , both no units bind to the feb site and the hyponitrite dianion form is stabilized by electrostatic interactions with the heme fe . in all of these proposed mechanisms , it is also unclear how the dianion leads to the product formation , e.g. , whether this hyponitrite intermediate becomes protonated , followed by chemical rearrangement of the complex , is also not well understood . with these hurdles in understanding the mechanism of nor using native enzymes , simpler protein - based model systems that are stable , easy - to - prepare , and well - characterized are needed . to this end , engineered e - febmb1(fe ) and e - febmb2(fe ) and their corresponding metallated and nitrosyl derivatives have provided the much needed insight into the mechanistic aspects of nors , as summarized below . resonance raman studies have shown that in the reduced form both these models exist as 5chs heme in both the absence and presence of fe in the feb site . however , in the presence of 1 equiv of no , both proteins loaded with the feb site form stable 6cls { feno } complexes at the heme site . one important revelation from these studies was the presence of exceptionally low (no ) stretching and high (feno ) frequencies compared to all reported 6cls heme nitrosyl complexes . these results were attributed to ferric heme iron(iii ) nitroxyl ( feno ) complex , where no was stabilized by electrostatic interactions with the feb site . strong back - donation from heme iron caused an increase in the (feno ) frequency , while the negative charge on no resulted in a decrease in the (no ) frequency . in the event of excess no addition , both proteins form a [ { feno}]2 trans nitrosyl dimer , leading to n2o formation , supporting the so - called trans mechanism . under single - turnover conditions , using ftir studies , no n2o production was observed in febmb1 , suggesting that the presence of the feb site is not enough to reduce no to n2o . however , in fe - febmb2(fe ) , 50% n2o production was observed , suggesting that the presence of the second glutamate is critical for n2o formation , presumably by facilitating proton transfer via a hydrogen - bonding network during turnover . unproductive complexes in both the proteins are characterized by a trans dinitrosyl complex , where the heme iron is present as a 5cls { feno } species with a dissociated heme his bond and a second no bound to the feb site . surprisingly , from stopped - flow and rapid freeze quench experiments , no binding to the feb site was observed to be kinetically favored with a t1/2 of 1 ms , followed by binding of the second no to the heme iron , leading to the trans dinitrosyl 6cls { feno}-febno complex . this finding provided experimental evidence that feb binds no before it is bound to heme b3 , which was suggested previously , but not confirmed , in a study of ps . nautica nor . in feno - febmb1(feno ) , the dinitrosyl complex leads to the formation of a dead - end 5cls { feno } species , where the heme his bond is dissociated , but in feno - febmb2(feno ) , the presence of the second glutamate residue leads to 50% effective turnover , which results in a decreased rate of dissociation of the proximal heme his bond , leading to the formation of the dead - end complex . during the decay of the trans dinitrosyl complex ( 6cls heme { feno}/feb - no ) , the hyponitrite intermediate was not observed in either protein derivative , in contrast to proposed mechanisms . 
furthermore , when excess no was added after formation of the 6cls { feno } complex , the same dinitrosyl complexes of feno - febmb1(feno ) and feno - febmb2(feno ) were observed ( vide supra ) , ruling out the formation of any electrophilically attached second no . this observation , therefore , does not support the so - called cis heme b3 , as proposed by theoretical studies . as stated above , a major barrier to understanding the nor mechanism is the difficulty associated with isolating pure feb - no complexes due to the presence of a high - affinity heme site . to circumvent this critical methodological barrier in native nors , in a recent effort we spectroscopically probed no binding to the feb site after replacing the high - affinity heme with znppix . such a strategy can not be easily applied with native nors since the heme can not be selectively replaced because of the complex nature of the enzyme . from epr , mssbauer , and quantum mechanics / molecular mechanics calculations on the no derivative of feno - febmb1(zn ) , the electronic state of feb - no could be best described as hs s = /2 fe - no having a high ferrous character and a radical nature on no . the radical nature on no would promote n n bond formation by radical coupling with a heme - bound no and thus would support the trans mechanism . these results highlight the usefulness of biosynthetic models of complex enzymes within easy - to - produce and well - characterized proteins . taken together , the engineered nor models have provided important insights into the reaction mechanism of nor and support the proposed trans mechanism of no reduction by nors . results presented here exploit the ability to replace metals at either site , illustrating an important opportunity enabled by the biosynthetic approach . selective substitution with fe provides an independent structural probe for either of the two metal sites in fe - febmb1(fe ) . nrvs measurements quantify the forces exerted on fe by its coordination environment and indicate the presence of hs iron at both sites in the absence of substrate . reduced and oxidized proteins serve to model the initial and final states , respectively , in scheme 1 , although the vibrational signals provide no evidence for a solvent - derived ligand bridging the metals in the oxidized state . rather , vibrations of the heme iron are comparable to those reported for native mb , confirming that the heme is unperturbed by the engineered nonheme site . on the other hand , the observation of distinct vibrational dynamics for the nonheme iron confirms successful site - specific labeling with fe . substitution of a redox - inactive zn ion for fe allows the preparation of stable mononitrosylated intermediates that would precede the formation of the putative ( and unstable ) dinitrosylated intermediates depicted in scheme 1 . the presence of zn in the nonheme site perturbs the vibrational properties of the adjacent heme no complex , in a manner consistent with electron transfer to the no ligand . this suggests that the electrostatic influence of the nonheme fe in nors could act to promote the enzymatic reaction in either the trans or cis heme mechanism . with zn in the heme site , alteration of the nonheme iron vibrations upon exposure to no confirms that no can bind to the nonheme fe if the heme site is unavailable , as would be required in the trans mechanism . 
considerable advances in the biosynthetic modeling of nors have been achieved recently , and given that both the resting state and the ligand - bound and reduced forms of cnor have been crystallized , our understanding of nor modeling should only improve moving forward . importantly , models that can perform enzymatic turnover will be critical because we are unaware of any model , biosynthetic or otherwise , that is capable of reducing no with turnover numbers comparable to those of the native enzymes . in conjunction with this , new models that more closely replicate the secondary coordination sphere of native nors must be developed because these interactions are critical for improving activity . with improved structural information on the active site of nor , fine - tuning of factors such as the heme will become possible , and the creation of such models will provide further insights into the reaction mechanism and activity of nors . all samples were prepared in a 50 mm bis - tris ph 7.3 buffer after chelexing overnight , followed by ph adjustment and filtration to remove chelex beads . buffers were degassed in a schlenk line for 5 h by several cycles of freeze - pump - thaw prior to their transfer into an anaerobic chamber ( coy laboratories , inc . ) for sample preparations . dry sephadex g25 beads ( ge healthcare ) were suspended in a buffer solution and degassed in the schlenk line for several hours before transfer into the glovebag . all solid materials were kept under vacuum overnight in the antechamber prior to transfer into the glovebag . all protein solutions were exchanged from a 100 mm phosphate ph 7 buffer to a 50 mm bis - tris ph 7.3 buffer outside the glovebag using small size - exclusion columns ( pd 10 columns , ge healthcare ) preequilibrated in the exchange buffer . the protein solutions were then degassed by three cycles of freeze - pump - thaw in a schlenk line and brought into the glovebag . a diethylamine nonoate ( dea - nonoate ; 250 = 6.5 mm1 cm1 ; cayman chemicals ) solution prepared in 10 mm naoh and used as the no source was degassed by three cycles of freeze - pump - thaw before transfer into the glovebag . a stock solution of fecl2 was prepared inside the glovebag by dissolving solid fecl2 in degassed water . e - febmb1(fe ) was purified using a known protocol , as reported previously . a similar protocol was employed for e - febmb1(fe ) purification except that in this case fe - labeled heme ( frontier scientific ) was used during the protein refolding step . the identity of each of the purified proteins was confirmed by electrospray ionization mass spectrometry . the rz of each pure protein was > 4 in a 100 mm potassium phosphate ph 7 buffer . molar extinction coefficients 406 = 175 mm1 cm1 for met e - febmb1(fe ) , 433 = 143 mm1 cm1 for deoxy e - febmb1(fe ) , and 427 = 136.2 mm1 cm1 for e - febmb1(zn ) were used to determine the concentrations of the corresponding proteins . a total of 300 ml of deionized water and 1 ml of 9.14% methanolic hcl ( 285.6 l of 32% hcl + 714.4 l of methanol ) were degassed and transferred into the glovebag . a total of 25 mg of fe metal ( 0.44 mmol ; cambridge isotope lab ) was taken into a small dry nmr tube and transferred into the glovebag . the degassed water was transferred into a small water bath and heated to 60 c using a hot plate equipped with a stir bar . the nmr tube containing fe was immersed into the water bath , and 350 l of 9.14% methanolic hcl ( 0.88 mmol ) was added to the tube .
the reaction was allowed to proceed for 34 h until the gas evolution ceased . the schlenk flask was removed from the bag , immersed in a dry ice / ethanol slush bath , and slowly opened to vacuum in a schlenk line . the flask was then slowly warmed to 100 c using a water bath while under vacuum . after the solvent evaporated and the solid turned from green to white , the water bath was replaced with an oil bath and heated to 160 c , allowing the residual methanol to evaporate . the product was cooled to room temperature slowly , purged with argon , sealed , and weighed . nrvs data were collected at the advanced photon source on beamline 3id - d , as described in detail elsewhere . briefly , the x - ray energy was scanned in the vicinity of the fe nuclear resonance at 14.4125 kev in steps of 0.25 mev . data were recorded on frozen solutions at 510 mm protein concentration at temperatures of 6080 k , and each measurement was the average of 1535 scans . a comparison of early and late scans confirmed the absence of spectroscopic changes during x - ray exposure . met e - febmb1(fe ) was reduced inside the glovebag with excess dithionite , passed through a small hand - packed size - exclusion column ( sephadex g25 ) preequilibrated with a 50 mm bis - tris ph 7.3 buffer to remove excess dithionite , and eluted with the same buffer . the eluted protein was then concentrated to 1 mm ( 433 = 143 mm1 cm1 ) , and the nonheme site was reconstituted with 1.0 equiv of either fecl2 or zncl2 ( prepared freshly in the inert - atmosphere bag ) added in aliquots of 0.25 equiv with 15 min between each addition . e - febmb1(zn ) was transferred into the glovebag after degassing and concentrated to 1 mm ( 427 = 136.2 mm1 cm1 ) , and the feb site was reconstituted with fe without reducing the protein because znppix is redox - inactive . reconstitution of the feb site with fe or zn , as desired , was confirmed by checking the uv vis spectrum after metal addition : the soret peak shifted from 427 nm in e - febmb1(zn ) and 433 nm in e - febmb1(fe ) in the absence of the nonheme metal to 429 nm in feii - febmb1(zn ) and to 434 nm in fe - febmb1(fe ) or fe - febmb1(fe ) in the reconstituted proteins , respectively . when applicable , oxidation of the heme iron and nonheme iron was achieved after reconstitution by the addition of excess ferricyanide and then passage through a small size - exclusion column . the feb - reconstituted proteins were then concentrated to 1015 mm before loading 15 l of each sample into the well of a high - density polyethylene block sample holder inside the glovebag ; the samples were transferred outside and frozen immediately , after which the other components ( sapphire window , copper block , brass screws ) were assembled . nitrosyl derivatives were prepared using the following protocol . for e - febmb1(fe ) , 1 equiv of no was added to the fe or zn reconstituted proteins present at 1 mm concentration . at each step , 0.25 equiv of no was added to the reconstituted proteins in the form of dea - nonoate , allowing enough time for no release between each addition ( t1/2 = 16 min at ph 7.3 ) . no binding to the proteins was confirmed by measuring uv vis spectra of the no - bound samples . similarly , for fe - reconstituted fe - febmb1(zn ) , the protein was kept at 1 mm concentration before no addition . excess no was added , in the form of dea - nonoate as described above , to fe - febmb1(zn ) to saturate the feb site with no .
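the pacing of those 0.25 equiv additions follows from simple first - order donor decomposition ; the sketch below ( assuming release is governed by the quoted t1/2 of 16 min at ph 7.3 and ignoring the exact no : dea - nonoate stoichiometry ) estimates how long to wait after each addition for a given fraction of the no payload to be released .

```python
import math

T_HALF_MIN = 16.0                   # dea-nonoate half-life at ph 7.3 (from the text)
k = math.log(2.0) / T_HALF_MIN      # first-order rate constant, 1/min

def fraction_released(t_min):
    """fraction of the no payload released t_min minutes after an addition,
    assuming simple first-order decomposition of the donor."""
    return 1.0 - math.exp(-k * t_min)

for t in (16, 32, 48, 80):
    print(f"{t:3d} min -> {fraction_released(t):.0%} released")
# 16 min -> 50%, 32 min -> 75%, 48 min -> 88%, 80 min -> 97%
```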
after no binding , the samples were further purified by another pd10 column equilibrated in a 50 mm bis - tris ph 7.3 buffer to remove any trace impurities , including the decay product of dea - nonoate . all of the nitrosyl complexes thus prepared were then concentrated to 1015 mm and loaded into the nrvs cells described above . the protocol of adding no to the reconstituted proteins at 1 mm and then concentrating them to higher concentrations has proven to be a successful strategy in our studies , as we have recently reported . an aliquot of each of the concentrated samples was diluted , and its uv vis spectrum was checked inside the glovebag to ensure that no changes in no coordination occurred during the final step of sample preparation . the vtz basis set was used for the iron orbitals and 6 - 31g * for all other atoms . the computational model for the nitrosyl complex of the nonheme site of feno - febmb1(zn ) used the atomic coordinates deposited in the protein data bank under access code 3k9z for his 29 , his 43 , his 64 , and glu 68 ( with -carbon atoms replaced by terminal methyl groups ) , the iron , and a ligated water , and added no as a sixth ligand . energy optimization of the experimentally observed s = /2 state yielded nearly octahedral coordination for the iron , with an fe n o angle varying from 142.8 ( b3lyp ) to 143.2 ( m06l ) . the atomic displacements of the vibrational normal modes were used to calculate the iron vdos as described previously .
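as a concrete illustration of that last step , the sketch below shows one generic way to build an iron vdos from normal - mode output ( frequencies plus cartesian displacement vectors ) : each mode is weighted by the fraction of its mass - weighted displacement carried by the iron and broadened with a gaussian . the function and variable names are hypothetical and the broadening width is arbitrary ; this is not the authors' code , only a sketch of the standard procedure .

```python
import numpy as np

def iron_vdos(freqs_cm, mode_vectors, masses_amu, fe_index,
              grid_cm=None, fwhm_cm=8.0):
    """generic iron vibrational density of states from normal-mode output.

    freqs_cm     : (n_modes,) harmonic frequencies in cm^-1
    mode_vectors : (n_modes, n_atoms, 3) cartesian displacements of each mode
    masses_amu   : (n_atoms,) atomic masses
    fe_index     : index of the iron atom
    """
    freqs = np.asarray(freqs_cm, dtype=float)
    vecs = np.asarray(mode_vectors, dtype=float)
    masses = np.asarray(masses_amu, dtype=float)
    grid = np.arange(0.0, 700.0, 1.0) if grid_cm is None else np.asarray(grid_cm)

    # mass-weight the displacements, then take the iron's share of each mode
    mw = vecs * np.sqrt(masses)[None, :, None]
    e2_fe = (mw[:, fe_index, :] ** 2).sum(axis=1) / (mw ** 2).sum(axis=(1, 2))
    # for a complete mode set, e2_fe sums to ~3 (three iron degrees of freedom)

    # broaden each mode with a normalized gaussian and sum the weighted lines
    sigma = fwhm_cm / 2.355
    lines = np.exp(-0.5 * ((grid[None, :] - freqs[:, None]) / sigma) ** 2)
    lines /= sigma * np.sqrt(2.0 * np.pi)
    return grid, (e2_fe[:, None] * lines).sum(axis=0)
```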
this forum article focuses on recent advances in structural and spectroscopic studies of biosynthetic models of nitric oxide reductases ( nors ) . nors are complex metalloenzymes found in the denitrification pathway of earth s nitrogen cycle where they catalyze the proton - dependent two - electron reduction of nitric oxide ( no ) to nitrous oxide ( n2o ) . while much progress has been made in biochemical and biophysical studies of native nors and their variants , a clear mechanistic understanding of this important metalloenzyme related to its function is still elusive . we report herein uv vis and nuclear resonance vibrational spectroscopy ( nrvs ) studies of mononitrosylated intermediates of the nor reaction of a biosynthetic model . the ability to selectively substitute metals at either heme or nonheme metal sites allows the introduction of independent 57fe probe atoms at either site , as well as allowing the preparation of analogues of stable reaction intermediates by replacing either metal with a redox inactive metal . together with previous structural and spectroscopic results , we summarize insights gained from studying these biosynthetic models toward understanding structural features responsible for the nor activity and its mechanism . the outlook on nor modeling is also discussed , with an emphasis on the design of models capable of catalytic turnovers designed based on close mimics of the secondary coordination sphere of native nors .
Introduction Summary and Outlook Materials and Methods Calculations
PMC3019353
calcium phosphate ceramics have seen extensive clinical application as synthetic bone fillers and graft extenders [ 13 ] . the biocompatibility as well as osteoconductive and osteoinductive properties of these ceramics have been well documented [ 412 ] . bone tissue engineering research has capitalized on these qualities , making porous calcium phosphate ceramics a popular choice of scaffold [ 1316 ] . porous ceramics for medical applications have been manufactured for decades using a variety of traditional methods . conversion of natural structures , such as coral [ 17 , 18 ] , and trabecular bone [ 19 , 20 ] yield porous ceramics with organic architectures that appear very similar to that of the bone that is being replaced . synthetic manufacturing methods such as foaming [ 2123 ] , dual - phase mixing and the slip - casting of polymer foams and particles [ 2527 ] , may also be used to produce porous ceramics . however , conversion and synthetic techniques result in highly complex macroporous structures that are difficult to define quantitatively . despite the complex nature of the porous ceramics produced by conventional means , quite some information is available regarding the influence the porous structure has on osteoconduction [ 2835 ] , bmp induced osteogenesis [ 26 , 3638 ] , and osteoinduction [ 11 , 39 ] . rapid prototyping ( rp ) , also termed free form fabrication , refers to a variety of technologies capable of producing three - dimensional ( 3d ) physical constructs directly from 3d computer aided models . in recent years , rapid prototyping has been proposed for the production of both scaffolds with controlled porous architectures [ 4042 ] as well as porous implants with patient specific geometries [ 4345 ] . several rapid prototyping techniques have been developed to produce ceramic scaffolds for bone tissue engineering research [ 4649 ] . the aim of the current study was to produce porous ceramic scaffolds from different calcium phosphate materials with sufficiently similar macroporous architectures as to be able to reasonably eliminate the macroporous architecture as a confounding variable in future tissue engineering studies . the scaffolds were produced by casting four different calcium phosphate materials into identical molds produced using a rapid prototyping technique . the resulting macroporous structures as well as the chemistry before and after manufacture were evaluated . briefly , the scaffolds specifications called for an interconnecting network of 400 m square cross - section channels oriented along the orthogonal axes and separated from each other and the exterior by 400 m . six , four , and three channels were incorporated in the x , y , and z axis directions , resulting in overall design dimensions of 5.2 3.6 2.8 mm , respectively . 1 . molds , with cavities for the production of six scaffolds each , were designed using the rhinoceros computer aided design software ( robert mcneel & associates , usa ) . the mold model was scaled to account for shrinkage of approximately 20% during thermal processing demonstrated previously by our hydroxyapatite ceramics . this resulted in pre - thermal processing scaffold dimensions of 6.5 4.5 3.5 mm in the x , y , and z axis directions . multiple copies of the mold were produced using a modelmaker ii rapid prototyping system ( solidscape inc . 
fig . 1 schematic of the designed scaffolds including the three orthogonal planes used to define the scaffold surfaces . ceramic scaffolds were manufactured to achieve four conditions through combinations of calcium phosphate ceramic compositions and sintering temperatures as outlined in table 1 . hydroxyapatite powder ( ha , merck , germany ) , beta - tricalcium phosphate powder ( tcp , merck , germany ) and biphasic calcium phosphate powder ( bcp , wt% 85/15 ha / tcp , isotis sa ) were obtained commercially . the ha and tcp raw powders were calcined by heating from ambient to 900c at a rate of 100c / hour and then cooled naturally with no dwell period . aqueous slurries of ha , tcp , and bcp powders were prepared as previously described for the production of cast plates . in brief , the slurry components detailed below were slowly admixed until a homogenous blend was achieved . the ha and tcp slurries consisted of 67.1 wt% calcined powder , 28.6 wt% demineralized water , 2.6 wt% ammonia solution ( 25% , merck ) , and 1.5 wt% deflocculant ( dolapix , aschimmer & schwarz gmbh , germany ) . once a homogeneous blend was obtained , a cmc binder was added ( 0.15 wt% , pomosin bv , the netherlands ) to the ha slurry and the slurry further mixed until homogeneous . the bcp slurry consisted of 56.4 wt% ceramic powder , 37.6 wt% demineralized water , 3.9 wt% ammonia solution , and 2.1 wt% deflocculant . all slurries were stored in covered beakers until their use within the same day .
table 1 scaffold dimensions , shrinkage , volume , weight , and apparent porosity of scaffolds compared to the solids in slurry and sintering temperature during manufacturing :
ha h : solids in slurry 67.1 wt% ; sintering temp . 1250 c ; exterior dimensions ( mm sd ) x : 5.05 0.05 , y : 3.48 0.03 , z : 2.73 0.03 ; shrinkage from as molded x : 22.35% , y : 22.68% ; volume 47.92 0.87 mm3 ; weight 76.16 2.89 mg ; apparent porosity 49.65 1.76% .
ha l : solids in slurry 67.1 wt% ; sintering temp . 1150 c ; exterior dimensions x : 6.14 0.05 , y : 4.23 0.13 , z : 3.26 0.11 ; shrinkage x : 5.48% , y : 6.04% ; volume 84.63 3.17 mm3 ; weight 73.82 5.40 mg ; apparent porosity 72.39 1.27% .
bcp : solids in slurry 56.4 wt% ; sintering temp . 1150 c ; exterior dimensions x : 5.45 0.05 , y : 3.73 0.06 , z : 2.90 0.06 ; shrinkage x : 16.21% , y : 7.86% ; volume 58.96 1.38 mm3 ; weight 48.86 2.33 mg ; apparent porosity 73.72 1.03% .
tcp : solids in slurry 67.1 wt% ; sintering temp . 1150 c ; exterior dimensions x : 6.05 0.04 , y : 4.15 0.13 , z : 3.21 0.09 ; shrinkage x : 6.88% , y : 7.86% ; volume 80.53 3.05 mm3 ; weight 69.49 4.54 mg ; apparent porosity 72.54 1.10% .
the molds were filled using a simple vacuum device . filter units ( france ) were divided in half and the filter paper removed to expose the perforated interior surface . the open face of a mold was carefully placed against the interior surface of a filter half and secured by circumferentially wrapping with wax laboratory film . the mold / filter constructs were attached to 50 ml syringes and flushed with demineralized water . during casting , the beakers containing slurry were placed on a porex vibrating table ( renfert , germany ) . this assisted in mold filling by imparting shear energy and thus lowering the viscosity of the pseudoplastic ( shear thinning ) slurries . the molds were filled by submerging the open face of each mold in slurry and then drawing vacuum pressure using the syringe . the molds were then placed on a sheet of wax laboratory film and the syringes and filter halves removed . the molds were allowed to air dry overnight at room temperature and were then further dried for 24 h at 50c in air .
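the 20% shrinkage compensation used to scale the mold model ( in the scaffold design step above ) amounts to dividing each design dimension by one minus the expected linear shrinkage ; the short sketch below reproduces the pre - sintering dimensions quoted earlier , with the shrinkage value treated as an input assumption .

```python
def mold_dimensions(design_mm, linear_shrinkage=0.20):
    """scale design dimensions up so that, after roughly isotropic linear
    shrinkage during debinding and sintering, the part returns to the
    intended design size."""
    return tuple(round(d / (1.0 - linear_shrinkage), 2) for d in design_mm)

# scaffold design dimensions (x, y, z) in mm
print(mold_dimensions((5.2, 3.6, 2.8)))   # -> (6.5, 4.5, 3.5)
```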
excess slurry from each ceramic composition was processed identically to the molded ceramics to serve as controls when examining material chemistry and to further examine the previously observed influence of the rapid prototyping wax on the material composition . debinding and sintering of the ceramics were performed in two steps in a high temperature furnace ( nabertherm 1400 , germany ) . debinding of all ceramics was performed by heating at a rate of 0.5c / minute to 400c and then cooling naturally with no dwell period . the ceramics were then sintered using a 600 min heating phase with a 480 min dwell period at the final sintering temperature followed by natural cooling . one set of ha scaffolds was sintered at 1250c , designated ha h , while a sintering temperature of 1150c was used for a second set of ha scaffolds , designated ha l , as well as for all of the tcp and bcp scaffolds . excess ceramic , occasionally present on the scaffold faces corresponding to the open sides of the molds , was removed using a rotary polisher ( labopol-5 , struers , denmark ) with 1200 grit waterproof silicon carbide paper ( struers ) . the ceramics were cleaned by ultrasound for 15 min each in acetone , 100% ethanol and deionized water , and then dried in air at 50c . the exterior scaffold dimensions were measured using a digital caliper ( cd-15c , mitutoyo ltd . , uk ) and used to calculate the shrinkage resulting from the combined debinding and sintering processes . scanning electron microscopy ( sem , xl 30 esem - feg , philips , the netherlands ) was used to examine the macro - architecture and surface micro - structure of the scaffolds . the dimensions of the macroporosity were measured in each of the orthogonal planes ( fig . 1 ) . the apparent porosity of the scaffolds was determined by comparing the apparent density of each scaffold ( dry weight / measured volume ) and the theoretical density of ha ( 3.156 g / cm3 ) , tcp ( 3.14 g / cm3 ) , and bcp ( 85% ha , 15% tcp ) . the chemistry of raw ceramic powder , calcined ceramic powder , non - molded sintered ceramic and molded scaffolds were evaluated by x - ray diffraction ( xrd , miniflex , rigaku , japan ) . finally , the potential contamination of the ceramics by residues from the wax mold material was investigated by performing energy - dispersive x - ray spectroscopy ( edx , xl 30 esem - feg , philips , the netherlands ) on the surface of cast and non - cast ( not exposed to wax mold material ) ceramic specimens .
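the apparent porosity determination described just above is a one - line calculation ; the sketch below uses the densities quoted in the text and treats the bcp theoretical density as a simple 85/15 weighted mix of the ha and tcp values ( an assumption , though one that reproduces the tabulated bcp porosity ) .

```python
DENSITY = {"ha": 3.156, "tcp": 3.14}                            # g/cm3, from the text
DENSITY["bcp"] = 0.85 * DENSITY["ha"] + 0.15 * DENSITY["tcp"]   # assumed 85/15 mix

def apparent_porosity(dry_weight_mg, volume_mm3, material):
    """porosity in % from dry weight, caliper-measured volume and theoretical density."""
    apparent_density = (dry_weight_mg / 1000.0) / (volume_mm3 / 1000.0)   # g/cm3
    return 100.0 * (1.0 - apparent_density / DENSITY[material])

print(round(apparent_porosity(76.16, 47.92, "ha"), 1))   # ha h scaffolds -> ~49.6
print(round(apparent_porosity(48.86, 58.96, "bcp"), 1))  # bcp scaffolds  -> ~73.7
```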
the manufacturing process resulted in scaffolds with remarkably similar structural appearances ( fig . 2 ) . scaffold dimensions , shrinkage , volume , weight and apparent porosity values are summarized in table 1 . the bcp scaffolds also demonstrated considerable shrinkage but maintained a high apparent porosity similar to the low sintering temperature ha and tcp scaffolds . the low sintering temperature ha and tcp scaffolds exhibited the lowest shrinkage . shrinkage in the z - direction and volumetric shrinkage were not calculated since the respective surfaces were manually polished to remove excess ceramic and therefore do not represent the as cast properties . in order to evaluate whether the various treatments influenced the ratio of macroporosity to total porosity , computer models of the porous scaffolds were created using the measured exterior and macropore dimensions in tables 1 and 2 , respectively .
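the kind of computer model referred to in the preceding sentence can be illustrated with a simple voxel calculation on the as - designed geometry ( 400 m channels separated by 400 m walls , with 6 x 4 x 3 channel positions ) ; the sketch below is only an illustration of the idea , since the authors' models used the measured dimensions in tables 1 and 2 , so the numbers differ slightly .

```python
import numpy as np

CELL_MM = 0.4   # channel width and wall thickness in the design

def designed_macroporosity(n_channels=(6, 4, 3)):
    """voxel model of the designed channel network: the scaffold is a grid of
    0.4 mm cells with channels on the odd-index rows along each axis; a cell
    is pore space when it lies inside a channel running along at least one
    axis, i.e. when at least two of its three indices are odd."""
    shape = tuple(2 * n + 1 for n in n_channels)   # 13 x 9 x 7 cells = 5.2 x 3.6 x 2.8 mm
    ix, iy, iz = np.indices(shape)
    pore = ((ix % 2) + (iy % 2) + (iz % 2)) >= 2
    return pore.mean()

print(f"designed macroporosity ~ {100 * designed_macroporosity():.1f} %")   # ~41.8 %
```

the roughly 42% obtained for the ideal geometry is close to the roughly 40% macroporosity listed in table 3 for the low - shrinkage scaffolds , as expected .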
table 3 shows the volumes approximated by the computer models for the various treatments and compares the resulting macroporosity to the measured apparent porosity . fig . 2 the four ceramic compositions , all in 25 well plates . note the similarity of the scaffold structures and the differences in the scaffold colors .
table 2 pore dimensions by orthogonal plane ( fig . 1 ) , in m ( mean sd ) :
ha h x y plane x : 286 15 , y : 280 16 ; x z plane x : 353 28 , z : 339 17 ; y z plane y : 394 24 , z : 376 30 .
ha l x y plane x : 414 44 , y : 416 34 ; x z plane x : 470 37 , z : 496 21 ; y z plane y : 484 29 , z : 486 27 .
bcp x y plane x : 366 24 , y : 377 18 ; x z plane x : 444 47 , z : 433 20 ; y z plane y : 432 37 , z : 414 42 .
tcp x y plane x : 405 43 , y : 408 33 ; x z plane x : 460 36 , z : 486 21 ; y z plane y : 474 29 , z : 476 26 .
table 3 scaffold volumes and macroporosity calculated from computer models compared to the measured total apparent porosity and other porosity ( difference between macro and apparent porosity ) :
ha h total volume 47.98 mm3 , material volume 31.78 mm3 , pore volume 16.19 mm3 , macroporosity 33.75% , apparent porosity 49.65% , other porosity 15.90% .
ha l total volume 84.67 mm3 , material volume 50.19 mm3 , pore volume 34.48 mm3 , macroporosity 40.72% , apparent porosity 72.39% , other porosity 31.67% .
bcp total volume 58.95 mm3 , material volume 34.82 mm3 , pore volume 24.14 mm3 , macroporosity 40.94% , apparent porosity 73.72% , other porosity 32.78% .
tcp total volume 80.60 mm3 , material volume 48.08 mm3 , pore volume 32.51 mm3 , macroporosity 40.34% , apparent porosity 72.54% , other porosity 32.20% .
sem images of the resulting scaffolds are shown in fig . 3 . these scaffolds are discussed in the following text using the axes and orthogonal planes depicted in fig . 1 . a distinctive texture of parallel ridges and valleys was observed in sem micrographs on all vertical scaffold surfaces , i.e. , surfaces parallel to the x z and y z planes . this texture is an impression of the rapid prototyped mold and a consequence of the layer - by - layer manufacturing of the mold . the cross - sectional geometry of the channels was dependent upon the orientation of the channel . channels running in the x- and y - directions were square in cross section with textured vertical surfaces and smooth horizontal surfaces . fig . 3 sem micrographs of the four scaffold materials . rows top to bottom ha h , ha l , bcp , and tcp . first column perspective view of scaffolds at 50 magnification ( bar 50 m ) . second column scaffold structures at 100 magnification ( bar 200 m ) . note the regular surface texture on the scaffolds . fourth column scaffold surfaces at 1000 magnification ( bar 20 m ) . the surface microporosity of the scaffolds , as observed by sem , varied with material composition and sintering temperature ( fig . 3 ) . the bcp scaffolds exhibited a spectrum of surface microporosity features from approximately 1 to 10 m in size . the surface features of the low sintering temperature ha scaffolds were similar to the bcp with perhaps somewhat less and smaller surface microporosity , approximately 0.55 m . the tcp material , in contrast to the other materials sintered at low temperatures , appeared very similar to the high sintering temperature ha with very little surface microporosity . the xrd patterns for the different ceramic chemistries , shown in fig . 4 , were generally as expected .
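the derived columns in table 3 above follow from simple ratios ; a short check using the tabulated values reproduces the macroporosity and the other porosity figures .

```python
# (total volume mm3, material volume mm3, pore volume mm3, apparent porosity %)
TABLE3 = {
    "ha h": (47.98, 31.78, 16.19, 49.65),
    "ha l": (84.67, 50.19, 34.48, 72.39),
    "bcp":  (58.95, 34.82, 24.14, 73.72),
    "tcp":  (80.60, 48.08, 32.51, 72.54),
}

for name, (total, material, pore, apparent) in TABLE3.items():
    macro = 100.0 * pore / total        # macroporosity of the computer model
    other = apparent - macro            # porosity not explained by the channels
    print(f"{name}: macro {macro:.2f} %, other {other:.2f} %")
# ha h: macro 33.74 %, other 15.91 %   (table 3 lists 33.75 and 15.90)
```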
figure 4a shows the patterns for the ha ceramic raw powder , calcined powder , non - molded material sintered at 1150c , scaffolds sintered at 1150c , non - molded material sintered at 1250c , and scaffolds sintered at 1250c . several peaks associated with tcp formation were observed in the patterns for the cast ha scaffold materials at both the 1150 and 1250c sintering temperatures ( vertical lines in fig . 4a ) . the bcp ceramics also demonstrated these tcp peaks in the cast scaffolds ( fig . 4b , vertical lines ) . the tcp ceramics exhibited changes , relative to the raw powder , that were consistent with the calcination and sintering process temperatures ( fig . 4c ) . the xrd patterns for the four scaffold conditions are shown in fig . 5 for clarity . edx of the surfaces of both cast and non - cast ceramic specimens showed identical spectra consistent with the calcium phosphate materials . fig . 4 xrd patterns of a ha h ( 1250 ) and ha l ( 1150 ) , b bcp , and c tcp . shown are xrd patterns of the raw powder , calcined powder and molded ceramics ( scaffolds ) . xrd patterns of non - molded ceramics are also shown in ( a ) for the ha h ( 1250 ) and ha l ( 1150 ) materials . vertical dotted lines indicate additional peaks associated with beta - tcp formation that are only present in the molded specimens ( scaffolds ) . fig . 5 xrd patterns of the four scaffold materials . vertical dotted lines indicate beta - tcp peaks which form in the ha and bcp materials as a result of the molding process . the present study demonstrates the application of computer aided design and rapid prototyping technologies for the production of ceramic scaffolds from different chemistries but with defined , virtually identical , macro - architectures . other than producing macroporosities with pore dimensions in the range suggested in the literature for osteoconduction , i.e. , between 50 and 500 m [ 4 , 28 , 30 , 31 , 34 ] , we did not attempt to produce optimal or ideal porous structures . the purpose of this study was to manufacture porous scaffolds in which the macroporous architecture was designed and sufficiently similar to be able to reasonably exclude the macroporous architecture as a confounding variable in future research studies . the material chemistries and thermal processing methods employed in this study were chosen to provide continuity with materials used in past and ongoing research [ 4954 ] . although the visual appearance of the scaffolds was similar with regard to structure , there were differences in shrinkage and therefore in the macroporous dimensions . as expected from our previous work , a sintering temperature of 1250c resulted in a shrinkage of just over 22% for the ha material compared to approximately 6% shrinkage for ha and tcp sintered at 1150c .
the relatively large shrinkage of 1617% for the bcp scaffolds , also sintered at 1150c , can almost completely be accounted for by the lower solids loading of the bcp slurry ( 56.4 wt% ) compared to the ha and tcp slurries ( 67.1 wt% ) . the lower solids loading was necessary to achieve appropriate rheological properties for the casting of scaffolds from slurries of the non - calcined bcp powder . interestingly , the apparent porosity of the bcp scaffolds was very similar to that of the tcp and low sintering temperature ha scaffolds despite the much higher shrinkage ( table 2 ) . comparing the porosity resulting from the measured macroporous structure to the total apparent porosity ( table 3 ) reveals that a much greater proportion of the apparent porosity of the high sintering temperature ha is likely due to the macroporosity compared to the lower sintering temperature materials . the bcp , tcp and low sintering temperature ha all had similar proportions of macroporosity despite the much higher shrinkage of the bcp material . the texture exhibited on the vertical surfaces of the scaffold ( fig . 1 ) , as well as the rounded nature of the macropores in the z - direction , are a consequence of the rapid prototyping technique used to manufacture the molds . this technique jets molten droplets of wax material , which flatten and spread when they strike the surface , to build each layer of the molds . as molds are built up vertically layer by layer , this results in a texture on the vertical surfaces ( x z and y z planes ) which is subsequently cast into the ceramic . the rounded corners of channels running in the z - direction ( cross - sections parallel to the x y plane ) result from the coalescing or pooling of adjacent droplets prior to solidification , resulting in rounding of both inside and outside corners within the printed layers . these rounded mold corners are then cast into the resulting ceramic scaffolds and observed in channels running parallel to the z - direction . xrd analysis of the ha and bcp scaffolds indicated that a tcp phase had been introduced . since the xrd patterns of the non - molded specimens did not show the tcp phase and the molded and non - molded materials were treated identically with the exception of the molding process , it is likely that the presence of the tcp phase after molding results from the exposure of the ha and bcp materials to the wax mold material itself , even though elemental analysis of cast and non - cast specimens demonstrated that there was no direct contamination of the ceramics by the mold material . the mechanism for this is not clear but is consistent with our previous findings for ha materials . in conclusion , we have demonstrated a rapid prototyping method for fabricating ceramic scaffolds with virtually identical , 3-dimensional , macroporous architectures from different calcium phosphate ceramics . scaffolds produced by this method will not only enhance research aimed at optimizing macroporous architectures and material compositions but will also improve many other aspects of tissue engineering research by eliminating differences in macroporous structure as a confounding variable .
calcium phosphate ceramics , commonly applied as bone graft substitutes , are a natural choice of scaffolding material for bone tissue engineering . evidence shows that the chemical composition , macroporosity and microporosity of these ceramics influences their behavior as bone graft substitutes and bone tissue engineering scaffolds but little has been done to optimize these parameters . one method of optimization is to place focus on a particular parameter by normalizing the influence , as much as possible , of confounding parameters . this is difficult to accomplish with traditional fabrication techniques . in this study we describe a design based rapid prototyping method of manufacturing scaffolds with virtually identical macroporous architectures from different calcium phosphate ceramic compositions . beta - tricalcium phosphate , hydroxyapatite ( at two sintering temperatures ) and biphasic calcium phosphate scaffolds were manufactured . the macro- and micro - architectures of the scaffolds were characterized as well as the influence of the manufacturing method on the chemistries of the calcium phosphate compositions . the structural characteristics of the resulting scaffolds were remarkably similar . the manufacturing process had little influence on the composition of the materials except for the consistent but small addition of , or increase in , a beta - tricalcium phosphate phase . among other applications , scaffolds produced by the method described provide a means of examining the influence of different calcium phosphate compositions while confidently excluding the influence of the macroporous structure of the scaffolds .
Introduction Materials and methods Scaffold design and mold fabrication Ceramic slurries Scaffold fabrication Scaffold characterization Results Discussion and conclusions
PMC3321450
skeletal muscle presents unique features that allow it to respond to several exogenous stimuli . this characteristic is named plasticity . exercise and nutrition are examples of such stimuli that may promote adaptive responses in skeletal muscle in terms of structure and function [ 24 ] . for example , there are some reports describing that mechanical stimuli , particularly resistance exercise , may induce histological changes such as fiber type transition and profile and increase in cross - sectional area , and alterations in muscle function [ 6 , 7 ] . branched - chain amino acids ( bcaa ) , especially leucine , are also well - known nutrients that may influence the adaptive response of skeletal muscle . leucine supplementation has been described as a potential nonpharmacological tool able to stimulate both muscle anabolism and decrease catabolism [ 8 , 9 ] and to modulate glucose homeostasis . furthermore , leucine can act synergistically with exercise to improve the efficiency and effectiveness of these adaptive responses . currently , there are some cellular pathways that partially explain why bcaa supplementation may promote such responses in skeletal muscle . most of these consistent evidences were observed on incubated cells , which have contributed to elucidate important mechanisms regarding amino acids modulation of skeletal muscle protein turnover . however , we have to consider that such conditions are considerably different from the human body . although experimental animals ( rodents ) represent an in vivo model , it may also present distinct results when compared to humans . recently , our group observed that due to differences in muscle metabolism , rodents may respond differently from humans to amino acids supplementation . although the same signaling pathways are found in rodent and human cells the response of these models to amino acids supplementation present individualities that may compromise the extrapolation of results . the mammalian target of rapamycin ( mtor ) pathway is a signal - dependent cascade that responds to a variety of stimuli ranging from growth factors and mitogens to amino acid deprivation and hypoxic stress . it has been well characterized that mtor pathway has a pivotal role in modulating protein translation initiation through eukaryotic initiation factors ( eifs ) and kinases , which in turn alter the phosphorylation status and activity of several proteins in this cellular pathway . amino acids supplementation is involved in signaling to upstream proteins , responsible for sensing and triggering ( mtor , human vacuolar protein sorting 34 ( hvps4 ) , calcium - related proteins ) , as well as downstream proteins , responsible for ribosome initiation complex formation ( eif4e , eif4e - binding protein 1 ( 4e - bp1 ) , eif4f complex ) [ 15 , 16 ] . additionally , it has been shown that bcaa can also interact with the proteolytic machinery ( ubiquitin proteasome system ups ) in order to attenuate muscle wasting . this response may partially involve the protein kinase akt / pkb , which also participates in glucose homeostasis and muscle hypertrophy . regarding the proteolytic machinery , akt / pkb is known to phosphorylate the transcription factor forkhead box class - o ( foxo ) , which translates the two majority genes ( or e3 ligases ) of muscle atrophy : atrogin-1 and muscle ring - finger protein-1 ( murf-1 ) [ 12 , 18 , 19 ] , to phosphorylate mtor and stimulate protein synthesis , and to modulate glucose transporter 4 ( glut4 ) to the sarcolemma . 
in view of this , these cellular pathways ( synthesis and degradation ) are not independent and may be controlled by amino acids through indirect genomic and nongenomic actions . although much attention has been given to the role of amino acids in these pathways , the responsiveness of skeletal muscle to these nutrients may be limited . for instance , amino acid infusion stimulates muscle protein accretion only until it reaches a plateau . this condition , known as anabolic resistance to amino acids ( the inability of skeletal muscle to maintain or increase its protein mass in response to appropriate nutritional stimulation ) , occurs because skeletal muscle protein synthesis becomes refractory to hyperaminoacidemia . thus , it appears that the optimal action of amino acids on skeletal muscle growth occurs in combination with other exogenous stimuli ( e.g. , exercise ) or in situations characterized by disruption of organic homeostasis ( e.g. , cancer , diabetes , muscle disuse , sepsis , chronic heart failure ) . in this context , the inflammatory status has a considerable role and the innate immune system ( responsible for cytokine and chemokine production ) should be carefully considered . the focus of this paper is to discuss the possible metabolic and cellular roles of bcaa supplementation on the inflammatory status of skeletal muscle and the effects on protein synthesis and degradation . it is possible that , in some conditions , the administration of these amino acids could exert an anti - inflammatory role or indirectly modulate the inflammatory status and balance of the system and/or the muscle cell in order to favor the biological response and tissue adaptation . the healing of injured muscle is composed of sequential but overlapping phases of injury , inflammation , regeneration , and fibrosis . injury and inflammation predominate during the first few days after injury , followed by regeneration . when there is a severe injury , the muscle does not recover completely and forms fibrotic tissue approximately two weeks after injury ( figure 1 ) . the inflammatory response is an important phase of the natural healing process . during this phase there is a release of several types of cytokines and growth factors that increase the permeability of blood vessels and the chemotaxis of inflammatory cells , such as neutrophils and macrophages . these cells contribute to the degradation of damaged muscle tissue by releasing reactive oxygen species ( ros ) and producing proinflammatory cytokines [ 24-27 ] such as tumor necrosis factor alpha ( tnf-α ) , interleukin-1 ( il-1 ) , and il-6 that regulate the inflammatory process [ 28 , 29 ] . the role of these cells is quite complex and they can promote both injury and repair . a detailed discussion of their action is beyond the scope of this paper and has been reviewed elsewhere . some systemic inflammatory cytokines such as tnf-α and il-1 have direct catabolic effects on skeletal muscle . the cytokine tnf-α plays a key role in the skeletal muscle wasting present in chronic diseases , such as cancer , sepsis , and rheumatoid arthritis , conditions in which a rise in the plasma tnf-α concentration has been described [ 27 , 31 ] . tnf-α impairs muscle protein synthesis [ 32 , 33 ] by destabilizing myogenic differentiation and altering transcriptional activity , and increases muscle loss [ 35 , 36 ] by targeting proteins to the ubiquitin - proteasome - mediated degradation pathway [ 37-39 ] . exposure of myoblasts to tnf-α has also been shown to inhibit their differentiation [ 40 , 41 ] . 
the release of ros induced by tnf-α promotes degradation of the inhibitor of κb ( iκb ) , which allows nuclear factor - kappa b ( nf-κb ) to translocate to the nucleus and to activate the transcription of several κb - dependent genes , such as those encoding proinflammatory cytokines , as well as the breakdown of myod and myogenin ( regulators of the transition from proliferation to differentiation ) in the proteasome . although most research has focused on the muscle wasting effects of tnf-α , under specific conditions this cytokine can also promote muscle protein synthesis and stimulate satellite cell proliferation and differentiation [ 38 , 42 ] . among the factors that mediate the different effects of tnf-α on protein synthesis or degradation are the state of cell differentiation and the concentration of tnf-α . chen et al . have shown that the effects of tnf-α on myogenesis and muscle regeneration are concentration dependent : a low concentration of tnf-α ( 0.05 ng / ml ) promoted the differentiation of cultured myoblasts while higher concentrations ( 0.5 and 5 ng / ml ) inhibited it . furthermore , differences in the expression of the tnf-α receptor on the surface of different cell types may explain the variable effects of this cytokine . in primary myotubes , low doses of tnf-α ( 1 ng / ml ) stimulated maximal protein synthesis , while a much higher dose ( 50 ng / ml ) was required to stimulate maximal protein synthesis in c2c12 myotubes . therefore , the effects of tnf-α depend on the concentration and exposure duration : low concentrations help the repair process , while high and prolonged exposure impairs the regeneration process . it is possible that other factors , such as insulin - like growth factor 1 ( igf-1 ) and inflammatory cytokines , mediate the effects of tnf-α on protein synthesis / degradation , but their roles are still unclear [ 43 , 44 ] . elevated levels of tnf-α have also been implicated in sarcopenia , the age - related loss of muscle mass , strength , and function . sarcopenia is a key factor contributing to loss of functional mobility , frailty , and mortality in the elderly [ 46 , 47 ] . inflammation , which generally increases with age , is a key factor contributing to sarcopenia , and a high level of tnf-α is partly responsible for the decrease in muscle protein synthesis that occurs in the elderly [ 48-51 ] . elevated levels of tnf-α mrna and protein have been found in the skeletal muscle of elderly ( 81 ± 1 years ) compared to young ( 23 ± 1 years ) men and women . the same study also showed that resistance exercise decreased tnf-α expression in the elderly group , suggesting that tnf-α contributes to age - related muscle wasting and that resistance exercise may attenuate this process by suppressing tnf-α expression . it is well known that gram - negative infection ( or the administration of lipopolysaccharides ) causes loss of skeletal muscle protein . the decrease in muscle mass results from increases in the rate of proteolysis and decreased rates of protein synthesis [ 53 , 54 ] . a decrease in mtor activity may explain , at least in part , the impaired muscle protein synthesis . a combination of lps and ifn - gamma has been shown to dramatically downregulate the phosphorylation of mtor and of its substrates s6k1 and 4e - bp1 via an increased expression of inos ( nos2 ) and excessive production of nitric oxide ( no ) . studies in mice have shown that overexpression of il-6 may increase muscle atrophy . however , under certain conditions il-6 may also favor muscle growth . furthermore , al - shanti et al . 
demonstrated that il-6 in combination with tnf-α stimulates the growth of myoblasts . therefore , the role of il-6 in regulating muscle mass appears to be concentration dependent : when overexpressed it may stimulate muscle atrophy , whereas its insufficiency inhibits muscle growth . the transcription factor nf-κb mainly controls the expression of genes involved in the immune response , but it also regulates the expression of genes outside the immune system and is therefore able to influence several aspects of normal and disease physiology [ 59 , 60 ] . the classic form of nf-κb , a heterodimer of the p50 and p65 subunits , is retained in the cytoplasm through interactions with iκb inhibitory proteins . inducing stimuli lead to the phosphorylation of iκb by iκb kinases ( ikk ) and its subsequent degradation , allowing nf-κb to enter the nucleus and regulate gene expression . nf-κb activation causes severe muscle wasting , and nf-κb is a key factor in cytokine - induced loss of skeletal muscle . exercise may activate several signaling cascades and increase the production of ros , which activate nf-κb [ 42 , 64 ] in muscle [ 65 , 66 ] . the exercise - induced increase in nf-κb activity induces acute - phase proteins and also proinflammatory genes that facilitate the regenerative response in damaged tissues . the involvement of nf-κb in muscle damage has been shown in several reports where exhaustive exercise has caused increases in nf-κb binding activity . roberts et al . have also shown that the inflammatory response to exercise is attenuated by chronic training , demonstrating that the activity of nf-κb can be seen as a beneficial mediator of exercise - induced adaptations to cellular stress . a training program can also exert an inhibitory effect on nf-κb dna binding [ 70 , 71 ] . regular physical training leads to several adaptations in the vascular , oxidative , and inflammatory systems , suggesting that transcriptional regulation of the various nitric oxide synthase ( nos ) isoforms by nf-κb plays a key role in training - induced adaptations [ 72 , 73 ] . lima - cabello et al . have demonstrated that the effects of a bout of acute exercise on nf-κb signaling were attenuated by 8 weeks of submaximal eccentric exercise training . also , the levels of nnos , inos , and enos expression and of nitrotyrosine formation decreased when compared to the acute exercise group . unlike that of other amino acids , the most active enzyme system for bcaa transamination is found in skeletal muscle rather than in the liver . the first reaction involved in the catabolism of bcaa is a reversible transamination by the bcat isoenzymes ( branched - chain aminotransferases ) found in both the cytosol and mitochondria , which convert the amino acids into their respective keto acids ( branched - chain keto acids , bcka ) , namely α-ketoisocaproic acid ( kic ) for leucine , α-keto-β-methylvaleric acid ( kmv ) for isoleucine , and α-ketoisovaleric acid ( kiv ) for valine . the bckas formed may undergo oxidative decarboxylation reactions and/or be released into the bloodstream and taken up by different tissues , where they are resynthesized to bcaa or oxidized . it is well known that the amino group from bcaa can be incorporated into α-ketoglutarate ( α-kg ) , producing glutamate through glutamate dehydrogenase ( gdh ) . glutamate can transfer its amino group to oxaloacetate ( oaa ) through glutamate - oxaloacetate aminotransferase ( got ) , producing aspartate , which can be used in the purine nucleotide cycle for the regeneration of adenosine monophosphate ( amp ) from inosinic acid . 
glutamate can also be metabolized by glutamine synthetase , producing glutamine through the atp - dependent incorporation of nh3 ( figure 2 ) . therefore , the amino group released from bcaa could be easily reincorporated into bcka ( reamination ) to produce bcaa or directed to the liver to be oxidized . alanine can also be synthesized from bcaa - derived nitrogen , since glutamate - pyruvate aminotransferase can transfer the amino group from glutamate to pyruvate , producing alanine [ 78 , 79 ] . however , this reaction appears to occur only in situations characterized by energy deficit ( i.e. , fasting ) , since skeletal muscle also presents high concentrations of alanine . the intracellular pool of amino acids can be derived from biosynthesis ( i.e. , nonessential amino acids ) or from transfer across the plasma membrane ( i.e. , essential amino acids ) . transfer across biological membranes can occur through active ( na+ - dependent ) or passive ( na+ - independent ) transport , owing to the ionic nature of amino acids . in some instances the process of transfer involves not only the entry but also the exit of amino acids ( exchange ) . the transport system for glutamine is called system a ( na+ - dependent ) and that for leucine is called system l ( na+ - independent ) , and the two systems are integrated . when the requirement of glutamine is increased ( i.e. , catabolic illness ) , theoretically the intracellular content of glutamine is decreased . the decrement in the intracellular pool of glutamine may impair leucine transport into the cell . on the other hand , the increase in glutamine transport out of the cell may favor the entry of leucine into the cell . if leucine transport is stimulated , the final result of its catabolism could be an increased availability of glutamine to the cell through glutamate . however , there are no studies evaluating the effects of leucine and glutamine supplementation under inflammatory conditions . bcaa can indirectly modulate the inflammatory status of muscle cells through glutamine production , but this reaction appears to occur only in situations characterized by high glutamine consumption and/or decreased glutamate concentrations ( i.e. , catabolic illnesses , cancer , burns , and sepsis ) . ehrensvard et al . first described the importance of glutamine for the survival and proliferation of cells such as those of the kidney , intestine , and liver , specific neurons in the central nervous system ( cns ) , pancreatic cells , and cells of the immune system . it is widely known that cells of the immune system such as lymphocytes , macrophages , and neutrophils use glutamine at high rates , and many functional parameters of immune cells , such as t - cell proliferation , b - lymphocyte differentiation , macrophage phagocytosis , antigen presentation , cytokine production , neutrophil superoxide production , and apoptosis , are enhanced by glutamine . under pathological conditions , which increase the activity of these cells , glutamine is extensively used as a substrate [ 83-85 ] . it has already been demonstrated that the availability of glutamine influences the production of cytokines such as interleukin- ( il- ) 2 in cultured rodent lymphocytes and of il-2 , il-10 , and interferon-γ ( ifn-γ ) in cultured human lymphocytes [ 87 , 88 ] . studies have also demonstrated that glutamine may play an important role in nf-κb signal transduction pathways , contributing to the attenuation of local inflammation [ 89-92 ] . 
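the chain of amino group transfers described above can be summarized schematically as follows ( a simplified sketch based only on the enzymes and metabolites named in the text ; cofactors , protons , and charge balancing are omitted ) :
leucine + α-ketoglutarate <-> α-ketoisocaproate ( kic ) + glutamate ( bcat )
isoleucine + α-ketoglutarate <-> α-keto-β-methylvalerate ( kmv ) + glutamate ( bcat )
valine + α-ketoglutarate <-> α-ketoisovalerate ( kiv ) + glutamate ( bcat )
glutamate + oxaloacetate <-> α-ketoglutarate + aspartate ( got )
glutamate + pyruvate <-> α-ketoglutarate + alanine ( glutamate - pyruvate aminotransferase )
glutamate + nh3 + atp -> glutamine + adp + pi ( glutamine synthetase )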
when nf-κb is retained in the cytoplasm , the iκbs can be phosphorylated at two serine residues by the action of specific protein kinases , such as the iκb kinase complex ( ikk ) , ubiquitinated by ubiquitin ligases , and degraded by the 26s proteasome complex , resulting in the liberation of nf-κb . activated nf-κb then binds to its cognate dna - binding sites , inducing the transcription of genes that regulate the innate and adaptive immune responses ( i.e. , t - cell development , maturation , and proliferation ) [ 93-95 ] . regarding skeletal muscle remodeling , nf-κb acts , like foxo , as a transcription factor of the murf-1 gene , which promotes sarcomeric protein degradation by the ups . several cytokines also have their gene expression modulated by nf-κb ( i.e. , tnf-α and il-1 ) . it has been demonstrated that il-1 presents a significant correlation with skeletal muscle cross - sectional area and , therefore , can be considered an atrophic modulator . nf-κb also promotes the transcription of the inducible isoform of nitric oxide synthase ( inos ) , which leads to insulin resistance through nitrosylation of the insulin receptor ( ir ) . under such conditions , the mtor translation pathway has impaired signal transduction through proteins involved in translation initiation signaling such as the insulin receptor substrates ( irs ) , akt , and 4e - bp1 . counteracting such effects , bcaa ( especially leucine ) have been demonstrated to be a strong nutritional stimulus able to increase skeletal muscle protein synthesis and attenuate protein degradation . for example , hamel et al . demonstrated that leucine presents one of the strongest inhibitory effects upon the ups in muscle cells when compared to the other essential amino acids ( for details about the antiproteolytic effects of leucine , please see zanchi et al . and nicastro et al . ) . furthermore , it has already been demonstrated that bcaa can stimulate the phosphorylation of proteins involved in the mtor pathway such as akt , mtor , 4e - bp1 , eifs , and p70s6k in order to improve the protein turnover of the cell [ 100 , 101 ] . although bcaa present neither kinase nor phosphatase activity , they can directly modulate the protein turnover of the muscle cell in order to counteract the catabolic and anti - anabolic effects of an inflammatory stimulus . additionally , under pathological conditions , bcaa may influence the inflammatory status of the cell through glutamine production . however , this reaction appears to occur only in situations characterized by a high demand for glutamine synthesis . skeletal muscle cells continuously produce reactive oxygen species ( ros ) , which can be generated by various cell organelles and enzymes , such as mitochondria , nad(p)h oxidases , xanthine oxidoreductases , and nitric oxide synthases , whereas their biological activity is opposed by an array of endogenous enzymatic and nonenzymatic antioxidants . normally , ros play important physiological roles in skeletal muscle homeostasis and function [ 104 , 105 ] . however , a disturbance of the well - balanced control of oxidant production and antioxidant activity , known as oxidative stress , is commonly observed during aging and is characteristic of several pathological conditions such as cancer , diabetes , muscle disuse , sepsis , and chronic heart failure . it has been reported that this oxidative stress directs muscle cells into a catabolic state and that chronic exposure leads to muscle wasting . 
oxidative damage may contribute to skeletal muscle dysfunction , and oxidants may stimulate the expression and activity of skeletal muscle protein degradation pathways [ 38 , 109 ] . there is considerable evidence showing that the generation of ros is one mechanistic link between inflammation and skeletal muscle dysfunction and degradation . ros produced by infiltrating immune cells may cause direct injury to muscle tissue or activate catabolic signaling . alternatively , inflammatory cytokines can interact with muscle receptors to initiate catabolic signaling , in which ros are key mediators of the response , acting as second messengers [ 107 , 110 , 111 ] . accordingly , overexpression of tnf-α in transgenic mice and single intraperitoneal doses of this cytokine promote muscle wasting that can be attenuated by antioxidants [ 103 , 108 ] . on the other hand , ros activate transcription factors ( e.g. , nf-κb and ap-1 ) and upregulate the expression of proinflammatory genes such as tnf-α , il-6 , and c - reactive protein , which are involved in the pathogenesis of inflammation [ 113 , 114 ] . although administration of bcaa has been investigated as a tool that could exert an anti - inflammatory role or indirectly modulate the inflammatory status in order to favor the biological response and tissue adaptation , less is known about the relationship between this strategy and the oxidative stress that modulates skeletal muscle structure and function . there are some emerging reports describing that ros modulate the efficiency and effectiveness of the adaptive responses of skeletal muscle induced by some bcaa , especially leucine [ 115 , 116 ] . regarding bcaa supplementation and oxidative stress , an interesting study has shown that this nonpharmacological strategy increases the expression of genes involved in antioxidant defense and reduces ros production in the cardiac and skeletal muscles of middle - aged mice , which was accompanied by preserved skeletal muscle fiber size , enhanced physical endurance , and increased average life span . of interest , bcaa - mediated effects were even more remarkable in middle - aged mice submitted to long - term exercise training ( running 30 to 60 min 5 days / week for 4 weeks ) . these effects were not observed in young animals ( 4-6 months old ) . aging has been described as a condition characterized by anabolic resistance to nutrients , especially amino acids , which impairs muscle protein synthesis and contributes to muscle wasting . such resistance is partially associated with oxidative stress and low - grade inflammation and may be attenuated by chronic anti - inflammatory treatment . it has been demonstrated that older adults who received omega-3 fatty acids for 8 weeks showed increased hyperaminoacidemia - hyperinsulinemia - induced muscle protein synthesis when compared to a control group ( corn oil ) , which was accompanied by greater phosphorylation of mtor and p70s6k . therefore , the anti - inflammatory action of nutrients such as omega-3 may attenuate anabolic resistance in order to favor amino acid - induced muscle protein synthesis . concerning bcaa supplementation , marzani et al . demonstrated that old rats supplemented with leucine and with an antioxidant mixture ( rutin , vitamin e , vitamin a , zinc , and selenium ) showed higher protein synthesis when compared to old control animals and that these effects could be mediated through a reduction in the inflammatory state , which decreased with antioxidant supplementation . 
under inflammatory conditions , such as aging , anabolic resistance occurs mainly because of elevated proinflammatory cytokines . thus , antioxidant supplementation may attenuate anabolic resistance and therefore favor leucine action on skeletal muscle protein turnover . it is well accepted that bcaa catabolic reactions can be easily modulated through alterations in metabolic demands , such as changes in inflammatory status . however , it is unknown whether bcaa can directly modulate the status of proteins involved in inflammatory pathways and whether this effect is reflected in protein turnover . since glutamine is highly consumed by inflammatory cells , it appears to be a mediator between bcaa and inflammation , but this reaction is dependent on the glutamate content and gdh activity in skeletal muscle . future studies should address the effects of bcaa and glutamine , and the activity of their amino acid transporters , under proinflammatory conditions .
skeletal muscle protein turnover is modulated by intracellular signaling pathways involved in protein synthesis , degradation , and inflammation . the proinflammatory status of muscle cells , observed in pathological conditions such as cancer , aging , and sepsis , can directly modulate protein translation initiation and muscle proteolysis , contributing to negative protein turnover . in this context , branched - chain amino acids ( bcaas ) , especially leucine , have been described as a strong nutritional stimulus able to enhance protein translation initiation and attenuate proteolysis . furthermore , under inflammatory conditions , bcaa can be transaminated to glutamate in order to increase glutamine synthesis , which is a substrate highly consumed by inflammatory cells such as macrophages . the present paper describes the role of inflammation on muscle remodeling and the possible metabolic and cellular effects of bcaa supplementation in the modulation of inflammatory status of skeletal muscle and the consequences on protein synthesis and degradation .
1. Introduction 2. The Role of Inflammation in Skeletal Muscle 3. The Possible Role of BCAA Supplementation on Muscle Inflammation 4. A Possible Link between Oxidative Stress and BCAA-Mediated Inflammatory Effects 5. Conclusion and Perspectives
PMC1566481
we studied the effects of weight loss and non - weight - bearing exercise ( swimming ) on blood and organ lead and essential metal concentrations in rats with prior lead exposure . nine - week - old female sprague - dawley rats ( n = 37 ) received lead acetate in their drinking water for 2 weeks , followed by a 4-day latency period without lead exposure . rats were then randomly assigned to one of six treatment groups : weight maintenance with ad libitum feeding , moderate weight loss with 20% food restriction , and substantial weight loss with 40% food restriction , either with or without swimming . blood lead concentrations were measured weekly . the rats were euthanized after a 4-week period of food restriction , and the brain , liver , kidneys , quadriceps muscle , lumbar spinal column bones , and femur were harvested for analysis for lead , calcium , copper , iron , magnesium , and zinc using atomic absorption spectrophotometry . both swimming and nonswimming rats fed restricted diets had consistently higher blood lead concentrations than the ad libitum controls . rats in the substantial weight loss group had higher organ lead concentrations than rats in the weight maintenance group . rats in the moderate weight loss group had intermediate values . there were no significant differences in blood and organ lead concentrations between the swimming and nonswimming groups . organ iron concentrations increased with weight loss , but those of the other metals studied did not . weight loss also increased hematocrits and decreased bone density of the nonswimming rats . the response of lead stores to weight loss was similar to that of iron stores because both were conserved during food restriction in contrast to decreased stores of the other metals studied . it is possible that weight loss , especially rapid weight loss , could result in lead toxicity in people with a history of prior excessive lead exposure .
Images
PMC5154716
a growing body of evidence supports a central effect of carbohydrate ( cho ) on endurance performance ( 4 , 6 , 7 , 12 , 13 , 16 ) . this idea was first postulated when it was discovered that cho ingestion , during activity that is not limited by cho availability or oxidation rate , such as high intensity ( e.g. , > 70% vo2max ) , relatively short duration ( up to 1 h ) exercise , is associated with enhanced performance ( 1 , 10 ) . it was subsequently shown ( 5 ) that the intravenous infusion of glucose during a 1 h cycling time - trial did not improve performance , despite the previous work showing ingestion to improve performance . following this , carter et al . ( 4 ) were the first to provide evidence that cho ( maltodextrin ) mouth rinses improved performance compared to that of a control rinse of water . this led to the suggestion that cho sensing occurs in the mouth , resulting in an ergogenic effect on performance via a central action , possibly by enhancing motor drive or motivation ( or blunting their perturbation ) during fatiguing exercise . a considerable body of research now exists showing that simply rinsing the mouth with a cho - containing solution can have an ergogenic effect on endurance exercise ( 4 , 6 , 7 , 11-13 ) , although not all studies have observed benefits ( 2 , 21 ) . the work of chambers et al . ( 6 ) is particularly important as they have demonstrated that cho sensing in the mouth is associated with activation of reward centers in the brain and that this is independent of sweetness . others ( 8 ) have provided evidence that the presence of a non - sweet carbohydrate ( maltodextrin ) in the mouth may enhance muscle function and facilitate corticomotor output . together , these findings provide mechanistic evidence that cho does have central , non - metabolic , ergogenic effects that can be induced simply by the presence of cho in the mouth , although there is a lack of evidence on the effect of different doses . the cho concentrations used in all of the previous mouth rinse studies are ~ 6% weight / volume ( w / v ) , which seems to be a somewhat arbitrary choice based on the composition of commercially available sports drinks and previous work on cho ingestion . however , as the mechanisms for performance benefit with rinsing are very different from those with ingestion , there could be greater benefit with higher concentrations , but this has not yet been determined . evidence suggests that the mechanisms responsible for the ergogenic effects of cho mouth rinsing are related to cho - sensing in the oral cavity ( 6 ) . however , it is unknown whether these oral receptors are sensitive to the concentration of cho in the solution , and no dose - response studies have been conducted with cho mouth rinsing . in rodents allowed free access to different solutions , it has been demonstrated that , for glucose as well as cho polymer solutions , there is a concentration - dependent effect on affective behavior response . although animals ingested the solutions , knockout of the t1r2 and t1r3 proteins demonstrated that these behaviors were attributable , at least in part , to oral cho receptors ( 20 ) . the same study ( 20 ) showed a dose - response effect , with 9% w / v being the optimal concentration for glucose in wild - type mice . there was little difference , compared to water , for solutions with a concentration of 4.5% and lower , whereas there was a plateau at concentrations above 9% . equivalent evidence is lacking in humans and there are no dose - response studies with mouth rinsing rather than ingestion . however , smeets et al . 
( 17 ) conducted an fmri study ( to measure hypothalamic responses ) with glucose ingestion at a variety of solution concentrations ( 0% , 8.3% and 25% w / v ) and observed significant effects of the cho within minutes of ingestion . since these effects were observed immediately after ingestion ( i.e. , before any absorption or metabolic effects would manifest ) , this does suggest similar non - metabolic effects to those observed by chambers et al . ( 6 ) . in this study , these observed effects were more marked with the higher concentration glucose solution ( 17 ) . taken together , the evidence discussed above provides support for the notion that the optimal cho concentration to induce positive performance effects in humans could also be greater than the typical ~6% used in previous mouth rinse studies . furthermore , no studies have yet determined the effects of cho mouth rinsing on exercise of longer than 1 h in duration . therefore , the aims of this study were 1 ) to determine whether a carbohydrate mouth rinse enhances performance in a 90 minute treadmill performance trial ; and 2 ) to determine whether a higher concentration ( 12% ) has a greater effect than a 6% solution . this study was conducted according to the guidelines laid down in the declaration of helsinki ( 2004 ) . all procedures were approved by aberystwyth university research ethics committee for research involving human participants . written informed consent was obtained from all subjects . subjects also completed a pre - exercise screening questionnaire ( physical activity readiness questionnaire ) before participating in each test . seven male university students ( age 21 ± 1 years ; body mass 78 ± 7 kg ; stature 1.81 ± 0.12 m ; means ± standard deviation ) participated in this study . all subjects were physically active and represented the university in a competitive sports team ( e.g. , football [ soccer ] , rugby , field hockey ) but were not specifically endurance trained . all subjects took part in a familiarization trial and three main ( experimental ) trials : placebo ( pla ; a 0% cho - electrolyte , cho - e , rinse solution ) , 6% cho - e rinse solution , and 12% cho - e rinse solution . all trials took place at the same time of day ( start time within 1 hour ) for each subject , and were separated by at least five days . participants first completed the familiarization trial , which was identical to the main trials except that plain water was used as the mouth rinse solution . the main trials were conducted in randomized order and the solutions were administered double - blind . in addition , subjects were required to keep a record of food and activity during the 24 hours before the first main trial and to replicate this before any subsequent trials . all trials were conducted on a motorized treadmill ( pps 55med , woodway gmbh , weil am rhein , germany ) . subjects were first asked to perform a 5 minute warm up at 6 km / h before beginning the performance trial . the test began with a rolling start ( at a treadmill speed of 8 km / h ) and subjects were allowed to freely control the treadmill speed using the manual controls located on the handrail . they were instructed to cover as much distance as possible during the 90 min test . subjects were not able to see the treadmill speed or distance covered , or heart rate , on the display panel but they were allowed to see the clock showing time elapsed . no encouragement was provided to the subjects during any of the tests . 
carbohydrate - containing solutions were made with a commercially available cho - electrolyte product ( h5 ltd . , derby , uk ) supplied as a powder . the powder was mixed with a concentrated , artificially sweetened ( saccharin ) cordial drink and plain water ( 1:3 ratio of concentrate to water ) to give final cho concentrations of 6% and 12% w / v , with approximately 418 mg and 836 mg of sodium per liter in the 6% and 12% solutions , respectively . the cho comprised maltodextrin ( 95% ) , dextrose ( 3% ) , and maltose ( 2% ) ; the pla solution did not have any of this powder added but contained additional sweetener ( saccharin ) to help with blinding , in accordance with the methods of chambers et al . ( 6 ) . for the rinse procedure , subjects were required to rinse the solution in their mouth for 5 seconds before expectorating it back into the cup . the cups were marked with a graduation at 25 ml so that the volume of expectorate could be inspected to ensure that none of the liquid was swallowed . the first mouth rinse procedure occurred after the warm up , with further rinses at 15 , 30 , and 45 minutes of the performance trial . room temperature and atmospheric pressure were monitored and recorded , prior to each trial , with a temperature probe ( rotronic hygromer pt100 , grant instruments , cambridge , uk ) connected to an electronic data logger ( squirrel sq2020 , grant instruments , cambridge , uk ) and a mercury column direct reading barometer ( cranlea , birmingham , uk ) , respectively . the distance and speed were recorded every 10 minutes during each trial and total distance was recorded at the completion of the 90-minute period . the distance covered at each 10 min split was used to calculate average speed over each segment . heart rate was measured using a telemetric device ( polar s610i , kempele , finland ) . rating of perceived exertion ( rpe ) and subjective ratings of feeling and arousal were expressed using the borg scale ( 3 ) , the feeling scale ( 9 ) , and the arousal scale ( 19 ) , respectively . these measures were recorded after the warm up and every 15 minutes during each trial . expired respiratory gas was collected at 15 , 30 and 45 minutes during the trials using 150 l douglas bags . oxygen and carbon dioxide concentrations were determined using paramagnetic oxygen and infrared carbon dioxide analyzers ( servomex 4100 , crowborough , uk ) and gas volume was measured with a dry gas meter ( harvard apparatus ltd . , edenbridge , uk ) in order to determine oxygen consumption and carbon dioxide output . capillary blood samples were obtained from a fingertip pre- ( 5 min before warm - up ) and post - exercise ( immediately on completion of the 90 min run ) using an automatic lancet device ( soft clix pro , accu - check , mannheim , germany ) and collected into lithium - heparin treated microtubes ( microvette cb300 , sarstedt , nümbrecht , germany ) for the determination of blood glucose and lactate concentrations using an automated analyzer ( ysi 2300 stat plus , yellow springs , oh , usa ) . data analyses were carried out using the software package spss ( v17.00 ; spss inc . ) . all data were normally distributed as determined by z - scores for skewness and kurtosis ( within ± 2 ) , with the exception of the feeling scale data . one - way repeated measures anova tests ( with holm - bonferroni corrected post hoc paired t - tests , where necessary ) were used to compare performance ( distance covered ) and ambient conditions between trials . 
for normally distributed data , 2-way ( trial × time ) repeated measures anova was used to compare variables with multiple measurement points during the trials ( distance , speed , heart rate , rpe , arousal , blood [ glucose ] , blood [ lactate ] , and respiratory variables ) between trials . if the sphericity assumption was violated , the greenhouse - geisser correction was applied to the anova p values ( indicated by the subscript gh after p values in the text ) ; otherwise no correction was applied . for the feeling scale data , overall comparisons were made between trials and within trials ( across time ) with the friedman test . also , the discrepancy between the first and last time points was compared between trials as an equivalent of the trial × time interaction comparison in a 2-way anova . 
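as an aside , the general shape of this analysis can be illustrated outside spss ; the sketch below is a minimal python illustration ( using hypothetical distance values and the scipy library , not the authors' code or data ) of a friedman test across the three trials followed by holm - bonferroni corrected post hoc paired t - tests .

import numpy as np
from scipy import stats

# distance covered ( km ) by 7 subjects in each trial -- hypothetical numbers for illustration only
pla   = np.array([13.1, 14.2, 12.9, 15.6, 13.5, 14.8, 13.2])
cho6  = np.array([13.8, 14.9, 13.5, 16.2, 14.3, 15.4, 14.1])
cho12 = np.array([14.0, 15.1, 13.9, 16.4, 14.5, 15.6, 14.3])

# friedman test: overall nonparametric comparison across the three related samples
chi2, p_friedman = stats.friedmanchisquare(pla, cho6, cho12)

# post hoc paired t-tests with a holm-bonferroni step-down correction
pairs = {"pla vs 6%": (pla, cho6), "pla vs 12%": (pla, cho12), "6% vs 12%": (cho6, cho12)}
raw_p = {name: stats.ttest_rel(a, b).pvalue for name, (a, b) in pairs.items()}

ordered = sorted(raw_p.items(), key=lambda kv: kv[1])  # smallest raw p first
m = len(ordered)
adjusted, running_max = {}, 0.0
for i, (name, p) in enumerate(ordered):
    adj = min(1.0, (m - i) * p)            # holm adjustment for the i-th smallest p value
    running_max = max(running_max, adj)    # enforce monotonic non-decreasing adjusted p values
    adjusted[name] = running_max

print("friedman p =", p_friedman)
print("holm-adjusted pairwise p values:", adjusted)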
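returning to the solution preparation described earlier , the powder masses implied by the stated concentrations follow directly from the % w / v definition ( grams of cho per 100 ml of final solution ) ; the short python sketch below is only an illustrative calculation and assumes that the powder is essentially all cho and that the dissolved powder adds negligible volume .

# grams of cho powder required for a given target concentration ( % w / v )
def powder_mass_g(volume_ml: float, percent_wv: float) -> float:
    return volume_ml * percent_wv / 100.0

# one liter of final solution, mixed from 1 part cordial concentrate + 3 parts water
volume_ml = 1000.0
concentrate_ml = volume_ml * 1 / 4   # 250 ml of cordial concentrate
water_ml = volume_ml * 3 / 4         # 750 ml of plain water

print(powder_mass_g(volume_ml, 6.0))   # 60 g of powder for the 6% w / v solution
print(powder_mass_g(volume_ml, 12.0))  # 120 g of powder for the 12% w / v solution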
room temperature ( anova , p = 0.725 ) and barometric pressure ( anova , p = 0.282 ) were relatively stable and similar between trials . mean temperature was 19.5 ± 1.0 °c , 19.7 ± 1.2 °c , and 19.2 ± 1.6 °c for the pla , 6% cho - e and 12% cho - e trials , respectively . mean barometric pressure was 743 ± 6 mmhg , 752 ± 11 mmhg , and 755 ± 21 mmhg for the pla , 6% cho - e and 12% cho - e trials , respectively . there was a significant difference between trials in distance covered during the 90 minute performance run ( anova , p = 0.001 , see table 1a and table 1b ) . post hoc analyses revealed that there was a significant difference between the pla and 6% cho - e trials ( p = 0.035 ) and between the pla and 12% cho - e trials ( p = 0.003 ) . there was no difference between the 6% cho - e and 12% cho - e trials ( p = 0.196 ) . two - way repeated measures anova revealed a significant main effect of trial ( p = 0.001 ) for average speed over each 10-minute segment of the run ( figure 1 ) . there was also a trend for an effect of time ( p = 0.053 ) but no significant trial × time interaction ( p = 0.436 ) . due to the main effect of trial , the average speeds for the 10 min segments were compared between trials with 1-way anova and post hoc paired t - tests ( holm - bonferroni corrected ) where necessary ( see figure 1 ) . there were no significant differences between trials in the first 60 min ( 1-way anova , p = 0.506 , 0.213 , 0.823 , 0.359gh , 0.933 , 0.373 for the first 6 segments , respectively ) . for the 7th and 8th segments there were significant differences ( p = 0.001 and 0.010 , respectively ) but there was no difference for the final segment ( p = 0.141 ) . post hoc comparisons revealed that the average segment speed , in the 7th segment , was significantly lower in the pla trial compared to the 6% cho - e ( p = 0.014 ) and 12% cho - e ( p = 0.003 ) trials with no difference between the 6% and 12% cho - e trials ( p = 0.156 ) . in the 8th segment , average speed was lower in the pla trial compared to the 6% cho - e ( p = 0.038 ) and 12% cho - e ( p = 0.021 ) trials with no difference between the 6% and 12% cho - e trials ( p = 0.889 ) . for heart rate ( table 2 ) , 2-way repeated measures anova showed no significant main effect of trial ( p = 0.131 ) or trial × time interaction ( p = 0.097 ) . heart rate increased progressively during the trial , with each point significantly higher than the previous one ( all p < 0.05 ) , with one exception : the heart rate at 75 minutes was not significantly different from that at 60 minutes ( p = 0.573 ) . for blood glucose concentration ( table 2 ) the 2-way repeated measures anova showed no significant main effect of trial ( p = 0.246 ) or trial × time interaction ( p = 0.511 ) . there was a significant main effect of time ( p = 0.018 ) with higher concentrations post - exercise . for blood lactate concentration ( table 2 ) the 2-way repeated measures anova showed no significant main effect of trial ( p = 0.761 ) or trial × time interaction ( p = 0.938 ) . there was a significant main effect of time ( p = 0.018 ) with higher concentrations post - exercise . for rating of perceived exertion , 2-way repeated measures anova showed no significant main effect of trial ( p = 0.258 ) or trial × time interaction ( p = 0.657 ) . rpe increased progressively during the trial with each point significantly higher than the previous one ( all p < 0.05 , see table 2 ) . 
for feeling scale ratings ( figure 2 ) , a friedman test revealed a significant effect of time ( p < 0.001 ) in all trials ( pla , 6% cho - e and 12% cho - e ) . there were no between - trial differences at any of the time points although there was a trend at 90 min ( friedman , p = 0.084 ) . a 1-way anova on the discrepancy data ( which were normally distributed ) revealed a significant difference between trials ( p = 0.030 ) . post hoc analysis for the discrepancy data revealed no difference between the pla and 6% cho - e trials ( p = 0.173 ) , a significant difference between the pla and 12% cho - e trials ( p = 0.030 ) and no difference between the 6% and 12% cho - e trials ( p = 0.386 ) . when analyzed in 30-minute segments , the feeling data were normally distributed and 2-way repeated measures anova revealed a significant main effect of time ( p < 0.001gh ) . there was no significant main effect of trial ( p = 0.593 ) and a trend for a trial × time interaction ( p = 0.071 ) . post hoc analysis for the time effect showed that feeling ratings were significantly lower in the last 30-minute segment compared to the first 30-minute ( p = 0.002 ) and second 30-minute ( p = 0.003 ) segments . ratings were also significantly lower in the second compared to the first 30-minute segment ( p < 0.001 ) . for arousal ratings ( table 2 ) , 2-way repeated measures anova showed no significant main effect of trial ( p = 0.328 ) or time ( p = 0.125gh ) , and no trial × time interaction ( p = 0.377 ) . for oxygen consumption ( table 3 ) , 2-way repeated measures anova showed no significant effect of trial ( p = 0.247 ) , time ( p = 0.082 ) or trial × time interaction ( p = 0.244 ) . for carbon dioxide output ( table 3 ) , 2-way repeated measures anova showed no significant effect of trial ( p = 0.066 ) , time ( p = 0.476gh ) , or trial × time interaction ( p = 0.151gh ) . for respiratory exchange ratio ( table 3 ) , 2-way repeated measures anova showed no significant main effect of trial ( p = 0.886 ) or time ( p = 0.533 ) , and no trial × time interaction ( p = 0.477 ) . the main finding of the present study is that rinsing the mouth with a carbohydrate - electrolyte ( cho - e ) solution , compared to a cho - free placebo , resulted in a greater distance covered in a 90-minute running performance trial on a motorized treadmill at a self - selected pace . however , a higher cho concentration solution ( 12% w / v ) did not result in additional performance benefit compared to a standard cho concentration of 6% w / v . these findings agree with previous research showing enhanced endurance performance with cho and cho - e mouth rinses , but this is the first study to show that there is no dose - response effect above a concentration of ~6% . significantly more distance was covered in the 6% cho - e ( p = 0.035 ) and 12% cho - e ( p = 0.003 ) trials compared to the placebo trial . however , there was no significant difference ( p = 0.196 ) between the two cho - e solutions ( table 1 ) . it would appear that the performance differences were attributable to better speed maintenance in the final 20 min of the cho - e trials ( figure 1 ) , despite the fact that the last solution was provided 45 min before the end of the trial . this suggests that the beneficial effects of cho - e mouth rinsing during prolonged exercise may persist for at least 20-45 minutes , which may have practical relevance in situations in which free access to drinks / solutions is restricted by the nature of the sport or activity ( e.g. 
drinks stations in endurance races or breaks in match play ) . the speed profile in the present study suggests that the performance benefit is evident in the latter stages of the trial , at a time when fatigue becomes more apparent ( i.e. , when speed or power output tends to decrease ) , rather than as an increased speed in the earlier stages . this contrasts with the findings of carter et al . ( 4 ) , who observed differences in the first 3 quarters of a cycling time - trial ( although this could be related to differences between studies in trial duration and exercise mode ) . however , in the present study , no rinses were provided after 45 minutes , meaning that the effects either persist for more than 20 minutes post - rinse or are caused by other mechanistic pathway(s) . other potential mechanisms include some cho from the rinse remaining adhered to receptors in the mouth ( i.e. , not rinsed away ) after expectoration , or a cephalic phase hormonal response that exhibits a lag in effect duration and/or has some effect on performance ( or fatigue ) in the latter stages . however , these mechanisms cannot be confirmed or refuted by the present data , and further research is now needed to determine the mechanisms responsible for this apparent persistent effect and the duration for which the effects remain after the final ( or each ) rinse . interestingly , smeets et al . ( 18 ) observed changes in fmri signal that persisted for at least 30 minutes after the ingestion of glucose and energy - free , artificially sweetened beverages . these data lead the present authors to suggest that the effects observed in our study were due to central effects persisting for this time period ( i.e. , at least 30 minutes post - rinse ) . although the study of smeets et al . ( 18 ) was an ingestion study , the fact that these effects persisted for at least 30 minutes in the energy - free drink condition suggests that some taste receptors , albeit for sweetness in this instance , may be able to stimulate brain responses that persist for this time period . although it is believed that the performance effects are due to different receptors ( for cho , not sweetness ) , these data seem to support the notion that receptor - mediated mechanisms of action for cho rinsing ( and hence oral detection ) stimulate central effects that persist ( or remain beneficially above control conditions ) for at least 30 minutes , although this must be confirmed with similar studies on cho rinsing before this theory can be accepted . rollo et al . ( 13 ) used a 1 hour performance run , in which subjects were instructed to run as far as possible in the allowed time , and observed that a greater distance was covered with a cho - e compared to a pla mouth rinse . mean running speed was relatively stable throughout most of the trial with the exception of the first 5 minutes , when it was slower ( presumably whilst subjects were adjusting and settling in to their preferred pace ) , and the final 5 minutes , when mean speed increased significantly ( the familiar sprint finish that is commonly observed in such performance trials ( 15 ) ) . interestingly , mean running speed was significantly higher in the cho - e condition at two points in the middle of the run ( 5-minute average sections 25-30 min and 35-40 min ) as well as in the final 5 minutes , which combined to produce better overall performance in the cho - e trial . 
a similar profile was evident in the present study in that mean running speed was relatively stable over the duration of the 90-minute run , with segmental analysis showing no differences between trials until the final 30 minutes , where mean running speed was significantly higher than in the pla trial in both cho - e trials . there were no significant differences in the present study between the two cho - e solutions ( 6% and 12% cho ) . this suggests that cho - e rinsing in the current study had no impact on the early and middle stages of the trials , which differs from the findings of rollo et al . the present findings are , however , consistent with those of whitham and mckinney ( 21 ) , who reported no significant difference between cho and pla mouth rinses for a 45-minute running performance trial , as the benefits in our study only became evident after 60 minutes or more . the rpe results showed significant differences across time ( p < 0.001 ) , which differs from the suggestions of carter et al . ( 4 ) in that subjects did not select speeds that maintained a constant rpe . rather , average speed was relatively stable over the first two thirds of the trial whilst rpe progressively increased , culminating with near maximal ratings at the end ( coinciding with the familiar sprint finish as mentioned above ) . however , this appears to be more typical of running than of cycling protocols ( 13 ) . nevertheless , the fact that rpe was not different between trials shows that more work was performed for the same relative subjective exertion , in agreement with previous studies in cycling ( 4 ) and running ( 13 ) . a similar pattern was also evident for the feeling scale ratings , in that there was a significant decrease in ratings as the trial progressed but there were no differences between trials ( figure 2a ) , showing that higher speeds and more work were achieved in the cho - e trials for the same ( or a smaller ) relative decrease in feeling ratings . this is consistent with the suggestion of chambers et al . ( 6 ) that enhanced feeling ratings contributed to the enhanced performance with cho mouth rinsing . in the present study , feeling ratings , when analyzed in 30 minute segments , did show a trend ( p = 0.071 ) for a trial × time interaction . furthermore , analysis of the feeling rating discrepancy scores showed a smaller discrepancy with cho - e , although this only reached statistical significance ( compared to pla ) in the 12% cho - e mouth rinse trial ( p = 0.030 ) . it would seem , therefore , that the higher concentration mouth rinse may better limit the typical reduction in feeling ratings observed during prolonged exercise , but this does not appear to be of sufficient magnitude to further enhance performance when compared to the 6% cho - e rinse solution , although this requires further research . it is possible that subjects could have ingested some of the solutions during the rinse procedure . however , clear instructions were provided to expectorate all of the solution and this was practiced in the familiarization trial . the expectorated solution was visually inspected to ensure that a volume similar to that taken into the mouth was expelled ( the cups were clearly marked to aid this ) . whilst it is possible that this could be confounded by saliva output , this volume is negligible in the time allowed for rinsing ( saliva flow rate is usually less than 0.5 ml / min ) . 
as the sample size was quite small , it is conceivable that there was insufficient statistical power to detect differences between the 6% and 12% doses , which may be expected to be more subtle than the differences between pla and cho - containing solutions . however , post hoc power analysis on the present data revealed that a larger sample size would be unlikely to result in a finding of a significant difference between doses . nevertheless , we cannot exclude the possibility that a much greater sample size ( n = 30 or more ) would have resulted in a significant difference between cho doses ( 6% and 12% ) , but further research is required to determine whether this would actually be the case . another potential limitation is that subjects may have been able to distinguish between the solutions ; however , the solutions were taste matched and all drinks were flavored and strongly sweetened with artificial sweeteners . we believe that we were successful in blinding the subjects , as when questioned after each trial ( trials were at least 1 week apart ) subjects could not distinguish between the solutions . after all 3 trials had been completed , subjects were also asked to reflect on all trials again and to suggest which solution they had received in each . only one subject guessed all trials correctly and 4 guessed 1 trial correctly . however , all 7 participants covered a greater distance in both cho - e trials compared to the pla trial . hence , whilst we cannot rule out the possibility of a placebo effect in some subjects , because all subjects performed better with cho - e ( regardless of how they guessed ) we are confident that the observed effects are due to cho - sensing in the mouth , as suggested previously ( 4 , 6 , 13 ) . it should also be noted that metabolic data ( e.g. , gas exchange variables ) were only collected in the first 45 minutes , yet the differences observed in the performance tests did not occur until after 60 minutes . whilst we are confident that the observed effects of cho rinsing were indeed non - metabolic ( also supported by blood glucose and lactate measurements at the beginning and end of the trials ) , it would be beneficial to also measure gas exchange throughout the whole exercise bout in future studies . in the study by rollo et al . ( 13 ) , a customized automated treadmill was used to allow self - paced running , whereas the current study used a traditional motorized treadmill with manual controls located on the handrail . according to whitham and mckinney ( 21 ) , studies in which runners manually change their running speed ( e.g. , using a traditional motorized treadmill ) might not have the same degree of sensitivity to nutritional interventions as is the case when using an automated treadmill . this does not seem to have been the case in the present study , however , possibly due to the longer duration of the performance trial . therefore , it is possible that the use of longer duration running ( e.g. , 90 minutes ) provides sufficient sensitivity to detect differences in self - paced treadmill running , even with a manual treadmill . in summary , we have demonstrated that rinsing the mouth with a cho - e solution , compared to placebo , enhances the distance covered in a 90-minute running performance trial . this is the first study to show that a higher concentration solution ( 12% cho w / v ) does not offer any additional benefit compared to a standard concentration of 6% w / v ; thus , there is no dose - response effect with cho concentrations above ~6% . 
it is not known whether 6% is actually the optimal concentration for a cho - containing mouth rinse solution or whether similar effects can be achieved with lower concentrations . hence , the minimal concentration of cho that is required to elicit these ergogenic effects has not been determined and this requires further research . the cho - e mouth rinse seemed to have a positive effect on the subjects ' feelings in the later stages of the 90-minute running performance trial , and the speed of the athletes in the final 10 to 30 minutes was greater in the cho - e trials compared to the pla trials , despite the fact that the last rinse procedure occurred 45 minutes before the end of the trial . furthermore , there was no difference in rpe despite greater speeds being obtained in the cho - e trials . this supports previous work suggesting that cho mouth rinsing acts via a central action related to motivation , perceptions of effort and/or motor drive , but shows , for the first time , that this effect is also capable of producing ergogenic effects in more prolonged exercise . based on the current results , it would seem that it is not the quantity of cho in the mouth rinse that enhances performance but rather the presence of cho in the mouth . in addition , the benefits seem to last for at least 20 to 45 minutes after the final mouth rinse , which could have practical relevance in situations when access to drinks or rinsing is limited or not readily available at all times .
there is a substantial body of recent evidence showing ergogenic effects of carbohydrate ( cho ) mouth rinsing on endurance performance . however , there is a lack of research on the dose - effect , and the aim of this study was to investigate the effect of two different concentrations ( 6% and 12% weight / volume , w / v ) on 90-minute treadmill running performance . seven active males took part in one familiarization trial and three experimental trials ( 90-minute self - paced performance trials ) . solutions ( placebo , 6% or 12% cho - electrolyte solution , cho - e ) were rinsed in the mouth at the beginning , and at 15 , 30 and 45 minutes during the run . the total distance covered was greater during the cho - e trials ( 6% , 14.6 ± 1.7 km ; 12% , 14.9 ± 1.6 km ) compared to the placebo trial ( 13.9 ± 1.7 km , p < 0.05 ) . there was no significant difference between the 6% and 12% trials ( p > 0.05 ) . there were no between - trial differences ( p > 0.05 ) in ratings of perceived exertion ( rpe ) , feeling or arousal ratings , suggesting that the same subjective ratings were associated with higher speeds in the cho - e trials . enhanced performance in the cho - e trials was due to higher speeds in the last 30 minutes even though rinses were not provided during the final 45 minutes , suggesting the effects persist for at least 20 to 45 minutes after rinsing . in conclusion , mouth rinsing with a cho - e solution enhanced endurance running performance , but there does not appear to be a dose - response effect with the higher concentration ( 12% ) compared to a standard 6% solution .
INTRODUCTION METHODS Participants Protocol Statistical Analysis RESULTS DISCUSSION
PMC4053674
the traditional technique for indirect esthetic restorations consists of taking an impression of the tooth immediately after preparation , followed by the luting of a provisional restoration . after the indirect restoration fabrication , the provisional material is removed and an adhesive system is applied to the tooth , after which a resin luting agent is used for the adhesive luting procedure . some studies have shown that adhesive systems bond better to freshly prepared dentin than to dentin contaminated by provisionalization , which may lead to microleakage , hybridization failure , and sensitivity . to avoid these problems , the immediate dentin sealing ( ids ) technique was proposed . this technique consists of the application of an adhesive system immediately after tooth preparation and before taking the impression . another technique was developed in which a sealing film is produced on the dentinal surface using an adhesive system and a low - viscosity composite resin immediately after tooth preparation . this layer of low - viscosity composite resin is thought to isolate the underlying hybrid layer , consequently aiding in the preservation of the dentinal seal . ids techniques have the clinical advantages of covering the prepared dentin with a resinous agent immediately after cavity preparation , thereby sealing and protecting the dentin pulp complex as well as preventing or decreasing sensitivity and bacterial leakage during the provisional stage . thus , ids has been suggested when a significant area of dentin has been exposed during tooth preparation for indirect restorations , such as inlays , onlays , veneers , and crowns . most studies on ids techniques have evaluated the efficacy of the bond strength between the resin cement and dentin , showing good bonding of the resin used in ids as well as an increased resin bond strength in ids with an adhesive system and an additional low - viscosity microfilled resin . improved outcomes have also been reported at the tooth - restoration interface in the specimens coated with an adhesive system and a low - viscosity microfilled resin compared with non - coated specimens . due to the demand for tooth - colored restorations , ceramic biocompatibility and mechanical properties ( e.g. , high elastic modulus and hardness ) make ceramics attractive for use as biomechanical prostheses . thus , ceramics are used widely for cusp replacement restorations as well as for esthetics . despite their many advantages , ceramics are brittle materials ; this weakness can be attributed to the presence and propagation of microflaws on the surface of the material , making the ceramic susceptible to fracture during the luting procedure and under occlusal force . to increase retention and the fracture strength of the restored tooth , resin luting materials are commonly used to join ceramic crowns to the prepared hard tissue foundation . the cement layer may act as a cushion between the crown and dentin substrate , although the effect of this on the fracture strength of all - ceramic restorations is not well - established . molin et al . verified the influence of the film thickness of resin luting agents on the joint bond strength of the ceramic - dentin interface and showed that the bond strength values were significantly lower with a 20-μm film than with 50- , 100- or 200-μm films . scherrer et al . reported the effect of cement film thickness on the fracture resistance of glass ceramic plates loaded under compression using a spherical indenter .
they found that the fracture resistance of glass ceramic cemented with zinc phosphate cement was not dependent on film thickness . when resin cement was used , a gradual decrease in the fracture strength was observed with increasing cement thickness . prakki et al . evaluated the fracture resistance of ceramic plates ( 1- and 2-mm thick ) cemented to dentin as a function of the resin cement film thickness . these authors concluded that a higher cement film thickness resulted in increased fracture resistance only for 1-mm ceramic plates . the materials used in ids can create a film thickness covering a wide range of values , depending on the type of resin material and the topography of the tooth preparation . however , no information exists regarding such film thickness in a full crown preparation and its influence on the fracture load of all - ceramic crowns . therefore , the aim of this in vitro study was to evaluate the thickness of an adhesive , a low - viscosity microfilled resin and a resin cement under full crown preparations as well as the influence on the compressive fracture load of a reinforced all - ceramic crown luted to human teeth . this study investigated the following hypotheses : ( a ) there are differences in the thickness of the resin materials at different positions under crowns and ( b ) the thickness of the resin materials does not influence the compressive fracture load of the all - ceramic crown . sixty sound maxillary premolars extracted for therapeutic indications were cleaned and disinfected by immersion in 10% thymol for 24 h. the premolars were then stored in distilled water at 4°c for a maximum period of 6 months . these teeth had the following coronal dimensions : buccal - lingual distance of 9.0 - 9.6 mm ; mesiodistal distance of 7.0 - 7.4 mm ; and cervical - occlusal distance of 7.7 - 8.8 mm . a variation of ± 0.5 mm was associated with each measurement . the roots were mounted in acrylic resin approximately 2 mm below the cementoenamel junction of the tooth . tooth preparation was performed using a standardized preparation machine consisting of a high - speed hand piece ( kavo , joinville , sc , brazil ) coupled to a mobile base . the mobile base moved vertically and horizontally , in increments of 3 μm , with the aid of a micrometer ( mitutoyo , tokyo , japan ) . cusps were removed and the long axes of the teeth were positioned vertically on the preparation machine . a 3139 diamond wheel bur ( sorensen , cotia , sp , brazil ) was attached to the high - speed hand piece and all lateral convex surfaces were leveled . the dimensions of the preparations were as follows : 6° taper on each side , 1.2 ± 0.2 mm shoulder margin and a 5 mm core height with rounded line angles . the prepared teeth were then randomly divided into the following 3 groups ( n = 20 ) according to the materials used [ table 1 ] : group 1 : control , without the ids technique ; group 2 : ids with clearfil se bond ; and group 3 : ids with clearfil se bond and protect liner f. for group 2 , se primer was first applied to the cavity for 20 s and gently air dried . se bond was then applied , mildly air - dried and light cured for 10 s using a conventional halogen light curing unit . polymerization of the adhesive was followed by the application of an air - blocking barrier ( glycerine jelly ) and light curing for a further 10 s to polymerize the oxygen inhibition layer . the glycerine jelly was rinsed under running tap water . for group 3 , clearfil se bond was applied as described in group 2 but without the air - blocking barrier .
after application of the adhesive , protect liner f was placed on the adhesive surface using a brush - on technique and light cured for 20 s. the surface of the cured low - viscosity microfilled resin was wiped with a cotton pellet soaked in alcohol for 10 s to remove the unpolymerized layer on the surface . the materials used in the study are listed in table 1 . an impression of each prepared tooth was taken using a polyvinyl siloxane impression material ( express , 3m / espe , st . paul , mn , usa ) and a custom - made impression tray fabricated with acrylic resin . the impressions were then cast in type iv stone ( durone , dentsply , york , pa , usa ) to produce dies . after the impression , the preparations were temporized with self - curing acrylic resin crowns cemented with non - eugenol provisional cement ( tempbond ne , kerr , orange , ca , usa ) . tooth specimens were stored in distilled water at 37°c for 2 months . for 10 specimens from each group , ips empress 2 restorations were fabricated in accordance with the manufacturer 's instructions in a dental laboratory . a 0.8-mm lithium disilicate core was made and ips empress veneer ceramic ( dentin shade ) was applied to the core to create a crown thickness of 1.5 mm . after storage , provisional restorations were removed and preparations were cleaned using pumice slurry until all provisional cement was removed . the intaglio surface of each crown was etched with 10% hydrofluoric acid for 20 s , rinsed and dried . a layer of silane ( clearfil ceramic primer , kuraray medical inc . , tokyo , japan ) was applied , followed by gentle air drying for 5 s. the coated surfaces of the preparation ( except in group 1 ) were then acid etched with 37% phosphoric acid for 10 s and rinsed and dried to remove any debris . a mixture of ed primer a and b was applied for 30 s and gently air - dried for 5 s. the base and catalyst of panavia f resin cement were mixed according to the manufacturer 's instructions and the crowns were luted to the preparations . excess cement was removed with a microbrush and each surface ( buccal , lingual , mesial , distal , and occlusal ) was light cured for 40 s. the margins were finished with polishing discs and silicone tips ( sof - lex , 3m espe , st . paul , mn , usa ) . after 2 months of storage in distilled water at 37°c , each specimen was seated in a jig placed on the base of a universal testing machine . a compressive load was applied through a 3.2-mm diameter hardened steel sphere attached to the moving head of the testing machine ( model 1123 , instron corp . , canton , ma , usa ) .
the remnant ceramic on the prepared tooth was determined as type i ( 0% ) , type ii ( less than 50% ) or type iii ( more than 50% ) . for the other 10 specimens of each group , after storage in distilled water at 37°c for 2 months , each crown was sectioned buccolingually through the center of the crown with a diamond blade in an isomet saw ( buehler , lake bluff , il , usa ) , resulting in two portions . one portion of each specimen was placed under a measuring microscope ( profile projector v-16d , nikon , tokyo , japan ) , with a measuring sensitivity of 1 μm , under 100x magnification . the thickness of the adhesive system , low - viscosity microfilled resin and resin cement was measured at 10 positions as shown in figure 1 . the thickness of the resin materials was measured in a direction perpendicular to the dentin surface at each position . [ figure 1 : the thickness of the resin cement , adhesive and low - viscosity microfilled resin was measured at 10 different positions along the preparation . ] the final thickness of the resin materials ( adhesive , low - viscosity microfilled resin and resin cement ) at the different positions in each group was compared using the friedman and wilcoxon signed - rank non - parametric tests . the kruskal - wallis and mann - whitney u non - parametric tests were also used to compare the final thickness values between the groups in each position . fracture loads were analyzed using one - way analysis of variance , followed by tukey 's multiple comparison test . the correlation between fracture load and the thickness of the resin materials was analyzed by the pearson correlation test . the mean film thickness of the adhesive , low - viscosity microfilled resin and resin cement in each position for the different groups is shown in table 2 and in figures 2 - 4 . the thickness of the resin cement was higher in positions 5 and 6 than in other positions . the thickness of the adhesive was higher in positions 2 and 9 and lower in positions 1 and 10 . the thickness of the low - viscosity microfilled resin was higher in positions 5 and 6 and lower in positions 1 and 10 . [ table 2 : mean thickness ( μm ) and standard deviation of the resin cement , adhesive and low - viscosity microfilled resin of the experimental groups in the different positions . figures 2 - 4 : group 1 , mean thickness ( μm ) of the resin cement ; group 2 , mean thickness ( μm ) of the adhesive and resin cement ; group 3 , mean thickness ( μm ) of the adhesive , low - viscosity microfilled resin , and resin cement . ] the sum of the resin materials in each position is presented in table 3 . according to the friedman non - parametric test , statistically significant differences were noted between the positions ( p < 0.01 ) . in group 1 , a significantly higher resin cement thickness was obtained in positions 5 and 6 . in group 2 ( adhesive + resin cement ) and group 3 ( adhesive + low - viscosity microfilled resin + resin cement ) , significantly lower resin thickness values were obtained in positions 1 and 10 . intermediate values were found in positions 2 , 3 , 7 , and 8 . although no statistically significant difference was observed between these positions and positions 5 and 6 in groups 2 and 3 , a higher thickness of the resin material was observed at the occlusal surface ( positions 5 and 6 ) . [ table 3 : sum of thickness of resin material ( μm ) at different positions . ] according to the kruskal - wallis test , the thickness of the resin material differed significantly between the groups in all positions ( p < 0.01 ) .
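the statistical workflow described above ( friedman , wilcoxon , kruskal - wallis , mann - whitney , one - way anova with tukey 's test , and pearson correlation ) maps directly onto scipy and statsmodels calls . the python sketch below runs the same battery of tests on invented thickness and fracture - load values ; the numbers are placeholders chosen only to mirror the group means reported here , not the measured data .

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)

# invented film-thickness matrix (micrometres): 10 specimens x 10 positions
thickness = rng.normal(loc=150.0, scale=30.0, size=(10, 10))

# friedman test across the 10 positions (repeated measures within each specimen)
print(stats.friedmanchisquare(*[thickness[:, j] for j in range(10)]))
# wilcoxon signed-rank test between two positions (e.g. position 5 vs position 6)
print(stats.wilcoxon(thickness[:, 4], thickness[:, 5]))

# kruskal-wallis and mann-whitney u between the three groups at one position
g1 = rng.normal(130, 25, 10)
g2 = rng.normal(250, 40, 10)
g3 = rng.normal(360, 50, 10)
print(stats.kruskal(g1, g2, g3))
print(stats.mannwhitneyu(g1, g2))

# one-way anova + tukey hsd on fracture loads, then pearson correlation
loads = np.concatenate([rng.normal(1001, 120, 10),
                        rng.normal(1189, 130, 10),
                        rng.normal(1300, 140, 10)])
groups = ["group1"] * 10 + ["group2"] * 10 + ["group3"] * 10
print(stats.f_oneway(loads[:10], loads[10:20], loads[20:]))
print(pairwise_tukeyhsd(loads, groups, alpha=0.05))
total_thickness = np.concatenate([g1, g2, g3])
print(stats.pearsonr(total_thickness, loads))
```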
the highest values were obtained in group 3 , which were significantly different from those of group 2 . the lowest values were obtained in group 1 , which differed significantly from those of group 2 [ table 3 ] . the fracture load of group 3 ( 1300 n ) was statistically higher than that of group 1 ( 1001 n ) ( p < 0.01 ) . group 2 ( 1189 n ) was not significantly different from groups 1 and 3 [ table 4 ] . all fractures occurred through the veneer and the core materials . in group 1 , 3 specimens presented with type i failure and 7 specimens with type ii failure . in group 2 , 2 specimens presented with type i failure , 6 with type ii and 2 with type iii . in group 3 , 4 specimens presented with type ii failure and 6 specimens with type iii failure [ table 5 ] . [ table 4 : mean fracture load ( n ) of the experimental groups . table 5 : remnant ceramic ( % ) on the crown after fracture . ] pearson 's correlation coefficient indicated a moderate positive correlation between the final thickness of the resin material and the fracture load ( r = 0.549 ) [ figure 5 ] . the first hypothesis was accepted because the film thickness values of the 3 resin materials ( adhesive , low - viscosity microfilled resin , and resin cement ) were different and appeared to be influenced by their positions under the crown . in groups 2 and 3 , the clearfil se bond adhesive system was applied to seal the dentin immediately after tooth preparation . the film thickness of this material presented a wide range of values at different positions of the adhesive layer , which was in accordance with other studies . higher thickness was obtained in positions 2 and 9 ( concave parts of the preparation ) , which is consistent with the tendency of the adhesive to pool at the inner angles of the preparation . the minimum thickness in both groups was observed at the borders of the preparation ( positions 1 and 10 ) . the thinner film of the adhesive at the borders is fortunate because a thicker film would expose more adhesive to the degradation process in the oral cavity . in group 2 , the thickness of the adhesive could be measured in practically all positions , likely because the application of the glycerine gel allowed the polymerization of the outer layer . in some positions ( positions 1 , 4 , and 10 ) , the film thickness was less than 40 μm [ figure 3 ] , which corresponds to the inhibition layer associated with oxygen inhibition of the radicals that initiate the polymerization reaction . without the glycerine gel layer , the adhesive would not have polymerized and would have been removed during the cleaning of the adhesive interface , resulting in many areas of exposed dentin . in fact , in group 2 , the adhesive film could not be seen or measured at one of the borders of the preparation in 6 specimens . the film thickness was likely very thin and was removed during the cleaning procedure before luting with panavia f. when the adhesive film thickness was compared between groups 2 and 3 , a trend toward higher thickness was observed in group 3 , likely due to the application of the protect liner f over the adhesive , which protected the adhesive layer during the cleaning procedure . the cleaning of the adhesive interface was performed with pumice slurry to remove all remnants of the provisional cement . during this procedure , part of the adhesive layer was likely removed and the thickness of the adhesive reduced . the film thickness of the protect liner f ( group 3 ) presented a more uniform range of values at different positions compared with the adhesive layer .
this material has a higher percentage of filler compared with clearfil se bond as well as a decreased likelihood of pooling at the inner angles of the preparation . using a microbrush , the material was applied over the adhesive as thinly as possible from a visual perspective . at the borders , a clean microbrush was applied to remove a part of the material and to avoid a thicker layer , which could have considerably increased the amount of material exposed to the oral cavity . the minimum thickness was obtained in positions 1 and 10 ( marginal areas of the preparation ) , where it ranged from 19 μm to 67 μm . glycerine gel was not used , although the surface of the cured low - viscosity microfilled resin was wiped with a cotton pellet soaked in alcohol to remove the unpolymerized layer on the surface ; without this procedure , the unpolymerized surface layer would have remained . in addition , the surface of the low - viscosity microfilled resin was cleaned with pumice slurry to remove the cement remnants , whereby some micrometers of the material may have also been removed . the thickness of the resin cement can be influenced by many factors , including margin geometry and the presence of the die spacer . in relation to the margin geometry , a shoulder bevel facilitates better seating than does a shoulder , although the preparation for a lithium disilicate ceramic requires a shoulder or a pronounced chamfer . the omission of a die spacer affects the proper seating of the restoration , while an excessive layer can also enlarge the luting space . the best crown seating was found when 20 - 40 μm of cement space was provided . in the present study , 2 coats of die spacer were applied , which corresponds to a thickness of approximately 30 μm . however , the thickness of the resin cement was higher in positions 5 and 6 ( the occlusal portions of the preparation ) . this finding corroborates previous reports regarding marginal fit and cement distribution under all - ceramic restorations , which showed that the highest cement film thickness was usually located at the occlusal surface underneath the crown . ids with clearfil se bond and protect liner f ( group 3 ) had the highest film thickness of the resin material in all positions compared with the other groups [ table 3 ] . at the borders of the preparation ( positions 1 and 10 ) , the median thickness of the resin materials exposed to the oral environment corresponded to 120 μm , 85 μm , and 56 μm for groups 3 , 2 , and 1 , respectively . the marginal and internal fit of all - ceramic crowns is still very important for conventional and adhesively luted restorations . moreover , marginal fit is one of the most crucial criteria in the clinical decision involving the insertion of a restoration . most authors agree that discrepancies in the range of 100 μm seem to be clinically acceptable with regard to the longevity of restorations . for other authors , however , marginal discrepancies up to 160 μm might be tolerable . using the latter criteria , the results of the present study are within biologically acceptable standards in all 3 groups . for the luting procedure with panavia f , ed primer was applied on the clearfil se bond adhesive ( group 2 ) and on the low - viscosity microfilled resin ( group 3 ) . it is likely that this material contributed to the final thickness of the resin materials ; however , it was not possible to visualize the layer of ed primer . in relation to the luting procedure , it would arguably have been more appropriate to apply a hydrophobic adhesive that did not contain water .
nevertheless , according to the study of okuda et al . , ed primer did not negatively influence the bond strength when it was applied on protect liner f for luting with panavia f , while a higher bond strength was obtained in the study of udo et al . the reason for this finding is not clear , but it may be related to the polymerization of panavia f in the presence of ed primer . ed primer contains an aromatic sulfinate salt , which is believed to accelerate interfacial polymerization between the sealed dentin surface and the resin cement . the second study hypothesis was rejected because a significant upward trend was noted in the fracture load with increasing thickness of the resin material . this finding was not in accordance with other studies that observed a downward trend in the fracture load with increasing thickness of the resin cement . kim et al . observed that increased cement thickness can have an effect on reducing the flexural failure load . in that study , the load to failure of silicon bonded to glass with variations in the thickness of the bonding epoxy layer indicated a 50% reduction in strength when this layer was increased from 20 μm to 200 μm . burke and watts evaluated the resin cement thickness of 2-mm ceramic crowns that were submitted to a compressive fracture load . the authors concluded that the film thickness did not influence the overall results because the mean film thickness of the best performing material tested was similar to that in a group that did not perform as well . however , such studies evaluated the influence of the thickness of the resin cement on ceramic strength without taking into consideration the film thickness formed by ids techniques . therefore , it is difficult to make direct comparisons between studies because of the different specimen dimensions , types of ceramic , and resin cement systems that were used , especially because numerous factors can affect ceramic fracture resistance behavior . in the present study , the load was applied on the occlusal regions of the crowns , corresponding to positions 5 and 6 . it was at these positions that the highest final thickness of the resin material was recorded for all groups ( approximately 130 μm , 250 μm , and 360 μm for groups 1 , 2 , and 3 , respectively ) . because the resin cement thickness was similar for all groups in positions 5 and 6 ( approximately 150 μm ) , it is thought that the thickness of the clearfil se bond and protect liner f influenced the values of the compressive fracture load . during the curing process , the resin cement is transformed from a liquid to a solid state , thereby causing volume change and shrinkage of the material . the additional film thickness formed by the adhesive and the low - viscosity microfilled resin may have favored greater absorption of the stresses generated by the shrinkage of the resin cement , contributing to greater stress relief at the interfaces . according to rees and jacobsen , high shrinkage stress , even over a small area of an interface , is sufficient to induce crack formation . this becomes an area of stress concentration and is liable to induce further failures under occlusal loading . the integrity of the ceramic - resin cement interface is expected because of the high bond strength between the composite material and silanized ceramic .
however , crack formation may have been possible at the dentin - resin cement interface during shrinkage of the resin cement , especially in the group that did not receive ids ( group 1 ) , which may explain its lower fracture load . another factor that could have contributed to the higher fracture load in group 3 was the fact that ids with the adhesive system and low - viscosity microfilled resin significantly improved the bond strength of indirect restorations bonded to dentin using the resin cement . increasing the bond strength of the luting material helps to increase the fracture strength of the restorative material . a previous study concluded that ids with another adhesive system , clearfil tri - s bond , increased the bonding durability of the resin cement to dentin against occlusal loading , which may reduce the possibility of fracture of all - ceramic crowns in clinical situations . in all specimens , crown fractures occurred through the veneer and core ceramics . the classification of fractures used in the present study was based on the remnant ceramic on the prepared tooth because this was the main difference observed between the groups . more than 50% of the ceramic crown remained bonded to the preparation after the compressive fracture load test in most specimens in group 3 . this provides support for the idea that ids with clearfil se bond and protect liner f may promote a stronger bond between the ceramic crown and the dental preparation than ids with clearfil se bond alone ( group 2 ) or no coating ( group 1 ) , in which less than 50% of the ceramic crown remained bonded to the preparation . one advantage of the ids technique is that the thickness of the resin materials is accounted for before the restoration is fabricated because it is captured in the impression . even so , the thickness of the resin materials can be a concern for crowns . a part of the tooth preparation was observed to be occupied by clearfil se bond and protect liner f. as a consequence , a part of the space designated for the ceramic core was occupied by clearfil se bond and protect liner f in group 3 , especially at the concave part of the preparation ( positions 2 and 9 ) . nevertheless , this alteration in the geometry of the ceramic could be a concern for unreinforced ceramics such as ips empress leucite and feldspathic ceramics . ips empress 2 ceramic was used in the present study because reinforced ceramics tend to be used in clinical practice for full crowns on posterior teeth . recently , this ceramic has been replaced by ips e. max ceramic , which has a similar composition to ips empress 2 . for this reason , the results of the present study may have been similar if ips e. max ceramic had been used . the ids technique should not be recommended with other reinforced dental ceramic systems such as glass - infiltrated aluminum oxide , high - purity alumina , and zirconia ceramics . the main reason is that these reinforced ceramics resist the formation of microretentive surfaces after hydrofluoric acid etching and airborne particle abrasion , which are important surface treatments for adhesive luting . therefore , an interesting future study could evaluate the influence of ids with feldspathic ceramic crowns , which have lower fracture resistance .
despite the limitations of this in vitro study , the following conclusions can be drawn : the film thickness of clearfil se bond was higher at the concave and occlusal portions of the crown preparation and thinner at the borders ; protect liner f had a more uniform range of values at the different positions , except at the borders of the preparations , where the film thickness was thinner ; the film thickness of panavia f resin cement was higher at the occlusal portion of the crown preparation ; and the film thickness formed by clearfil se bond and protect liner f increased the fracture load of ips empress 2 ceramic crowns .
objectives : the objective of this study is to evaluate , in vitro , the thickness of immediate dentin sealing ( ids ) materials on full crown preparations and its effect on the fracture load of a reinforced all - ceramic crown . materials and methods : sixty premolars received full crown preparation and were divided into the following groups according to the ids technique : g1 - control ; g2 - clearfil se bond ; and g3 - clearfil se bond and protect liner f. after the impressions were taken , the preparations were temporized with acrylic resin crowns . ips empress 2 restorations were fabricated and later cemented on the preparations with panavia f. 10 specimens from each group were submitted to fracture load testing . the other 10 specimens were sectioned buccolingually before the thicknesses of panavia f , clearfil se bond and protect liner f were measured in 10 different positions using a microscope . results : according to analysis of variance and tukey 's test , the fracture load of group 3 ( 1300 n ) was significantly higher than that of group 1 ( 1001 n ) ( p < 0.01 ) . group 2 ( 1189 n ) was not significantly different from groups 1 and 3 . the higher thickness of clearfil se bond was obtained in the concave part of the preparation . protect liner f presented a more uniform range of values at different positions . the thickness of panavia f was higher in the occlusal portion of the preparation . conclusions : the film thickness formed by the ids materials is influenced by the position under the crown , suggesting its potential to increase the fracture load of the ips empress 2 ceramic crowns .
INTRODUCTION MATERIALS AND METHODS RESULTS DISCUSSION CONCLUSIONS
PMC3147118
the human intestinal microbiota is a complex community composed of at least several hundred different species of bacteria , with approximately 10^10 cells per gram of feces [ 1 - 4 ] . the intestinal microbiota plays a critical role in human health , including colonization resistance , nutrition , metabolism of nondigestible dietary components and xenobiotics , proliferation and differentiation of intestinal mucosal epithelial cells , and homeostasis of the immune system [ 5 - 7 ] . direct analysis of the intestinal microbiota in the human colon is inherently difficult for routine experiments . therefore , most studies are conducted with human fecal specimens , animal models , in vitro batch culture , and continuous culture systems that mimic the human gastrointestinal tract . recently , however , a study using high - throughput anaerobic culture techniques reported that 56% of the human fecal microbiota belongs to readily cultured species , whereas over 40% of the gut microbiota remains uncultured to date [ 3 , 8 , 9 ] . one of the reasons for this limitation stems from the difficulty of providing all of the appropriate nutrients and conditions for growth of the complex intestinal microbiota community . therefore , research that provides more information on in vitro culture conditions , such as the study by goodman et al . , would enhance the evaluation of perturbations of the intestinal microbiota by factors that might adversely affect human health . molecular techniques that target the 16s rrna gene and other genetic markers have been used to analyze microbial community ecology in the human intestine . denaturing gradient gel electrophoresis ( dgge ) has been used to monitor differences and changes in the overall microbial community from fecal samples [ 10 - 12 ] , and quantitative real - time pcr has provided numerical abundance data for fecal microbiota [ 13 , 14 ] . recently , high - throughput techniques such as pyrosequencing and the human intestinal tract chip ( hitchip ) microarray have been applied to obtain deep phylogenetic analysis of the intestinal microbiota [ 12 , 15 - 17 ] . in the present study , dgge , real - time pcr , and pyrosequencing were used to profile the abundance and diversity of the bacterial community from human fecal inoculum grown under different culture conditions . the aim of this study was to compare various batch culture conditions for activating and maintaining a complex fecal microbiota community to mimic growth conditions of the gastrointestinal tract . the culture conditions developed in this investigation can be applied in future research to determine the impact of antimicrobial agents , food contaminants , xenobiotics , probiotics , and dietary supplements on the human intestinal microbiota . fecal samples were coded as individual a , b , c , and d. fecal samples were cultured in brain heart infusion ( bhi ) broth , modified high - concentration carbohydrate medium ( hcm ) , or low - concentration carbohydrate medium ( lcm ) [ 18 , 19 ] . the composition of the high- and low - carbohydrate media is described in table 1 . feces were diluted with anaerobic maximum recovery diluent ( mrd ; labm idg , bury , uk ) buffer to a final concentration of 25% ( w / v ) . to compare the microbiota grown in the three media , fecal suspensions were diluted to give an inoculum concentration of 1% ( w / v ) , inoculated into each medium ( 10 ml final volume ) , and cultured anaerobically at 37°c .
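as a worked example of the dilution step described above , the short python snippet below computes how much of the 25% ( w / v ) fecal suspension and how much medium give a 1% ( w / v ) inoculum in a 10 ml culture ; the variable names are ours and the calculation simply restates the arithmetic implied by the text .

```python
# dilution arithmetic for the batch cultures: 25% (w/v) slurry -> 1% (w/v) inoculum
slurry_conc_g_per_ml = 0.25      # 25% (w/v) fecal suspension
target_conc_g_per_ml = 0.01      # desired 1% (w/v) in the culture
final_volume_ml = 10.0           # final culture volume

feces_needed_g = target_conc_g_per_ml * final_volume_ml        # 0.1 g of feces
slurry_volume_ml = feces_needed_g / slurry_conc_g_per_ml       # 0.4 ml of slurry
medium_volume_ml = final_volume_ml - slurry_volume_ml          # 9.6 ml of medium

print(f"add {slurry_volume_ml:.1f} ml of 25% slurry to {medium_volume_ml:.1f} ml of medium")
```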
growth was analyzed by optical density ( od ) and by flow cytometry on an accuri c6 fcm ( accuri cytometers , ann arbor , mich , usa ) , following the manufacturer 's instructions , with samples collected at each time point . to determine the optimal incubation time and check the metabolic activity of the microbiota , a 1% fecal suspension was cultured with gentian violet ; decolorization of gentian violet indicates the metabolic activity of the fecal microbiota [ 20 , 21 ] . fecal supernatants were assessed as medium supplements to determine whether unknown growth factors affect the in vitro growth of the intestinal microbiota . individual fecal supernatants were prepared from the 25% diluted fecal specimens in anaerobic mrd buffer after centrifugation at 11,000 rpm for 30 minutes . autoclaved fecal supernatant was added to each medium at a final concentration of 1% ( v / v ) . the optimal fecal inoculum concentration was also determined by inoculating 0.1 to 5% of inoculum into low - carbohydrate medium . the growth of each inoculum was analyzed by optical density at 600 nm and by flow cytometry ( fcm ) . genomic dna was extracted from 1 ml of sample at each time point using the dna elution accessory kit of the rna power soil total rna isolation kit ( mobio laboratories , carlsbad , calif , usa ) following the manufacturer 's protocol . preliminary experiments showed that this kit had the best extraction efficiency ( it produced the highest concentration of dna from the same amount of fecal material ) among several kits ( data not shown ) . to conduct dgge analysis , 16s rrna gene fragments of the v3 region were amplified using primers gc - clamp-340f ( 5'-tcc tac ggg agg cag cag-3' ) and 518r ( 5'-att acc gcg gct gct gg-3' ) as described [ 22 , 23 ] . the pcr reaction was performed using a mastercycler gradient instrument ( eppendorf , hauppauge , ny , usa ) in a final volume of 50 μl with 10x taq buffer , dntp mixture ( takara , shiga , japan ) , 10 μm of each primer ( mwg - biotech , ebersberg , germany ) , 2 u of taq polymerase ( ex taq ; takara ) , and 1 μl of template . after initial denaturation at 94°c for 5 minutes , amplification consisted of 30 cycles of denaturation ( 30 seconds , 94°c ) , primer annealing ( 30 seconds , 55°c ) , and primer extension ( 30 seconds , 72°c ) , with a final extension step of 7 minutes at 72°c . the pcr product was checked by 2% agarose gel electrophoresis and visualized using a gel doc system ( biorad , hercules , calif , usa ) . pcr products were concentrated and purified with the qiaquick pcr purification kit ( qiagen inc . , valencia , calif , usa ) . dgge was conducted using a d - code system ( biorad ) with 8% ( w / v ) polyacrylamide gels ( 1 mm thick ) containing a 40% - 65% denaturant gradient , in 1x tae buffer . equal amounts of the purified pcr products were loaded on the gel , and electrophoresis was performed at 25 v for 15 minutes and then at 70 v for 16 hours and 30 minutes at 60°c . the gel was stained in 250 ml of running buffer containing ethidium bromide ( 50 μg / ml ) for 15 minutes and then rinsed in 250 ml of running buffer for 20 minutes . the sequences were identified using blast searches against the genbank database and the database of type strains at the eztaxon server . the dgge gel profile was analyzed with the bionumerics program , version 6.0 ( applied maths , st .- martens - latem , belgium ) . cluster analysis of the band patterns was performed using the unweighted pair group method with arithmetic averages ( upgma ) , and the similarity between lanes was calculated based on band position .
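the upgma cluster analysis of band patterns described above can be approximated with scipy when band patterns are encoded as presence / absence vectors and compared with the dice coefficient ( the similarity measure used for the dendrograms below ) . the sketch uses an invented band matrix for four lanes ; the lane labels and band calls are hypothetical .

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.cluster.hierarchy import linkage, dendrogram

# hypothetical presence/absence band matrix: rows = lanes, columns = band positions
bands = np.array([
    [1, 1, 0, 1, 1, 0, 1, 0, 1, 1],   # fecal inoculum (zero time)
    [1, 1, 0, 1, 0, 0, 1, 0, 1, 1],   # lcm, 18 hours
    [1, 0, 0, 1, 0, 1, 1, 0, 0, 1],   # bhi, 18 hours
    [1, 0, 0, 0, 0, 1, 0, 0, 0, 1],   # hcm, 18 hours
], dtype=bool)
labels = ["inoculum", "lcm_18h", "bhi_18h", "hcm_18h"]

# dice dissimilarity between lanes; similarity (%) = (1 - dissimilarity) * 100
distances = pdist(bands, metric="dice")
print(np.round((1.0 - squareform(distances)) * 100.0, 1))

# upgma corresponds to average-linkage clustering on the dice distances
tree = linkage(distances, method="average")
print(dendrogram(tree, labels=labels, no_plot=True)["ivl"])  # leaf order of the dendrogram
```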
the dice coefficient was used to create dendrograms of the dgge profiles obtained from the different samples . real - time pcr was performed in a final volume of 25 μl containing 12.5 μl of 2x iq sybr green supermix ( biorad ) , 10 μm of each primer ( mwg - biotech ) , and 1 μl of template dna ( tenfold dilution series of standard and sample dna ) or distilled water ( negative control ) . bact349f ( 5'-agg cag cag tdr gga at-3' ) and bact518r ( 5'-att acc gcg gct gct gg-3' ) were used to quantify total bacteria , btr275f ( 5'-cga tgg ata ggg gtt ctg-3' ) and btr555r ( 5'-ccc ttt aaa ccc aat raw tcc gg-3' ) were used for bacteroidetes , and firm350f ( 5'-ggc agc agt rgg gaa tct tc-3' ) and firm814r ( 5'-aca cyt agy act cat cgt tt-3' ) were used for firmicutes [ 26 - 28 ] . the quantifications were performed in three independent real - time pcr runs using the cfx96 real - time pcr detection system ( biorad ) with the associated cfx manager interface software ( version 1.0.1035.131 ; biorad ) . the amplifications were carried out with the following steps : 50°c for 2 minutes , 95°c for 10 minutes , and 40 cycles of 95°c for 10 seconds and 60°c for 30 seconds . melting curve data were obtained from 60°c to 95°c at a rate of 0.5°c per second with continuous measurement of the sybr green i signal intensities . dnas from cultures of escherichia coli atcc25922 , bacteroides eggerthii atcc27754 , and clostridium butyricum atcc19398 were used to construct standard curves for quantification by plotting the ct values obtained from amplification of the dilution series . for pyrosequencing , amplification of genomic dna was performed using barcoded primers targeting the v1 to v3 region of the bacterial 16s rrna gene . the amplification , sequencing , and basic analysis were performed according to the methods described by chun et al . and completed by chunlab inc . ( seoul , korea ) using a 454 gs flx titanium sequencing system ( roche , branford , conn , usa ) . briefly , the sequencing reads of each sample were separated by their unique barcodes and filtered to remove reads that were shorter than 300 bp , had an average quality score of less than 25 , or contained 2 or more ambiguous nucleotides ( ns ) ; chimeric products were then removed before further analyses [ 29 , 30 ] . the extended eztaxon database ( http://www.eztaxon-e.org/ ) , which contains representative sequences of both cultured and uncultured bacteria with hierarchical taxonomic classification , was used for taxonomic assignments . the pyrosequencing reads were compared with sequences in the eztaxon - e database using a blastn search , similarity was obtained by pairwise comparison , and the sequences were then assigned a taxonomic classification using the criteria of 97% sequence identity for species , 94% identity for genus , 90% identity for family , 85% identity for order , 80% identity for class , and 75% identity for phylum . if the sequence identity was below the cutoff value , the sequence was assigned to the unclassified group at that phylogenetic level . the diversity indices and statistical analyses were performed using the mothur program with a cutoff value of 97% similarity for assigning phylotypes . bacterial sequences from excised dgge bands were submitted to the genbank database under accession numbers hq645054 to hq645071 . the sequence reads from pyrosequencing are available in the embl sra database under the study accession number erp000433 ( http://www.ebi.ac.uk/ena/data/view/erp000433 ) .
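the rank - assignment rule quoted above ( 97% identity for species down to 75% for phylum ) is easy to express as a small function . the python sketch below is a minimal illustration of that threshold scheme and is not the eztaxon - e pipeline itself ; the identity values fed to it are invented .

```python
def assign_rank(identity_percent: float) -> str:
    """return the deepest rank supported by a blastn identity value, using the
    cutoffs described above: 97% species, 94% genus, 90% family, 85% order,
    80% class, 75% phylum; anything lower is left unclassified."""
    cutoffs = [(97.0, "species"), (94.0, "genus"), (90.0, "family"),
               (85.0, "order"), (80.0, "class"), (75.0, "phylum")]
    for threshold, rank in cutoffs:
        if identity_percent >= threshold:
            return rank
    return "unclassified"

# invented identities for a few hypothetical reads
for identity in (98.5, 95.2, 88.0, 72.3):
    print(identity, "->", assign_rank(identity))
```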
brain heart infusion ( bhi ) , low - concentration carbohydrate ( lcm ) , and high - concentration carbohydrate ( hcm ) media were used for intestinal microbiota growth . previous studies used hcm in human intestinal continuous culture [ 18 , 19 ] . however , the digestible carbohydrate concentrations in the large intestine are lower than the carbohydrate concentration in the high - carbohydrate medium . therefore , we wanted to compare hcm , lcm , and bhi media under the same inoculum and growth conditions . diluted feces ( 1% ) were inoculated into the different media , and intestinal microbiota growth was analyzed by spectrophotometer and quantitative real - time pcr ( supplementary figure 1 available at doi : 10.1155/2011/838040 ) . the growth of the intestinal microbiota showed maximum od at 18 hours in lcm and hcm , while the maximum in bhi medium occurred earlier in the incubation period . the 16s rrna genes of the cultured bacteria in each medium increased over the incubation period . the cell number of the inoculum was 4.8 10 cells / ml ( mean value of the cell numbers in the inoculum of the three media ) . the highest cell number was detected at 18 hours in lcm ( 1.85 10 cells / ml ) and hcm ( 1.19 10 cells / ml ) , whereas bhi reached its maximum cell number ( 1.64 10 cells / ml ) after 18 hours . to determine the metabolic activity of the cultured bacteria , the fecal microbiota cultures were dosed with gentian violet , and the activity was monitored by measuring color disappearance over time . the microbiota completely decolorized gentian violet after 18 hours of incubation ( supplementary figure 2 ) . eighteen hours was therefore chosen as the incubation time , because the residence time of readily digestible compounds in the intestinal tract is generally within a day . the growth of the intestinal microbiota in the different media showed a similar maximum od , and the total bacterial 16s rrna gene copies increased over time ( supplementary figure 1 ) . however , this result did not correlate with the cell numbers determined by flow cytometry . this difference was most likely caused by differences in rrna gene copy numbers among species . dgge fingerprinting was used to evaluate the ability of each medium to maintain the initial fecal microbiota . the dgge banding patterns derived from the initial cultures were compared to those from the 18-hour cultures , and the similarity between the inoculum and the cultured sample was used as the measure of microbiota stability . overall , the number of bands and the dominant bands were different in each medium ( figure 1(a ) ) . the band numbers from the in vitro cultures were fewer than those of the fecal inoculum and formed profiles different from the inoculum on the dgge gel . the sequences of the bands were assigned to the firmicutes , bacteroidetes , and actinobacteria phyla . firmicutes and bacteroidetes were the major phyla in both the fecal inoculum and the in vitro culture . bands affiliated with bacteroidetes ( bands 1 , 6 , and 11 ) were more dominant in the in vitro culture after 18 hours than at zero time . the dominant firmicutes bands at zero time ( bands 2 , 4 , 5 , 10 , and 13 ) were less dominant in the cultured samples . this result is supported by previous studies reporting that the ratio of bacteroidetes to firmicutes differed between in vitro intestinal models and the inoculum [ 12 , 34 ] . band number 5 contained pairs of dna fragments , because similar sequences migrate to similar denaturant gradients and the length of the amplified fragment used for dgge ( 150 to 180 bp ) was insufficient to distinguish similar sequences completely .
bhi and lcm had relatively similar numbers of bands ( 20 - 22 bands ) , and hcm had fewer bands ( 14 bands ) . the lower number of bands in hcm may be due to the high carbohydrate content of the medium . high carbohydrate promotes the proliferation of bacteroidetes , which possess a larger glycobiome than firmicutes [ 34 , 35 ] . cluster analysis showed that the microbiota of lcm was similar to the inoculum population ( figure 1(a ) ) . although the number of bands in bhi medium was similar to that of lcm , their profiles and dominant bands were more different from the inoculum than those in lcm . moreover , the profile of minor bands in lcm was similar to the profile of the inoculum , and the minor bands were more abundant than in bhi medium . profiles of lcm after 18 hours displayed the highest similarity ( 78.94% ; mean value of triplicate samples ) with the profiles of the original inoculum at zero time . therefore , lcm was used as the basal medium for the intestinal microbiota growth culture conditions . fecal supernatants were also evaluated as a growth supplement , as they are unique to each individual because of interindividual differences in intestinal microbiota populations , dietary habits , and metabolism . three different culture conditions ( 1% fecal inoculum , 1% fecal inoculum with 1% fecal supernatant , or 2% fecal inoculum ) were compared using dgge profiles ( figure 1(b ) ) and revealed relatively similar band patterns . however , in the population cluster analysis , the inoculum with 1% fecal supernatant added to lcm was more similar to the original inoculum than the cultures without fecal supernatant ( 80.37% similarity ) . the inoculum concentration of feces is a significant factor for in vitro culture conditions , because the number and diversity of bacteria can affect growth . therefore , we determined , using dgge and real - time pcr , the optimal inoculum concentration for use in this type of in vitro human fecal culture experiment . different fecal inoculum suspensions ( 0.1% , 0.5% , 1% , 2% , 3% , 4% , and 5% ) were used to compare the bacterial communities at each concentration . we did not test concentrations over 5% because of the difficulty of handling the dense and viscous fecal samples . the dgge profiles of the different inoculum concentrations were relatively similar to each other and , as expected , interindividual variation of the microbial community was found ( figure 2 ) . although profiles were relatively similar among the different inoculum concentrations , small variations in band intensity were observed . cluster analysis of the profiles showed that the 1% , 2% , or 3% inoculum cultures were the most similar to the original inoculum in individuals b and c ( figure 2 ) . we also investigated the change in bacterial communities at each incubation time ( supplementary figure 3 ) . the communities of the original fecal bacteria were similar to those of the inoculum , and the profiles of the bacterial communities were stable after 6 hours of incubation at every inoculum concentration . in addition , the batch culture was reproducible , as determined by cluster analysis of triplicate cultures in the dgge analysis ( data not shown ) . real - time pcr - based quantification was used to enumerate total bacteria , bacteroidetes and firmicutes in cultures with 1% - 5% inoculum concentrations of the individual - coded a , b , and c samples at 0 and 18 hours of incubation ( table 3 ) . different numbers of bacteria , bacteroidetes and firmicutes were found in each sample at the various inoculum concentrations , and their growth differed in the same medium .
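the standard - curve quantification used for the real - time pcr data can be sketched in a few lines : fit a line to ct versus log10 copy number for the dilution series and invert it for the unknowns . the ct values below are invented placeholders , not the values measured from the atcc standards .

```python
import numpy as np

# invented ten-fold dilution series of a standard with known 16s rrna gene copies
log10_copies = np.array([7.0, 6.0, 5.0, 4.0, 3.0])    # log10 copies per reaction
ct_values = np.array([14.1, 17.5, 20.9, 24.3, 27.8])  # hypothetical ct measurements

# linear fit of log10(copies) against ct, then invert it for unknown samples
slope, intercept = np.polyfit(ct_values, log10_copies, 1)
efficiency = 10.0 ** (-slope) - 1.0                    # amplification efficiency estimate

def copies_from_ct(sample_ct: float) -> float:
    return 10.0 ** (slope * sample_ct + intercept)

print(f"estimated efficiency: {efficiency:.2f}")
print(f"sample with ct 22.0 ~ {copies_from_ct(22.0):.2e} copies per reaction")
```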
the 16s rrna gene copy number of total bacteria increased to 10 copies / ml for all cultured samples , and bacteroidetes and firmicutes reached 10^10 copies / ml . the 16s rrna gene copies of total bacteria and bacteroidetes in the cultured fecal materials from individual c were higher than those from individuals a and b. firmicutes were more abundant in the 18-hour cultured samples of individual b than in those of individuals a and c. these results indicated that the different community compositions affected the growth of each phylum in the fecal microbiota . the increase in bacteroidetes ( 2.29-fold ; mean value of the increase in copy number ) was greater than that of firmicutes ( 1.90-fold ) . we compared the 16s rrna gene copies of total bacteria in the 0.1% - 3% inoculum cultures of individual d ( supplementary figure 4 ) over time . the increase in total bacterial 16s rrna gene copies was higher in the low concentrations of inoculum ( 0.1% and 0.5% ) than in the high concentrations of inoculum ( 1% - 3% ) . a batch culture with higher cell numbers has limited nutrients , and there would be more competition for obtaining nutrients . however , a high concentration of inoculum ( 1% - 3% ) added to the batch culture could provide more fecal material to facilitate and personalize the cultivation of the indigenous microbiota , as fecal supernatant does [ 36 - 38 ] . therefore , a 3% inoculum concentration of fecal material was chosen for the in vitro culture conditions , since this level would maintain a high cell number of intestinal microbiota and allow growth of a variety of the indigenous microbiota . a comparison of the intestinal microbiota of each individual fecal sample before and after culturing was performed by pyrosequencing . a total of 45,674 reads were obtained from pyrosequencing , and 5,843 sequences were removed by the filtering process ( chimera check , length cutoff , ambiguous base calls and average quality check ) . therefore , a total of 39,831 sequences were analyzed ( ranging from 3,402 to 8,898 per sample ) after the filtering process ( table 4 ) . the average length of the sequences was 385.39 bp , and the observed number of phylotypes ranged from 590 to 1,287 with 92% to 97% good 's coverage . the richness of the samples was investigated by rarefaction curves ( supplementary figure 5 ) . the observed numbers of phylotypes and the diversity indices of the samples from the three individuals differed between zero time and 18 hours of incubation . samples from individual b had the most similar observed number of phylotypes and shannon indices between the zero time culture and the 18-hour culture among the three individuals . the dominant phyla ( firmicutes , bacteroidetes , actinobacteria , proteobacteria , and verrucomicrobia ) from the fecal samples of each person were maintained in the improved culture conditions of this study ( figure 3 ) . the abundance of firmicutes decreased after 18 hours ( on average from 72.39% to 44.95% ) , while bacteroidetes increased from 17.14% to 39.11% in the cultured samples . although the proportion of each phylum changed in the in vitro cultures , the dominant phyla were maintained after 18 hours of incubation . these trends are similar to those seen in a previous gut model system analyzed using a phylogenetic microarray . they reported that the abundance of bacteroidetes increased from 52.49% to 75.50% ( ascending model ) , 80.59% ( transverse model ) and 75.60% ( descending model ) , while firmicutes decreased from 44.57% to 16.81% ( ascending ) , 10.56% ( transverse ) , and 13.23% ( descending ) .
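the diversity summaries reported above ( shannon index and good 's coverage ) can be computed directly from a phylotype count vector . the python sketch below shows the standard formulas on an invented count vector ; it is not the mothur implementation , only an illustration of the same quantities .

```python
import numpy as np

def shannon_index(counts):
    """shannon diversity h' = -sum(p_i * ln p_i) over the observed phylotypes."""
    counts = np.asarray(counts, dtype=float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def goods_coverage(counts):
    """good's coverage = 1 - (number of singleton phylotypes / total reads)."""
    counts = np.asarray(counts)
    return 1.0 - (counts == 1).sum() / counts.sum()

# invented phylotype count vector for one sample (not the study's data)
counts = np.array([500, 320, 150, 90, 40, 12, 5, 3, 1, 1, 1])
print(f"shannon index: {shannon_index(counts):.2f}")
print(f"good's coverage: {goods_coverage(counts):.3f}")
```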
the abundances of actinobacteria ( average 4.91% ) and verrucomicrobia ( 0.21% ) in the present study were higher than those observed in the previous model system . at the genus level , a total of 210 genera ( read number ≥ 0.01% of total analyzed reads ) were retrieved from the zero time fecal cultures of individuals a , b , and c , and 173 genera were obtained from the 18-hour cultured samples ( figure 3 ) . the dominant genera were bacteroides , subdoligranulum , faecalibacterium , parabacteroides , bifidobacterium , ruminococcus , eubacterium , blautia , roseburia , alistipes , clostridium , escherichia , and dorea ( read number ≥ 1% of total analyzed reads ) . the community profiles of the microbiota from individuals a and c were more similar to each other than to that of individual b , both at zero time and after the 18-hour incubation . therefore , the bacterial community of each in vitro cultured sample reflected the interindividual uniqueness of the fecal microbiota . we tested a variety of conditions for human intestinal microbiota growth in short - term in vitro batch cultures . the combination of dgge , real - time pcr , and pyrosequencing was sufficient to compare the communities of the intestinal microbiota in the different cultures . of the combinations tested , low - concentration carbohydrate medium ( lcm ) supplemented with 1% fecal supernatant and inoculated with a fecal suspension to a final concentration of 3% performed best in maintaining a metabolically active , diverse population of bacteria over the 18-hour incubation . the culture conditions developed in this investigation should be suitable for use in future studies on the impact of xenobiotics on the human intestinal microbiota .
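the genus - level filtering described above ( retaining genera at or above 0.01% of the analyzed reads , and calling those at or above 1% dominant ) is a one - step operation on a relative - abundance table . the pandas sketch below uses a small invented read - count table to show the idea .

```python
import pandas as pd

# invented genus-level read counts pooled across samples (not the study's data)
reads = pd.Series({"bacteroides": 9200, "faecalibacterium": 4100,
                   "subdoligranulum": 3800, "bifidobacterium": 1500,
                   "rare_genus_x": 3, "rare_genus_y": 1})

relative_abundance = reads / reads.sum()

retained = relative_abundance[relative_abundance >= 0.0001]   # >= 0.01% cutoff
dominant = relative_abundance[relative_abundance >= 0.01]     # >= 1% cutoff

print(len(retained), "genera retained at the 0.01% cutoff")
print("dominant genera:", ", ".join(dominant.index))
```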
a stable intestinal microbiota is important in maintaining human physiology and health . although there have been a number of studies using in vitro and in vivo approaches to determine the impact of diet and xenobiotics on intestinal microbiota , there is no consensus for the best in vitro culture conditions for growth of the human gastrointestinal microbiota . to investigate the dynamics and activities of intestinal microbiota , it is important for the culture conditions to support the growth of a wide range of intestinal bacteria and maintain a complex microbial community representative of the human gastrointestinal tract . here , we compared the bacterial community in three culture media : brain heart infusion broth and high- and low - carbohydrate medium with different growth supplements . the bacterial community was analyzed using denaturing gradient gel electrophoresis ( dgge ) , pyrosequencing and real - time pcr . based on the molecular analysis , this study indicated that the 3% fecal inoculum in low - concentration carbohydrate medium with 1% autoclaved fecal supernatant provided enhanced growth conditions to conduct in vitro studies representative of the human intestinal microbiota .
1. Introduction 2. Material and Methods 3. Results and Discussion 4. Conclusions
PMC5227320
obesity is a risk factor able to trigger several inflammatory alterations favorable to chronic low - grade inflammation , through an imbalance between pro- and anti - inflammatory cytokine production . furthermore , obesity is positively associated with the installation and development of diseases such as type 2 diabetes , cardiovascular diseases and metabolic syndrome in populations of different ages and genders ( weghuber et al . , 2014 ; wen et al . , 2013 ) . the chronic low - grade inflammation observed in this condition is associated with modifications in plasma cytokine concentrations , such as high concentrations of plasminogen activator inhibitor-1 ( lira et al . , 2010 ) , c - reactive protein , interleukin ( il ) 6 and tumor necrosis factor - alpha ( tnf-α ) , which are considered pro - inflammatory biomarkers . in addition , this inflammatory context reduces the concentrations of anti - inflammatory cytokines such as adiponectin and il-10 . obese individuals have low concentrations of adiponectin , and this is associated with the metabolic syndrome ( renaldi et al . , 2009 ) , type 2 diabetes ( hotta et al . , 2000 ) , dyslipidemia ( lara - castro et al . , 2006 ) and coronary artery diseases ( vilarrasa et al . ) . several studies have documented the effectiveness of exercise training in the prevention and treatment of metabolic disorders , as well as its anti - inflammatory effect ( de meirelles et al . , 2014 ; karstoft and pedersen , 2016 ; lira et al . ) . high intensity interval training ( hiit ) has been proposed as a time - efficient ( low volume ) method to improve aspects related to body composition and disease ( madsen et al . , 2015b ; sijie et al . , 2012 ; talanian et al . , 2007 ) . madsen et al . ( 2015a ) showed only a modest improvement in anti - inflammatory cytokines after eight weeks of hiit in subjects at risk for metabolic syndrome , while inflammatory cytokines did not change . on the other hand , others have observed that hiit is more effective than moderate continuous training for treating excessive body weight ( sijie et al . , 2012 ) . these differences in results may be due to differences in training intensity and volume , and in training length ( 8 weeks ) ( madsen et al . , 2015a ) . thus , whether hiit lasting longer than 8 weeks , with a lower volume than a typical training session , can improve the metabolic and inflammatory profile of subjects with overweight and obesity is unknown . therefore , the purpose of the present study was ( a ) to analyze the effects of 16 weeks of different models of exercise training ( high - intensity interval vs. moderate - intensity continuous training ) on the inflammatory profile ; and ( b ) to analyze the effects of 16 weeks of two hiit protocols ( 1-bout , 1 x 4 min vs. hiit , 4 x 4 min ) on the metabolic and inflammatory profile of subjects with overweight and obesity . subjects with overweight and obesity ( body mass index >= 25 kg / m2 ) were invited to participate in the study through dissemination of the project in social networks , printed posters , email lists of students and employees at the university of sao paulo - campus of ribeirao preto , the patient list of the hospital of ribeirao preto medical school , university of sao paulo , and radio and television stations . in this study , subjects of both genders ( women and men ) , aged 18 years or more , were randomized and stratified into three groups : hiit , 4 x 4 min , 3 times a week ( hiit ) ; one - bout training , 1 x 4 min , 3 times a week ( 1-bout ) ; and continuous training ( cont ) ( 30 min , 5 times a week ) .
the criteria for exclusion were : unstable angina , recent heart attack ( < 4 weeks ) , decompensated heart failure , severe valvular disease , uncontrolled hypertension , renal failure , orthopedic / neurological limitations , cardiomyopathy , surgeries planned during the study period , reluctance to sign the informed consent form , participation in another study , and alcohol or drug abuse . the hiit and 1-bout programs were carried out by walking / running 3 times a week on a treadmill . the warm - up consisted of 10 min at 70% of maximum heart rate ( hrmax ) ( 13 on the borg scale of 6 to 20 points ) . afterwards , subjects performed four ( hiit ) or a single ( 1-bout ) bout of four minutes at 90% of hrmax ( 16 on the borg scale ) . in the hiit sessions the bouts were interspersed with 3 min of active recovery ( 70% of hrmax ) . finally , in both groups , there was a 5 min cool - down . the cont program corresponded to 70% of hrmax ( 13 on the borg scale ) for 30 min , 5 times a week . the duration of all three training programs was 16 weeks ; absences were recorded and heart rate was monitored continuously during the supervised sessions to ensure that the subjects exercised at the correct intensity ( tjonna et al . , 2008 ) . blood samples ( 10 ml ) were collected by venipuncture from the antecubital fossa and centrifuged at 3,000 rpm for 8 min at 4°c to separate serum and plasma , which were then stored in aliquots at -80°c for later analysis . cytokine analyses were performed by enzyme - linked immunosorbent assay ( elisa ) using a spectramax plus 384 absorbance microplate reader ( san diego , ca , usa ) with a 450 nm filter for absorbance readings . analysis of tnf-α , il-6 and adiponectin concentrations was performed using reagent kits from r&d systems ( r&d systems inc . , minneapolis , mn , usa ) with assay ranges of 1,000 to 15.6 pg / ml , 300 to 4.7 pg / ml , and 4,000 to 62.5 pg / ml , respectively . the intra - assay variability of the tnf-α kit was 4.2 to 5.2% , of the il-6 kit 1.6 to 4.2% , and of the adiponectin kit 0.6 to 6.0% . for the analysis of circulating il-10 concentrations , reagent kits from ebioscience ( affymetrix inc . , san diego , ca , usa ) were used , with an assay range of 300 to 2.3 pg / ml and intra - assay variability of 0.3 to 1.0% . analysis of insulin concentrations was performed using reagent kits from accubind ( monobind inc . , lake forest , ca , usa ) with an assay range of 0 to 300 µiu / ml and intra - assay variability of 4.3 to 8.3% . the normality of the data distribution was assessed with the shapiro - wilk test . to evaluate the effect of training , the student t - test was applied between pre and post 16 weeks in each group . to verify possible differences at baseline ( pretraining ) , as well as in the magnitude of the variations ( Δ ) after training , among the three groups , we used one - way analysis of variance . a significance level of 5% was adopted . for all analyses , we used spss ver . 13.0 ( spss inc . , chicago , il , usa ) .
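the statistical workflow described above ( a paired comparison within each group , followed by a between - group comparison of the deltas ) can be reproduced with open - source tools ; the sketch below is an equivalent analysis in python with scipy on made - up cytokine values , not the spss procedure or the study 's data .

```python
import numpy as np
from scipy import stats

# hypothetical pre/post IL-6 values (pg/ml) for the three training groups
groups = {
    "CONT":   (np.array([2.1, 3.4, 2.8, 4.0, 3.1]), np.array([2.0, 3.1, 2.9, 3.6, 2.8])),
    "1-bout": (np.array([2.5, 3.0, 2.2, 3.8, 2.9]), np.array([2.4, 2.8, 2.3, 3.5, 2.7])),
    "HIIT":   (np.array([2.9, 3.6, 2.4, 4.2, 3.3]), np.array([2.2, 2.9, 2.0, 3.4, 2.6])),
}

deltas = {}
for name, (pre, post) in groups.items():
    t, p = stats.ttest_rel(pre, post)      # paired t-test, pre vs. post 16 weeks
    deltas[name] = post - pre              # magnitude of the change (delta)
    print(f"{name}: paired t = {t:.2f}, p = {p:.3f}")

# one-way ANOVA comparing the deltas among the three training groups
f_stat, p_anova = stats.f_oneway(*deltas.values())
print(f"ANOVA on deltas: F = {f_stat:.2f}, p = {p_anova:.3f}")
```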
the modifications of total body mass and body mass index before and after training are presented in table 2 . none of the training programs promoted significant changes in the body composition of the participants , although the hiit group tended to reduce total body weight and bmi ( p = 0.059 and p = 0.060 , respectively ) .
in the hiit group , il-6 concentrations decreased ( p = 0.035 ) , in contrast to tnf-α , which increased ( p = 0.001 ) with training , whereas in the cont group tnf-α decreased ( p = 0.037 ) . the plasma concentrations of adiponectin decreased significantly in all three groups ( p = 0.009 , p = 0.022 , and p = 0.0002 in cont , 1-bout , and hiit , respectively ) , but there was no difference in the magnitude ( Δ% ) of these changes between groups ( Δ% cont = 43.7 ± 25.6 ; Δ% 1-bout = 28.9 ± 74.9 ; Δ% hiit = 66.3 ± 15.5 ) . with the exception of adiponectin , il-10 , insulin , and the homeostasis model assessment of insulin resistance ( homa - ir ) did not change across the three training models . the aim of the present study was to evaluate the effect of 16 weeks of different models of exercise training on the metabolic and inflammatory profile of subjects with overweight and obesity . the main findings were that 16 weeks of training on a treadmill ( a ) reduced il-6 in the hiit ( 4 x 4 min ) group , ( b ) increased tnf-α in the hiit group and reduced it in cont , and ( c ) reduced adiponectin levels in all groups . the elevated il-6 production by skeletal muscle during training sessions performs important functions of auxiliary energy supply , stimulating lipolysis in situations of a high adenosine monophosphate / adenosine triphosphate ratio and low glycogen stores ( pedersen , 2012 ) . it also acts in an anti - inflammatory manner , promoting il-10 production , which in turn inhibits the action of nuclear factor kappa b and therefore the synthesis of pro - inflammatory cytokines such as tnf-α and interleukin-1 ( galic et al . , 2010 ) . in a recent study , cabral - santos et al . ( 2015 ) showed that il-6 concentrations are significantly elevated after a high intensity intermittent stimulus ( 5 km performed with a 1:1 work : rest ratio at 100% of vo2peak [ peak oxygen consumption ] ) when compared to continuous moderate intensity exercise . this may be due to an increased demand for glucose , as evidenced by the high blood lactate concentrations usually found in these training models . another study ( 2015 ) reported that , in a cycle ergometer protocol ( 4 x 4 min at 95% of hrmax ) similar to the hiit of the present study , blood lactate concentrations and the respiratory exchange ratio were significantly higher than in lower intensity continuous exercise . these data indicate that the higher metabolic stress generated by hiit favors greater il-6 production . unlike during exercise , at rest approximately 30% of circulating il-6 comes from the adipose tissue , and of that total two - thirds derives from infiltrated macrophages ( fried et al . , 1998 ; mohamed - ali et al . ) . it is important to consider that il-6 is also considered to be the main inducer of the expression of several proteins that play an important role in inflammatory status . high plasma concentrations of il-6 , together with c - reactive protein , fibrinogen , and amyloid a , are related to chronic low - grade inflammation and involved in the development of some diseases , such as diabetes , atherosclerosis , and rheumatoid arthritis ( reihmane and dela , 2014 ) . thus , the present study showed that 16 weeks of hiit training reduced il-6 concentrations in overweight / obese subjects . lifestyle habits such as excessive energy intake and physical inactivity promote adipocyte hypertrophy , which decreases the blood and oxygen supply to the adipose tissue . this chronic and progressive reduction stimulates the recruitment of monocytes and the secretion of pro - inflammatory cytokines , such as il-6 and tnf-α ( gleeson et al . , 2011 ) . however , other immune cells , such as lymphocytes and natural killer cells , may also produce these cytokines .
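the homa - ir index mentioned above is derived from fasting glucose and insulin with the standard homeostasis model formula ( glucose in mmol / l multiplied by insulin in µiu / ml , divided by 22.5 ) ; the values below are hypothetical and are only meant to show the calculation .

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uiu_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance (standard formula)."""
    return (fasting_glucose_mmol_l * fasting_insulin_uiu_ml) / 22.5

# hypothetical fasting values for one participant before and after training
print(f"pre : {homa_ir(5.4, 12.0):.2f}")   # ~2.88
print(f"post: {homa_ir(5.2, 11.0):.2f}")   # ~2.54
```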
high concentrations of tnf-α are usually associated with cell death , cardiovascular diseases , inflammation and acute phase proteins ( golbidi and laher , 2014 ) . this cytokine also activates certain intracellular kinases , which inhibit the signaling of insulin , impairing glucose uptake ( diehl , 2004 ) . the study by leggate et al . ( 2012 ) with overweight / obese men showed that 2 weeks of hiit did not alter the blood concentrations of il-6 , il-10 , and tnf-α . on the other hand , adipose tissue concentrations of monocyte chemoattractant protein-1 , il-10 , and tnf-α were undetectable after training , indicating beneficial adaptations in the resting inflammatory profile . robinson et al . ( 2015 ) evaluated the effects of hiit and continuous training in the same population ( predominantly obese women ) and also found an improvement in the inflammatory profile , indicated by the decrease of toll - like receptors type 4 ( lymphocytes and monocytes ) and type 2 ( lymphocytes ) , membrane receptors known to be related to the inflammatory response , even without changes in blood cytokines . in light of this knowledge , the reduction of circulating tnf-α after 16 weeks of cont training can be beneficial , but we can not claim that the increase in circulating tnf-α post hiit is harmful , because it is known that blood concentrations can not accurately reflect the intracellular and tissue reality . in addition , these changes did not alter insulin concentrations or homa - ir values , suggesting that insulin sensitivity was not affected by the different concentrations of tnf-α . another issue to consider is that the action of tnf-α , as well as its efficiency , is dependent on its receptors . in the cell membrane there are two types of receptors ( tnfr1 and tnfr2 ) , and the pro - inflammatory characteristics generally associated with this protein ( mainly in adipose tissue ) occur through its binding to tnfr1 . these receptors may also be cleaved and become soluble in plasma ( stnfr ) and thus act in an anti - inflammatory manner , binding to tnf-α and preventing its connection to the cell membrane and the subsequent signal transduction ( cawthorn and sethi , 2008 ; gatanaga et al . , 1990 ) . so , if the post - hiit increases in tnf-α are accompanied by an increase in stnfr , the functions of this protein can be suppressed . with respect to adiponectin , studies showed that it is reduced in situations of obesity and insulin resistance when compared to healthy individuals and animals ( hu et al . , 1996 ; weyer et al . , 2001 ) . weight loss and exercise are common treatments for improvement in insulin sensitivity , but hulver et al . ( 2002 ) showed that only weight loss is effective in increasing adiponectin levels , suggesting that exercise training improves insulin sensitivity by mechanisms independent of weight loss and adiponectin action . another study ( 2008 ) showed that an increase in adiponectin levels requires a reduction of at least 10% in total body mass . a further study ( 2010 ) found conflicting results , showing an increase of adiponectin only in the groups that performed a restrictive diet or diet coupled with aerobic exercise for 12 weeks . in this study there was also a third group that performed only exercise without calorie restriction , with a smaller reduction in fat mass and no significant change in adiponectin levels . a review by simpson and singh ( 2008 ) showed that only three of eight studies with exercise increased adiponectin levels .
despite the benefits of this adipokine in increasing insulin sensitivity , it is important to note that there are different isoforms of adiponectin whose functions are not entirely clear , and exercise seems to regulate each isoform differently .
obesity is a risk factor able to trigger several inflammatory alterations and an imbalance between pro- and anti - inflammatory cytokine production . physical exercise is an important strategy for reduction of the established inflammatory process . the aim of this study was to evaluate the effect of 16 weeks of three exercise training programs on the inflammatory profile and insulin resistance in overweight / obesity . thirty - two men and women ( 46.4 ± 10.1 years ; 162.0 ± 9.1 cm ; 82.0 ± 13.6 kg ) were divided into three groups for training on a treadmill : continuous at 70% maximum heart rate ( hrmax ) 5 times a week ( cont ) ; 1 x 4 min ( 1-bout ) and 4 x 4 min ( high intensity interval training , hiit ) at 90% hrmax 3 times a week . interleukin ( il ) 6 and il-10 , tumor necrosis factor - alpha ( tnf-α ) , insulin and adiponectin levels were analyzed by enzyme - linked immunosorbent assay , and the homeostasis model assessment of insulin resistance was calculated . after 16 weeks of training , blood concentrations of il-6 decreased in the hiit group ( p = 0.035 ) , tnf-α decreased in cont ( p = 0.037 ) and increased in hiit ( p = 0.001 ) , and adiponectin decreased in the three training models . there was a trend towards decreased body weight and body mass index ( bmi ) after hiit only ( p = 0.059 and p = 0.060 , respectively ) . despite the decrease of adiponectin and the increase of tnf-α in the hiit group , insulin sensitivity showed a trend for improvement ( p = 0.08 ) . the hiit program decreased il-6 at rest and , although the change was not significant , was the only program that tended to decrease total body weight and bmi . taken together , our data suggest that both the hiit and the cont exercise training programs promote changes in the inflammatory profile in overweight / obesity , but a dissimilar response is seen in tnf-α levels .
INTRODUCTION MATERIALS AND METHODS Participants Training program Immunoassays for cytokines Statistical analysis RESULTS DISCUSSION
PMC4161366
as a service to our authors and readers , this journal provides supporting information supplied by the authors . such materials are peer reviewed and may be re - organized for online delivery , but are not copy - edited or typeset . technical support issues arising from supporting information ( other than missing files ) should be addressed to the authors
we describe a new platform to identify structure - switching dna beacon aptamers , which detect small molecules in a specific manner . by clonally amplifying a dna library designed to fluoresce in response to binding events onto microbeads , aptamer beacons can be selected by stringent fluorescence - assisted sorting . we validated this method by isolating known and novel anti - steroid aptamers from two separate dna libraries that were structurally enriched with three - way junctions . importantly , aptamers were retrieved in only a few ( three ) rounds of selection by this approach and did not require further optimization , significantly streamlining the process of beacon development .
Supporting Information
PMC5137258
vascular smooth muscle contraction is activated by an increase in cytosolic free ca2+ concentration ( [ ca2+]i ) as a result of ca2+ entry from the extracellular space and/or ca2+ release from intracellular stores , primarily the sarcoplasmic reticulum ( 1 ) . ca2+ diffuses to the contractile machinery where it binds to calmodulin ( cam ) ( 2 ) . the ( ca2+)4-cam complex induces a conformational change in myosin light chain kinase ( mlck ) , which involves removal of the autoinhibitory domain from the active site , thereby converting the kinase from an inactive to an active state ( 3 ) . mlck is physically bound through its n - terminus to actin filaments and , upon activation , phosphorylates nearby myosin molecules ( 4 ) . myosin ii filaments are composed of hexameric myosin molecules , each consisting of two heavy chains and two pairs of light chains ( 17-kda essential light chains ( lc17 ) and 20-kda regulatory light chains ( lc20 ) ) located in the neck region of the myosin molecule ( fig . 1 ) . figure 1 . smooth muscle myosin ii is a hexameric protein composed of two heavy chains ( 205 kda each ) and two pairs of light chains : the 17-kda essential light chains ( lc17 ) and the 20-kda regulatory light chains ( lc20 ) . the n - termini of the heavy chains make up most of the head domains ( pink ) and the c - termini of the heavy chains account for the complete coiled - coil rod domain ( which is responsible for assembly of myosin filaments ) and the terminal unstructured tail ( black ) . each globular head includes an actin - binding site ( blue ) and an atp - binding site ( green ) . the light chains , lc17 ( yellow ) and lc20 ( red ) , are associated with the neck region . walsh mp . vascular smooth muscle myosin light chain diphosphorylation : mechanism , function and pathological implications . activated mlck phosphorylates ser19 of lc20 and this simple post - translational modification induces a conformational change that is transmitted to the myosin heads , resulting in actin interaction and a marked increase in the actin - activated mgatpase activity of myosin ( 5 ) . the energy derived from the hydrolysis of atp then drives cross - bridge cycling and the development of force or contraction of the muscle . relaxation follows the removal of ca2+ from the cytosol , primarily by ca2+ - atpases , which pump ca2+ out of the cell and back into the sarcoplasmic reticulum ( 6 ) . mlck is inactivated as ca2+ dissociates from cam and the autoinhibitory domain of mlck blocks the active site . phosphorylated myosin is then dephosphorylated by myosin light chain phosphatase ( mlcp ) , a type 1 protein serine / threonine phosphatase ( 7 ) .
mlcp is a trimeric phosphatase with a 38-kda catalytic subunit ( pp1c ) , a 130-kda regulatory subunit ( mypt1 ) and a 21-kda subunit of uncertain function . mlcp is inhibited by both protein kinase c ( pkc ) and rhoa / rho - associated kinase ( rok ) pathways . phosphorylation of the cytosolic phosphatase inhibitory protein of 17-kda ( cpi-17 ) at thr38 by pkc converts cpi-17 to a potent inhibitor of mlcp , which is achieved by direct interaction of phosphorylated cpi-17 with pp1c ( 8 ) . in addition , and apparently of greater physiological significance , rok phosphorylates mypt1 to induce inhibition of mlcp activity . rok phosphorylates mypt1 at thr697 and thr855 ( rat numbering ) in vitro and both phosphorylation events result in phosphatase inhibition ( 10 ) . however , it appears that rok predominantly phosphorylates thr855 in intact tissues ( 11 ) . activation of pkc and rok pathways , therefore , can result in decreased mlcp activity , leading to an increase in the ratio of mlck : mlcp activity and an increase in force . since rok and novel pkc isoforms are ca2+ - independent , this results in ca2+ sensitization of contraction , i.e. an increase in force without an increase in [ ca2+]i . the possibility that lc20 may also be phosphorylated at thr18 was originally demonstrated in vitro when it was shown that high concentrations of mlck phosphorylate thr18 in addition to ser19 ( 12 ) . it is clear , however , that mlck does not phosphorylate thr18 of lc20 in intact smooth muscle tissues , and most contractile stimuli are associated with lc20 phosphorylation exclusively at ser19 , which can be attributed to mlck ( 13 ) . interest in thr18 phosphorylation was revived , however , when it was shown that treatment of smooth muscle tissues with membrane - permeant phosphatase inhibitors ( such as calyculin - a ) induced ca2+ - independent contractions that correlated with diphosphorylation of lc20 at thr18 and ser19 ( 14 ) . two ca2+ - independent kinases ( integrin - linked kinase ( ilk ) and zipper - interacting protein kinase ( zipk ) ) were shown to be the most likely kinases responsible for lc20 diphosphorylation ( 15,16,17 ) . phosphorylation of lc20 at thr18 and ser19 has been observed in several smooth muscle tissues , e.g. bovine tracheal smooth muscle in response to neural stimulation or carbachol ( 18 , 19 ) , rabbit thoracic aorta treated with prostaglandin - f2α ( 20 , 21 ) and renal afferent arterioles stimulated with endothelin-1 ( 22 ) . lc20 diphosphorylation has frequently been associated with pathophysiological conditions involving hypercontractility , including cerebral vasospasm ( 23 ) , coronary ( 24 , 25 ) and femoral arterial vasospasm ( 26 ) , intimal hyperplasia ( 27 ) and hypertension ( 28 ) . early studies comparing the properties of smooth muscle myosin phosphorylated at ser19 ( by low concentrations of mlck ) and at both thr18 and ser19 ( by high concentrations of mlck ) indicated that the additional phosphorylation at thr18 increased the actomyosin mgatpase activity some two- to three - fold ( 12 , 29,30,31 ) . however , the velocity of movement of myosin - coated beads along actin cables ( 32 ) or of actin filaments over immobilized myosin in the in vitro motility assay ( 31 ) was similar whether lc20 was phosphorylated at ser19 alone or at both thr18 and ser19 . we recently addressed the hypothesis that phosphorylation at thr18 may enhance the level of steady - state force achieved with ser19 phosphorylation ( 13 ) .
to test this hypothesis the objective was to achieve stoichiometric phosphorylation of lc20 at ser19 , then elicit phosphorylation at thr18 and observe whether or not there was a further increase in steady - state force . the challenge was to achieve stoichiometric phosphorylation at ser19 , since this can not be done in intact tissue due to the competing actions of mlck and mlcp , which result in stable phosphorylation of lc20 at steady - state of approximately 0.5 mol pi / mol lc20 . furthermore , this problem can not be overcome by inhibition of phosphatase activity , since phosphatase inhibitors such as calyculin - a and okadaic acid unmask the basal activities of ilk and zipk , which phosphorylate both thr18 and ser19 of lc20 . we took advantage of the fact that protein kinases generally , including mlck , can utilize adenosine 5'-o-(3-thiotriphosphate ) ( atpγs ) as a substrate to thiophosphorylate their protein substrates ( 33 ) , but the thiophosphorylated protein ( lc20 in this case ) is a very poor phosphatase substrate ( 34 ) . it was necessary to use triton - skinned tissue for this experiment since the plasma membrane is impermeant to atpγs . figure 2 . stoichiometric thiophosphorylation of lc20 at ser19 in triton - skinned rat caudal arterial smooth muscle . ( a ) the viability of triton - skinned rat caudal arterial smooth muscle strips was initially verified by transfer from relaxing solution ( pca 9 ) to high [ ca2+ ] ( pca 4.5 ) solution containing atp and an atp regenerating system ( rs ) , which induced a contractile response . tissues were then relaxed by 3 washes in pca 9 solution containing atp and rs . atp was then removed by 6 washes in pca 9 solution without atp or rs . tissues were then incubated in pca 4.5 solution containing atpγs ( 4 mm ) in the absence of atp and rs . excess atpγs was then removed by washing twice with pca 9 solution without atp or rs . once steady - state force was established , microcystin ( 1 µm ) was added in pca 9 solution containing atp and rs . tissues were harvested at the indicated times during this protocol for phos - tag sds - page and western blotting with anti - pan lc20 ( b ) , as shown by the arrows in ( a ) ( the numbers correspond to the lanes in ( b ) ) : ( i ) lanes 1 and 8 , tissue incubated at pca 9 showing exclusively unphosphorylated lc20 ; ( ii ) lanes 2 and 3 , pca 4.5 + atpγs in the absence of atp and rs ; ( iii ) lane 4 , pca 9 in the absence of atp and rs following thiophosphorylation ; ( iv ) lane 5 , at the plateau of force development following transfer to pca 9 solution containing atp and rs ; ( v ) lanes 6 and 7 , following treatment with microcystin at pca 9 in the presence of atp and rs . an additional control is included in lane 9 : triton - skinned tissue treated with microcystin at pca 9 for 60 min to identify unphosphorylated ( 0p ) , monophosphorylated ( 1p ) , and diphosphorylated ( 2p ) lc20 bands . thiophosphorylated forms of lc20 are indicated as follows : 1sp , monothiophosphorylated lc20 ; 2sp , dithiophosphorylated lc20 ; 1sp1p , lc20 thiophosphorylated at one site and phosphorylated at the other . myosin regulatory light chain diphosphorylation slows relaxation of arterial smooth muscle . 2012 ; 287(29 ) : 24064 - 76 . the american society for biochemistry and molecular biology . fig . 2 shows the experimental protocol and corresponding force measurements ( fig .
2a ) and analysis of lc20 ( thio)phosphorylation by phos - tag sds - page ( see below for a description of this technique , which enables the separation of unphosphorylated and phosphorylated forms of lc20 ) ( fig . 2b ) . at resting tension ( pca 9 ) , lc20 is unphosphorylated ( lanes 1 and 8 , fig . 2b ) . a control contraction was elicited by increasing [ ca2+ ] to pca 4.5 in the presence of atp , and relaxation followed removal of ca2+ ( fig . 2a ) . when basal force was restored , the tissue was washed several times in pca 9 solution without atp to remove all atp , following which atpγs was added at pca 4.5 to elicit close - to - stoichiometric lc20 thiophosphorylation ( fig . 2b , lanes 2 and 3 ) . contraction did not occur under these conditions since atpγs is not hydrolysed by the actin - activated myosin mgatpase and , therefore , atpγs can not support cross - bridge cycling ( 35,36,37 ) . following lc20 thiophosphorylation , atpγs was washed out by several washes with pca 9 solution ( fig . 2a ) . atp was then added at pca 9 , whereupon the tissue contracted rapidly due to the fact that lc20 was previously thiophosphorylated . the level of force achieved during this treatment was comparable to that elicited initially by addition of atp at pca 4.5 ( fig . 2a ) . finally , the phosphatase inhibitor microcystin was added at pca 9 in the presence of atp , whereupon ser19-thiophosphorylated lc20 was phosphorylated at thr18 ( 1sp1p in lanes 6 and 7 of fig . 2b ) . small amounts of monophosphorylated ( 1p ) and diphosphorylated ( 2p ) lc20 were also detected due to the presence ( fig . 2b , lane 5 ) of a small amount of unphosphorylated lc20 prior to the addition of microcystin . fig . 2b , lane 9 shows a control triton - skinned tissue treated with microcystin and atp at pca 9 to indicate the migration of unphosphorylated ( 0p ) , monophosphorylated ( 1p ) and diphosphorylated ( 2p ) lc20 , as previously established ( 13 ) . the key finding from this experiment was that phosphorylation of lc20 at thr18 on top of close - to - stoichiometric thiophosphorylation of lc20 at ser19 did not elicit an increase in force .
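for orientation , the pca values used throughout this protocol translate into free ca2+ concentrations through the standard definition ( a general relationship , not specific to the source article ) :

$$\mathrm{pCa} = -\log_{10}\left[\mathrm{Ca^{2+}}\right], \qquad \left[\mathrm{Ca^{2+}}\right] = 10^{-\mathrm{pCa}}$$

so pca 9 corresponds to 10^-9 m ( 1 nm , effectively relaxing conditions ) and pca 4.5 corresponds to about 3.2 x 10^-5 m ( roughly 32 µm , a maximally activating concentration ) .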
we next addressed the hypothesis that phosphorylation of lc20 at both thr18 and ser19 reduces the rates of dephosphorylation and relaxation compared to phosphorylation at ser19 alone . to test this hypothesis , lc20 was phosphorylated at ser19 only or at both thr18 and ser19 to comparable stoichiometry . this was achieved by treatment of triton - skinned rat caudal arterial smooth muscle strips with atp at pca 4.5 ( for monophosphorylation at ser19 ) or with atp and the phosphatase inhibitor okadaic acid at pca 9 ( for diphosphorylation at thr18 and ser19 ) . the time courses of dephosphorylation and relaxation were then followed to determine if there was a reduction in the rates of dephosphorylation and relaxation when lc20 was diphosphorylated at thr18 and ser19 compared to monophosphorylated at ser19 ( fig . 3 ) . figure 3 . comparison of the time courses of relaxation and lc20 dephosphorylation in triton - skinned rat caudal arterial smooth muscle following contraction with ca2+ or okadaic acid in the absence of ca2+ . triton - skinned tissues that had been contracted with ca2+ ( open circles ) or okadaic acid ( 20 µm ) at pca 9 ( closed circles ) were transferred to pca 9 solution and the time courses of dephosphorylation ( a ) and relaxation ( b ) were followed . tissues were harvested at 10 , 20 , 30 , 40 , 50 , 75 , and 100% relaxation and lc20 phosphorylation levels were quantified by phos - tag sds - page and western blotting with anti - pan lc20 . the american society for biochemistry and molecular biology . as shown in fig . 3 , both treatments induced lc20 phosphorylation to approximately 0.5 mol pi / mol lc20 . at pca 4.5 , this was exclusively ser19 phosphorylation , whereas in response to okadaic acid at pca 9 , phosphorylation occurred at both thr18 and ser19 . fig . 3 clearly shows that both the dephosphorylation ( a ) and relaxation ( b ) rates were significantly slower when lc20 was diphosphorylated at thr18 and ser19 compared to monophosphorylated at ser19 . this finding suggests a functional effect of lc20 diphosphorylation and raises the possibility that pathological situations of hypercontractility may result from impaired relaxation due to this diphosphorylation event . mlcp has been shown to dephosphorylate thr18 in addition to ser19 ( 38 ) , suggesting that the same phosphatase dephosphorylates mono- and diphosphorylated lc20 in situ .
figure 4 . schematic diagram illustrating the anatomical relationship of the afferent arteriole , glomerulus , and the efferent arteriole . the afferent arteriole controls the glomerular inflow resistance . this vessel must constrict rapidly in response to fluctuations in blood pressure to prevent pressure elevations from being transmitted to the downstream glomerular capillaries . when renal perfusion pressure is compromised , a sustained increase in efferent arteriolar tone maintains adequate filtration pressure within the upstream glomerular capillaries , thereby preserving renal function . reproduced from ( 52 ) with permission . fig . 4 provides a schematic representation of the renal microcirculation , which consists of the afferent arteriole conveying blood from the interlobular artery to the glomerular capillaries , where it is filtered prior to returning to the systemic circulation via the efferent arteriole . the afferent arteriole plays a key role in regulating glomerular inflow resistance and must be able to respond very rapidly to sudden changes in systemic blood pressure in order to protect the fragile glomeruli from pressure - induced damage ( 39 ) . angiotensin ii ( ang ii ) is a renal - selective vasoconstrictor that contributes to renal vascular resistance under normal physiological conditions and thereby plays an important role in modulating renal hemodynamics ( 40 ) . endothelin-1 ( et-1 ) , on the other hand , is a renal vasoconstrictor that does not contribute to renal vascular resistance under normal physiological conditions , but is implicated in abnormal renal vasoconstriction and reduced glomerular filtration in pathological states such as diabetes and chronic kidney disease ( 41,42,43,44,45,46,47 ) . based on our studies of the effects of lc20 diphosphorylation described above , we developed the hypothesis that pathological situations of hypercontractility may result from impaired relaxation due to diphosphorylation of lc20 at thr18 and ser19 . we have tested this hypothesis by studying the effects of two contractile stimuli , one physiological ( ang ii ) and one pathophysiological ( et-1 ) , on renal afferent arteriolar constriction and lc20 phosphorylation .
we compared the patterns of phosphorylation of lc20 in the afferent arteriole to determine whether or not lc20 diphosphorylation is associated with the pathophysiological stimulus et-1 and not with the physiological stimulus ang ii . this proved to be a challenging proposition , due largely to the very small size of the afferent arteriole : a single afferent arteriole has a diameter of 15 - 20 µm , contains < 100 smooth muscle cells and is approximately 1/10 the size of a human eyelash . this necessitated developing a technique to isolate individual afferent arterioles and enhance the sensitivity of detection and quantification of lc20 phosphorylation . perfusion of the renal artery with molten agarose followed by cooling to solidify the agarose enabled dissection and recovery of intact afferent arterioles ( 48 ) . phos - tag sds - page ( 49 ) proved to be a suitable technique for rapid and efficient separation of unphosphorylated and phosphorylated forms of lc20 ( 50 ) . in this technique , tissue proteins are separated in laemmli sds gels in which a phosphate - binding ligand ( phos - tag reagent ) is immobilized in the running gel . in the presence of mn2+ ions , the migration of proteins containing phosphorylated serine , threonine and/or tyrosine residues is retarded due to binding to the ligand : the higher the stoichiometry of phosphorylation , the slower the migration rate through the gel . the protein of interest ( lc20 in this instance ) is then detected by western blotting with an antibody that recognizes all forms of the protein , phosphorylated and unphosphorylated . the effectiveness of the separation of the various lc20 species by phos - tag sds - page can be clearly seen in figs . 2 and 3 . quantification of the different lc20 species by densitometric scanning enables determination of the stoichiometry of phosphorylation , and the individual phosphorylation sites can be identified by using phosphorylation site - specific antibodies in parallel western blotting experiments . we were able to increase the sensitivity of detection of lc20 > 4,000-fold over existing methods ( 50 ) . this was achieved by : ( i ) the use of biotinylated secondary antibodies in conjunction with streptavidin - conjugated horseradish peroxidase , combined with enhanced chemiluminescence detection of lc20 species , ( ii ) fixing the lc20 on the pvdf membrane by treatment with glutaraldehyde , and ( iii ) incorporating can get signal ( toyobo , japan ) into the protocol . utilization of a minimum number of steps in the protocol , with the least possible number of sample transfers , maximized lc20 yield . the limit of detection of lc20 was thereby improved from 200 fmol ( 4 ng ) to 0.05 fmol ( 1 pg ) ; we estimate that a single afferent arteriole contains 2.5 fmol ( 50 pg ) of lc20 . using this approach , we succeeded in quantifying lc20 phosphorylation levels in single isolated afferent arterioles and observed that ang ii induced exclusively monophosphorylation of lc20 whereas et-1 induced diphosphorylation , both in a time- and concentration - dependent manner ( 22 ) . et-1-induced diphosphorylation was confirmed to occur at thr18 and ser19 using phosphorylation site - specific antibodies , and et-1-induced lc20 diphosphorylation was also confirmed by the proximity ligation assay ( 51 ) . furthermore , afferent arteriolar vasodilation ( relaxation ) occurred more slowly following washout of et-1 than of ang ii ( 22 ) .
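the densitometric quantification described above yields a phosphorylation stoichiometry in mol pi / mol lc20 ; a minimal sketch of that arithmetic on hypothetical band intensities ( each mole of the 1p species carries one phosphate and each mole of the 2p species carries two ) :

```python
# hypothetical Phos-tag band intensities from densitometric scanning of one lane
bands = {"0P": 1250.0, "1P": 2400.0, "2P": 310.0}   # un-, mono- and di-phosphorylated LC20

total = sum(bands.values())
stoichiometry = (bands["1P"] + 2 * bands["2P"]) / total   # mol Pi / mol LC20
mono_fraction = bands["1P"] / total
di_fraction = bands["2P"] / total

print(f"stoichiometry = {stoichiometry:.2f} mol Pi/mol LC20")
print(f"monophosphorylated = {mono_fraction:.1%}, diphosphorylated = {di_fraction:.1%}")
```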
these findings are , therefore , consistent with the hypothesis that pathophysiological signals such as et-1 that are associated with prolonged vasoconstrictor responses involve lc20 diphosphorylation , whereas physiological signals such as ang ii induce lc20 phosphorylation exclusively at ser19 . the additional phosphorylation at thr18 induced by et-1 is , therefore , proposed to account , at least in part , for the sustained contractile response of the afferent arteriole to et-1 compared to ang ii . lc20 is phosphorylated at thr18 and ser19 in a ca - independent manner by ilk and/or zipk , which are associated with the contractile machinery in vascular smooth muscle . this occurs in concert with inhibition of mlcp by rok - catalysed phosphorylation of mypt1 , the regulatory and targeting subunit of the phosphatase . diphosphorylation of lc20 occurs in response to et-1 ( pathological stimulus ) but not ang ii ( physiological stimulus ) in renal afferent arterioles , and diphosphorylation of lc20 is associated with decreased rates of lc20 dephosphorylation and relaxation . ilk and zipk are , therefore , potential therapeutic targets for the treatment of diseases associated with hypercontractility , such as hypertension , cerebral vasospasm following subarachnoid hemorrhage , coronary arterial vasospasm , intimal hyperplasia , acute renal insufficiency and chronic kidney disease .
smooth muscle contraction is activated primarily by phosphorylation at ser19 of the regulatory light chain subunits ( lc20 ) of myosin ii , catalysed by ca2+/calmodulin - dependent myosin light chain kinase . ca2 + -independent contraction can be induced by inhibition of myosin light chain phosphatase , which correlates with diphosphorylation of lc20 at ser19 and thr18 , catalysed by integrin - linked kinase ( ilk ) and zipper - interacting protein kinase ( zipk ) . lc20 diphosphorylation at ser19 and thr18 has been detected in mammalian vascular smooth muscle tissues in response to specific contractile stimuli ( e.g. endothelin-1 stimulation of rat renal afferent arterioles ) and in pathophysiological situations associated with hypercontractility ( e.g. cerebral vasospasm following subarachnoid hemorrhage ) . comparison of the effects of lc20 monophosphorylation at ser19 and diphosphorylation at ser19 and thr18 on contraction and relaxation of triton - skinned rat caudal arterial smooth muscle revealed that phosphorylation at thr18 has no effect on steady - state force induced by ser19 phosphorylation . on the other hand , the rates of dephosphorylation and relaxation are significantly slower following diphosphorylation at thr18 and ser19 compared to monophosphorylation at ser19 . we propose that this diphosphorylation mechanism underlies the prolonged contractile response of particular vascular smooth muscle tissues to specific stimuli , e.g. endothelin-1 stimulation of renal afferent arterioles , and the vasospastic behavior observed in pathological conditions such as cerebral vasospasm following subarachnoid hemorrhage and coronary arterial vasospasm . ilk and zipk may , therefore , be useful therapeutic targets for the treatment of such conditions .
The central role of myosin regulatory light chain phosphorylation in the activation of smooth muscle contraction Ca Myosin regulatory light chain diphosphorylation Evidence of myosin regulatory light chain diphosphorylation in intact smooth muscle tissues The functional effects of diphosphorylation of myosin regulatory light chains The renal microvasculature Conclusions Conflict of interest
PMC4782633
nasal septal deviation is a common nasal deformity . it can be a congenital disorder or a consequence of nasal trauma . deviation of the bony or cartilaginous component of the nasal septum from the midline leads to deviation of the septum as a whole . this results in external nasal deformity , internal nasal obstruction due to nasal airway constriction , or a combination of both [ 1 - 3 ] . presently , septal deviation classification has largely been descriptive , based on nasal septal geometry and relationships between the bony and cartilaginous septa [ 4 - 7 ] . jang et al . presented a simplified classification of nasal deviation and the associated treatment outcome into five types based on the orientation of the bony pyramid and the cartilaginous vault . jin et al . presented a four - category classification of septal deviation based on the morphology , site , severity , and its influence on the external nose . buyukertan et al . reported a morphometric study of nasal septal deviation by separating the nasal septum into 10 segments ; they concluded that the system would constitute a new , objective , simple , and practical classification system . i. baumann and h. baumann argued that the existing nomenclatures of septal deviation dealt exclusively with nasal septum deformation and were rarely used in routine clinical work ; they instead presented a method for the classification of septal deviations based upon the anatomical structures of the nasal septum and common clinical concepts . however , the most clinically observable nasal septal deviation classification system was proposed by rohrich et al . ; therefore , for simplicity , nasal septal deviations will be classified according to that proposed by rohrich et al . in order to improve the clinical outcome of septoplasty , a greater understanding of the etiopathogenesis of nasal septal deviation is necessary . we aim to apply incremental force to a computer - generated septal model using structural modal analysis , which has also been utilized by laura et al . , who previously described a simple method for determining the fundamental mode of a vibrating ulna to approximate its dynamic response . the objective of this study is to identify areas of high stress and septal deformation patterns . clinically , this may assist surgeons in the delineation of key areas for septal realignment and reconstruction . cranial ct scans were obtained from a patient who possessed normal features : a straight nasal septum , normal occlusion and a perceivably symmetric face ( figure 1(a ) ) . this study was performed in accordance with the guidelines of the institutional review board ( irb ) and conforms to the declaration of helsinki . the patient had not previously undergone septoplasty or rhinoplasty , nor been subject to nasal injury . superposition of the ct images to create a three - dimensional ( 3d ) model was conducted with mimics software ( materialise technologies , leuven , belgium ) . an idealized model ( figure 1(b ) ) and a patient - specific finite element model were generated for the study . from the ct scans we chose to base the idealized model on the middle slice ( figure 1(a ) ) and measured the significant features of the nasal septum . we then utilized these measurements to create an idealized model ( figure 1(b ) ) in the finite element analysis software abaqus ( dassault systemes technologies , providence , ri , united states of america ( usa ) ) , where the idealized model was subsequently meshed .
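structural modal analysis of a meshed model of this kind amounts to solving the generalized eigenvalue problem k·phi = omega^2·m·phi for the stiffness and mass matrices of the mesh . the toy two - degree - of - freedom example below illustrates the computation with scipy ; the matrices are arbitrary and are not derived from the septum model .

```python
import numpy as np
from scipy.linalg import eigh

# toy stiffness (N/m) and mass (kg) matrices -- not derived from the septum model
K = np.array([[ 4.0e4, -1.5e4],
              [-1.5e4,  2.5e4]])
M = np.array([[0.02, 0.00],
              [0.00, 0.01]])

# generalized eigenvalue problem K v = w^2 M v; eigh returns eigenvalues in ascending order
eigvals, eigvecs = eigh(K, M)
frequencies_hz = np.sqrt(eigvals) / (2.0 * np.pi)

for i, (f, shape) in enumerate(zip(frequencies_hz, eigvecs.T), start=1):
    print(f"mode {i}: f = {f:.1f} Hz, shape = {shape / np.abs(shape).max()}")
```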
however , to simplify analyses and gain an estimate of nasal deformation , the models were prescribed with a uniform thickness of 2 mm , which is an approximate average septum thickness , as reported previously [ 11 - 13 ] . to ensure mesh accuracy , convergence studies were carried out on the model . to create a more realistic representation of the septum , which incorporated thickness variation , a 3d patient - specific model was created from the same ct scan utilizing mimics software ( materialise technologies , leuven , belgium ) and meshed with hypermesh ( altair hyperworks , troy , mi ) . cartilage exhibits nonhomogeneous , anisotropic , nonlinear , viscoelastic behaviour . for deformations below 20% , however , no significant changes occur within the cartilage , and it is therefore sufficiently accurate to model cartilage as a homogenous , linearly elastic material in our analyses ; a previous study utilized a similar homogenous , linear elastic material property to simulate septal l - strut deformation . to define the linear elastic model of the cartilage , the young 's modulus , e , and poisson 's ratio , ν , are required . however , the tensile and compressive young 's moduli are vastly different due to the structure of cartilage . according to lee et al . , the tensile modulus ranges from 2.62 mpa to 10.6 mpa , the compressive modulus ranges from 0.40 mpa to 0.83 mpa , and the poisson 's ratio ranges from 0.26 to 0.38 . cartilage is approximately 75% water , while the other 25% consists mainly of type - two collagen fibrils and proteoglycan molecules . the density of water is 1000 kg / m3 , while the other components are highly dense structures ; therefore , the density of cartilage was estimated to be 2000 kg / m3 . as the relative displacement within the septum is the main area of concern in this analysis , and since material properties affect the absolute and not the relative displacement of the septum , the average values of the elastic modulus and poisson 's ratio and an estimated value of the density were used . the elastic modulus was assigned a value of 5 mpa , poisson 's ratio was 0.32 , and the density was 2000 kg / m3 . as the bony interfaces with the nasal septum ( the ethmoidal , vomer , hard palate , and nasal bone interfaces , figure 1(a ) ) are much stiffer than the septal cartilage , most of any applied force will be absorbed by the cartilaginous septum , leaving the bony septum uninjured . the nasal bone length overlapping the cartilaginous septum may affect the degree of nasal deformation and normally ranges from 3 to 15 mm . however , to simplify analyses , a candidate that displayed a length within this range , in this case 14 mm , was considered , so that a typical deformation pattern could be observed . in vivo , the nasal tip lies anterior to the anterior septal angle ( asa ) where the lower lateral cartilages ( llcs ) meet , although this may vary . however , due to the small distance between the asa and the nasal tip , and for simplification in this analysis , the asa was assumed to be the nasal tip . according to lee , the nasal tip cartilages may be thought of as a spring and a cantilever , as they exhibit deformation recoil and elasticity . a cantilever is a result of the unequal stability in the tripod formed by the medial crura and paired llcs , and a spring results from the llcs , which produce an upward force that is in the form of stored elastic potential energy .
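the spring idealization invoked above follows the familiar linear - spring relations ( the specific form of the article 's equation ( 1 ) is not reproduced here ) :

$$F = k\,\delta, \qquad U = \tfrac{1}{2}\,k\,\delta^{2}$$

with the stiffness of 20 kn / m quoted in the next paragraph , a 1 n load on the tip corresponds to a displacement of f / k = 0.05 mm .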
the spring - stiffness constant , k , may be defined by ( 1 ) , and a spring - stiffness constant of 20 kn / m is applied in the three orthogonal axes . a free nasal tip was prescribed as a preliminary step ; subsequently , a spring - supported nasal tip boundary condition was applied , in which the spring was connected between the two orange points on the nasal tip ( figure 2 ) . the dorsal and caudal septa were prescribed a free boundary condition . as frontal force to the septum causes damage ranging from simple fracture of the nasal bones to severe flattening of the nasal bones and the septum , two forms of frontal loading were applied ; the force and pressure applied are estimates and are inconsequential to the relative displacements of the septum . as the present intention is to determine the eigenmodes , or the most likely deformed patterns of the septum , only the possible in - plane loading that will affect the resulting eigenmodes was considered . in the case of anteroposterior loading ( figure 2 ) , a pair of forces of 1 n each , in both the vertical and horizontal axes , was applied to the nasal tip to simulate a direct frontal punch at an angle such that the forces on the nasal tip are the most significant . in the case of dorsal and caudal septal in - plane loading ( figure 2 ) , a uniform pressure of 2000 pa was applied to both the dorsal and caudal septa ; this was to simulate a frontal punch at an angle such that both the dorsal and caudal septal components are equally significant . every object , including the septum , has a set of eigenmodes , depending on its structure and composition . in each mode , all parts of the system vibrate with the same distinct frequency , which is referred to as the system 's eigenvalue at that mode . since lower modes have lower frequencies and energies , they are more likely to occur ; hence , only the first 10 modes of the nasal septum were analyzed . abaqus ( dassault systemes technologies , providence , ri ) was used to obtain the eigenmode shapes of the septum under the various loading conditions . a general , static step is created , in which one of the two loading conditions is applied . thereafter , a linear perturbation , frequency step is created , in which the natural frequencies and the corresponding mode shapes are extracted . the patterns of nasal septal deviation were similar to those described by rohrich et al . in our study , the deviation patterns were therefore classified into three groups , each with its specific sites of high stress and dislocation and possible surgical corrective procedures ( table 1 ) . through observation of all deformation patterns , we were also able to identify the intrinsic points of fatigue within the cartilaginous septum : the bony - cartilaginous ( bc ) junction , the anterior nasal spine ( ans ) , the vomer - ethmoidal cartilage junction ( vej ) , and a single crack or a pair of cracks in the quadrangular cartilage that lead to c - shaped and s - shaped nasal deformations , respectively . these points could lead to the septum levering off the vomerine groove and , in the latter two cases , a shortening of the septum ( figure 3 ) . for the idealized model , the slanted ( figure 4(a ) ) , c - shaped ( figure 4(b ) ) , and s - shaped ( figure 5(a ) ) deviation patterns were all observed ( table 2 ) . in some modes , the system vibrates in - plane and therefore lacks a resultant deformation shape ; in such cases , a dash is indicated in table 2 .
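the three - group classification summarized in table 1 , as described in the surrounding text , can be captured schematically as follows ; the mapping simply restates the text and is not an additional result .

```python
# schematic restatement of the classification in Table 1 (types, stress sites, procedures)
SEPTAL_DEVIATION_TYPES = {
    "I":   {"pattern": "septal tilt",
            "high_stress_sites": ["BC junction", "anterior nasal spine (ANS)"],
            "procedures": ["submucous resection", "reposition onto vomerine groove", "reset to ANS"]},
    "II":  {"pattern": "C-shape (anteroposterior or cephalocaudal)",
            "high_stress_sites": ["single central stress line", "vomerine groove", "ANS", "VEJ"],
            "procedures": ["type I procedures", "spreader grafts on both sides"]},
    "III": {"pattern": "S-shape",
            "high_stress_sites": ["two anteroposterior stress lines", "vomerine groove", "ANS"],
            "procedures": ["type I procedures", "longer spreader grafts bracing both deformed sites"]},
}

for dev_type, info in SEPTAL_DEVIATION_TYPES.items():
    print(f"type {dev_type}: {info['pattern']} -> {', '.join(info['procedures'])}")
```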
however , due to the lack of restriction on the nasal tip , it moves relatively freely , which may not represent in vivo conditions . in the following idealized model , the nasal tip is now constrained by a spring . while displaying patterns of deviation similar to those of the free nasal tip model , the spring - supported nasal tip model exhibits decreased displacement due to its prescribed restriction . a patient - specific model was then analysed . the patient - specific model exhibited deformation patterns similar to those of the idealized nasal septal models ( table 2 and figure 5 ) . the nasal septum is of utmost importance in the support of the distal nose and for the maintenance of the bilateral nasal airway . a straight septum exists where there is force equilibrium , which may be disturbed in fracture , resulting in warping of the septal cartilage [ 18 , 26 ] . depending on the sustained trauma , the septum may deform in a myriad of patterns . presently , however , studies have reported that septal deformation patterns may be categorized in a number of broad categories , regardless of the trauma and/or injuries sustained . guyuron et al . , rohrich et al . , and rhee et al . categorized nasal deformities broadly into septal tilt , anteroposterior or cephalocaudal c - shaped , and s- or reverse - s - shaped deformities . unfortunately , these studies have not correlated these deviation patterns with degrees of force . through the correlation of septum deformation patterns with increasing degrees of force , as well as with areas of dislocation and fracture , preoperative planning and septoplasty may be improved . to aid the prompt identification and management of septal fractures , we identified clinically observable nasal septal deviations , the aforementioned high - stress areas that would require stress relief , and the possible dislocation sites ( figure 3 ) . we observed that regardless of force direction , with increasing force , the septum first tilts ( type i ) and then crumples into a c - shape ( type ii ) and finally into an s - shape ( type iii ) . this was observed through the prevalence of the tilted septum in lower modes , with the c - shaped and then the s - shaped patterns appearing in progressively higher modes . therefore , the lower the mode in which the deviation pattern is observed , the smaller the force required to cause this deformation , and consequently , the greater the probability of observing this pattern . in a previously reported series of 93 patients who had undergone primary septoplasty , 40% had a septal tilt , 32% had a c - shaped anteroposterior septum , 4% had a c - shaped cephalocaudal septum , 9% had an s - shaped anteroposterior septum , and 1% had an s - shaped cephalocaudal septum . in type i , when a tilted septum is observed , the highest stress concentration occurs at the bc junction and ans . this suggests that with a low to moderate force , the septum dislocates en masse from the midline vomerine groove ( figure 3 ) and levers off the bc junction to a tilted position . this may be observed on ct and mri scans , and naso - endoscopy , where posterior buckling is frequently observed at the vej . through submucous resection , the septum may be repositioned onto the groove , with prior resection of the cartilage tongue in the nasal floor . the septum may then be reset to the ans ( table 1 ) . with a higher moderate force , a c - shaped deformation is likely due to a central line of stress in the septum , bending it into two pieces .
the line of high stress may run through the anteroposterior ( figure 4(a ) ) or cephalocaudal ( figure 4(b ) ) direction . we propose that with significant loading , intrinsic septal fractures occur , breaking the cartilaginous septum into two and leading to the clinical morphology of a c - shaped nose and shortening of the septum . in addition , the septum will be displaced off its vomerine groove and/or the ans and will likely buckle at the vej ( figure 3 ) . this is clinically significant as it cannot be observed on ct or mri scans due to the invisibility of the cartilaginous septum in such modalities . hence , a clinical observation of a c - shaped septum may be the only indication . in addition to the corrective procedures mentioned previously , spreader grafts may be required on both sides of the septum , to assist in its straightening by providing the necessary nasal support that was relinquished when the septum fractured , thereby allowing the septum time to heal ( table 1 ) . with a force of higher magnitude , an s - shaped deformation may result due to multiple lines of stress , leading to a septal concertina and shortening of the septum into a minimum of three overlapping pieces . this is due to two lines of stress running in the anteroposterior direction ( figure 5 ) . in addition to being shortened , the septum might be displaced from the vomerine groove and ans ( figure 3 ) . as with a c - shaped deformed septum , in addition to the aforementioned corrective procedures , longer spreader grafts will be required to brace both deformed sites to support the septum and allow it to heal ( table 1 ) . therefore , regardless of the deviation pattern , by relieving stress in these specific strips of concavity , in combination with the aforementioned surgical procedures , we propose that a more stable , straight septum may be achieved . despite different loading conditions , the nasal septum deviates in a relatively constant pattern of septal tilt , c- and s - shaped deviations , with insignificant differences between the resultant modal shapes . the free nasal tip and spring - supported nasal tip models responded differently to the loading conditions , specifically in mode three : a septal tilt is observed in the free nasal tip model , while a c - shaped deformation is observed in the latter model . as the c - shaped deviation is noted to occur with higher energy and the septal tilt deviation with lower energy , this finding suggests that the spring of the llcs acts to insulate and constrain the nasal tip and septum against deformation . the protective interrelationship of the llcs to the nasal septum should therefore be preserved during surgery . the prevalence of modal shapes in patient - specific and idealized septal models , subject to frontal point - loading , is almost identical ( table 1 ) . slight deviations , such as those in mode three , are expected , due to the difference in shape between the models . despite the patient - specific model exhibiting greater relative movement than the idealized model , this difference is insignificant as the basic modal shape remains ( figure 5 ) . the similarities observed between these two models are a testament to the accuracy of the idealized model . it is imperative to note that nasal septal deviations are secondary to the bony vault and cartilaginous changes ; only the cartilaginous septum was modelled for the purpose of this study , and future research aims to combine the study of the deformations of the bony and cartilaginous septa .
due to the inherent collagen fibrils and the consequent anisotropy within the cartilaginous septum , we recognize that the prescription of a linearly elastic material model to the nasal septum material properties may not be fully representative of in vivo cartilage . in spite of this , an understanding of the relative displacement that occurs within the different models in different eigen modes remains beneficial in aiding surgeons to correct a deviated nasal septum . no physical model was mechanically tested to validate the computational model in this preliminary study , which means that absolute stresses and relative stress patterns should be considered cautiously . such an experimental validation study would typically make use of strain gauges , as well as infrared thermography and global stiffness measurements [ 29 , 30 ] . the purpose of this study was to gain a greater understanding of septal deformation biomechanics . we found that despite different loading directions , the septum deformed consistently into only three shapes : a tilted position , a c - shaped septum , and an s - shaped septum . the tilted septum is seen with the least force , the c shape with moderate force , and the s shape with high force . this suggests an intrinsic fracture of the septum into an increasing number of overlapping fragments with escalating force . clinically , this is important information that provides insight into predictable patterns of internal septal fractures that need to be realigned and reconstructed to create a straight septum .
background . with the current lack of clinically relevant classification methods of septal deviation , computer - generated models are important , as septal cartilage is indistinguishable on current imaging methods , making preoperative planning difficult . methods . three - dimensional models of the septum were created from a ct scan , and incremental forces were applied . results . regardless of the force direction , with increasing force , the septum first tilts ( type i ) and then crumples into a c shape ( type ii ) and finally into an s shape ( type iii ) . in type i , it is important to address the dislocation in the vomer - ethmoid cartilage junction and vomerine groove , where stress is concentrated . in types ii and iii , there is intrinsic fracture and shortening of the nasal septum , which may be dislocated off the anterior nasal spine . surgery aims to relieve the posterior buckling and dislocation , with realignment of the septum to the ans and possible spreader grafts to buttress the fracture sites . conclusion . by identifying clinically observable septal deviations and the areas of stress concentration and dislocation , a straighter , more stable septum may be achieved .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion 5. Conclusion
PMC4565815
the egg batches of h. nigrescens were collected in niigata prefecture , japan , in april 2012 . developmental stages ( st ) were determined as described by iwasawa and yamashita ( fig . 1 and table 1 ) . [ fig . 1 caption : arrows indicate the level of the line between epaxial muscles and hypaxial muscles . a : st 38 , b : st 50 , c : st 58 , d : st 63a , e : st 66 , f : st 68 . scale bar = 5 mm . ] table 1 ( specimens of hynobius nigrescens used in this study ; developmental stage , habitat , svl* in mm ) : st 38 , aquatic , 7.2 , 7.3 , 7.5 ; st 50 , aquatic , 9.9 , 10.5 , 10.8 ; st 58 , aquatic , 12.9 , 13.5 , 14.1 ; st 63a , aquatic , 16.8 , 17.1 , 17.2 ; st 66 , aquatic , 21.1 , 21.6 , 22.2 ; st 68 , terrestrial , 24.9 , 25.2 , 25.7 ( *snout - vent length ) . the earliest stage used in this study was st 38 , which is gill formation iii , when the gills bud and the balancers elongate . the larvae of st 38 swim in water by lateral undulation of the trunk . the next developmental stage used in this study was st 50 , which is digital differentiation iii , when the balancers disappear and the first and second finger primordia clearly develop . the third developmental stage used was st 58 , which is digital differentiation vi , when the fourth toe is clearly recognized . at this stage , they swim by lateral undulation of the trunk and sometimes hold the bottom with the forelimbs to stabilize their body . the fourth developmental stage used was st 63a , which is full - grown larva i , when the membrane between each toe disappears . the larvae of st 63a crawl on the bottom in addition to swimming by lateral undulation of the trunk . the fifth developmental stage was st 66 , which marks disappearance of fin ii , when the dorsal fin regresses as far back as the hind limbs and small gill pieces remain . the last developmental stage used was st 68 , which is completion of metamorphosis , when the gills and tail fin have completely disappeared and the eyeballs protrude . samples were fixed in a straight body position in 10% formalin and were then transferred to 70% ethanol solution . the following groups of the trunk muscles were examined in this study : dorsal muscles , lateral hypaxial muscles and abdominal muscles . during dissection , specimens were kept wet by moistening with water to avoid drying , because a dried muscle weighs less than it actually does and drying would therefore introduce measurement error . each trunk muscle group was weighed using an electronic balance auw220 ( shimadzu co. , ltd . ) , and the weight ratio of each muscle group against the total weight of all measured trunk muscles was calculated . observation of trunk muscles : the components of the trunk muscles developed and changed morphologically with growth ( fig . 2 ) . [ fig . 2 caption : lateral views of ontogenetic changes of the trunk muscles in hynobius nigrescens . a : st 38 , b : st 50 , c : st 58 , d-1 : st 63a after skinning , d-2 : st 63a after removal of m. obliquus externus , e-1 : st 66 after skinning , e-2 : st 66 after removal of m. obliquus externus , f-1 : st 68 after skinning , f-2 : st 68 after removal of m. obliquus externus . ] at st 38 , they possessed a single thick dorsal muscle and a single thick m. ventralis ( fig . 2a ) . at st 50 , when the first and second finger primordia had developed patently , a thin m. transversus abdominis with fibers extending craniodorsally developed from m. ventralis and became ventrally enlarged ( fig . 2b ) . the dorsal muscles were segmented by myosepta , as also observed at st 38 .
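the weight ratio calculation described in the methods above is straightforward arithmetic ; the sketch below reproduces it for a single hypothetical specimen ( the masses are invented for illustration and are not measurements from the study ) .

```python
# sketch: weight ratio of each trunk muscle group against the total
# weight of all measured trunk muscles, as described in the methods.
# the masses below are hypothetical illustration values (mg), not study data.
muscle_masses_mg = {
    "dorsal muscles": 12.0,
    "lateral hypaxial muscles": 7.5,
    "abdominal muscles": 2.5,
}

total = sum(muscle_masses_mg.values())

for group, mass in muscle_masses_mg.items():
    ratio_pct = 100.0 * mass / total
    print(f"{group}: {ratio_pct:.1f} % of total trunk muscle mass")
```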
when the hind limbs appeared and the forelimbs had developed at st 58 , a thin m. obliquus externus with fibers running caudoventrally developed along the edge of the abdominal contour line of m. transversus abdominis . at the ventral edge of m. obliquus externus , the muscle fibers became parallel to the sagittal line ( fig . 2c ) . at st 63a , when they crawled on the bottom in water and swam by undulation , m. obliquus externus developed dorsally to the level of the lateral line between epaxial muscles and hypaxial muscles ( fig . 2d ) , and a thin m. rectus abdominis occurred at the ventral edge of the trunk ( fig . 2d ) . at st 66 , when they crawled on the bottom in water in addition to swimming , m. rectus abdominis expanded and increased in thickness and was obviously separated from the fibers of the lateral hypaxial layers ( fig . 2e ) . at st 68 , after metamorphosis and movement to land for walking on the ground , m. rectus abdominis became enlarged ( fig . 2f ) . trunk muscle weight ratios : the muscle group weight ratios are presented in table 2 ( ontogenetic changes of weight ratios of trunk muscles in h. nigrescens ; developmental stage , average svl* in mm , and weight ratios ( % ) of dorsal muscles / lateral hypaxial muscles / abdominal muscles , given as mean ± s.e.m . ) : st 38 , 7.33 ± 0.2 , 52.2 ± 2.3 / 47.7 ± 2.3 / 0.0 ; st 50 , 10.4 ± 0.3 , 55.1 ± 2.3 / 44.8 ± 2.3 / 0.0 ; st 58 , 13.5 ± 1.1 , 55.3 ± 1.8 / 41.7 ± 2.2 / 3.0 ± 0.4 ; st 63a , 17.0 ± 0.6 , 56.7 ± 2.6 / 38.3 ± 2.9 / 4.9 ± 0.3 ; st 66 , 21.6 ± 1.3 , 60.0 ± 1.2 / 29.9 ± 2.4 / 10.0 ± 1.3 ; st 68 , 25.2 ± 1.2 , 61.4 ± 2.3 / 22.1 ± 3.5 / 16.5 ± 1.4 ( *snout - vent length ) . ontogenetic changes were identified in the muscle group ratios among stages . the muscle weight ratio of the dorsal muscles increased with growth from 52.2% at st 38 to 61.4% at st 68 ( averages of three samples of weight ratios of the trunk muscles ; table 2 ) . in contrast , the weight ratio of the lateral hypaxial muscles decreased with growth from > 40% at st 38 , 50 and 58 to < 30% at st 66 ( table 2 ) . the weight ratio of the abdominal muscles increased with growth ( table 2 ) . at st 38 and 50 , the salamanders did not possess abdominal muscles , and the percentage of abdominal muscles was recorded as zero ; from st 58 to 68 , the weight ratio of the abdominal muscles increased progressively ( table 2 ) . observation of trunk muscles : ontogenetic changes were recognized in the trunk muscles of h. nigrescens . maurer described that m. rectus abdominis arises from the ventral edges of m. obliquus internus at the time when m. obliquus externus starts to develop . in h. nebulosus , m. rectus abdominis develops from the ventral edges of m. obliquus externus and m. obliquus internus when the development of m. obliquus externus starts . in this study of h. nigrescens , m. rectus abdominis developed and enlarged from the ventral line when m. obliquus externus developed . the timing of formation of m. rectus abdominis on the ventral line in this study therefore coincided with the descriptions of maurer and fujimoto . the number of lateral hypaxial muscles differed between h. nigrescens in this study and h. nebulosus in fujimoto . h. nigrescens has two layers , m. obliquus externus and m. transversus abdominis , as lateral hypaxial muscles , except for m. ventralis . in contrast , h. nebulosus has three layers :
m. obliquus externus , m. obliquus internus and m. transversus abdominis , except for m. ventralis . the developmental sequence of the hypaxial trunk muscles of h. nebulosus is reported as follows : 1 ) m. ventralis ( the ventral muscle ) , 2 ) m. obliquus internus ( the inner lateral hypaxial layer ) from the ventral muscle , 3 ) m. obliquus externus ( the outer lateral hypaxial layer ) and m. rectus abdominis , and 4 ) m. transversus abdominis . a similar sequence was observed in h. nigrescens , except that m. obliquus internus does not appear and m. transversus abdominis appears before m. obliquus externus and m. rectus abdominis in h. nigrescens . simons and brainerd discussed that the habitat and predominant locomotor mode of salamanders do not appear to be strongly associated with the number of lateral hypaxial layers . the difference in the number of lateral hypaxial layers between these phylogenetically very close species , h. nebulosus and h. nigrescens , suggests the presence of interspecific variation in this genus , and further studies on other congeneric species are required . in this study , we observed the trunk muscles only from the lateral view . at st 38 , immediately after hatching , the number of lateral hypaxial layers was only one , and the layer was thick ( fig . 2a ) . a typical fish possesses a thick trunk muscle divided into epaxial and hypaxial segments by myosepta , but it does not show a layered structure . because the larvae of h. nigrescens locomote by swimming and do not possess limbs at st 38 , they have a single thick m. ventralis as the lateral hypaxial muscle for undulatory swimming . during later developmental stages , the thickness of m. ventralis decreased , and a thin layer of m. transversus abdominis developed from the ventral edge of m. ventralis ( fig . 2b ) . at st 58 , a thin layer of m. obliquus externus developed as one of the lateral hypaxial muscles ( fig . 2c ) . the two lateral hypaxial muscles at this stage were thinner than the m. ventralis present during early developmental stages ( fig . 2c ) . when urodelians move to land , they need to resist both torsion and lateral bending . since torsion can be resisted by two lateral hypaxial layers whose fibers run in a cross direction with each other , the number of lateral hypaxial muscles possibly increased from one to two with growth . furthermore , muscle fibers running across each other in the two lateral hypaxial layers strengthen the body in a manner similar to the lamination of chipboard . at st 58 , when the fourth toe is clearly recognized , the muscle fibers of the lateral hypaxial muscle become longitudinal at the ventral edge of the trunk ( fig . 2c ) , and m. rectus abdominis developed and enlarged at st 58 , 63a , 66 and 68 ( fig . 2 ) . because m. rectus abdominis generally contributes to maintaining posture and sustaining the animal 's own weight , evolutionary acquisition of this muscle was possibly essential for terrestrial locomotion . it has been argued that typical fish do not have m. rectus abdominis because their basic trunk muscle structure is composed of epaxial and hypaxial muscles that facilitate lateral bending . because of buoyancy , the need to sustain the weight of the internal organs decreases in fish . thus , fish lack an m. rectus abdominis , whose function is to sustain the body 's own weight . when adult salamanders are compared , terrestrial species and semi - aquatic species possess a separated and larger m. rectus abdominis , whereas aquatic species possess a smaller and unseparated m. rectus abdominis [ 14 , 15 ] .
previous studies [ 14 , 15 ] suggested that a separated and larger m. rectus abdominis facilitates terrestrial locomotion by resisting gravity and that m. rectus abdominis is not essential for an aquatic lifestyle . an m. rectus abdominis that is separated from the lateral hypaxial muscles is more specialized for sustaining the body 's own weight than an unseparated one [ 14 , 15 ] . thus , the interspecific difference in m. rectus abdominis between aquatic and terrestrial species parallels the muscular differences found between the aquatic and terrestrial stages of h. nigrescens . trunk muscle weight ratios : ontogenetic changes in the weight and weight ratios of the muscle groups are given in table 2 . though the actual mass of all muscle groups increased with growth , the degree of growth differed among muscle groups . the weight ratios of the dorsal and abdominal muscles increased with growth , hence the decrease in the weight ratio of the lateral hypaxial muscles . m. dorsalis trunci , which is the largest epaxial muscle , stabilizes the trunk against sagging and torsion and increases the stiffness of the trunk during walking . because the need for stabilizing the trunk and resisting gravity on land occurs only after metamorphosis , we assume that the increase in the weight ratio of the dorsal muscles is related to the transition from water to land . the weight ratio of the lateral hypaxial muscles decreased with growth ( table 2 ) . lateral hypaxial muscles function to control torsion and lateral bending and to stabilize the body [ 1 , 3 , 4 ] . because lateral hypaxial muscles are necessary for undulatory swimming , we suggest that larvae possess larger lateral hypaxial muscles than terrestrial juveniles . after the limbs develop , the role of the lateral hypaxial muscles possibly decreases . therefore , we consider that the decreased importance of the lateral hypaxial muscles results in their decreased weight ratios . after they are equipped with limbs , urodelians mainly locomote by undulatory swimming in addition to aquatic walking . this study suggests that they gradually modify the trunk muscles to prepare for movement on land . abdominal muscles function to prevent sagittal extension of the trunk by the action of the epaxial muscles and to sustain the body weight against gravity . abdominal muscles appeared and increased in size after the appearance of the interdigital processes in the hind limb anlage . it is possible that such growth of the abdominal muscles facilitates terrestrial life . during the middle developmental stages from st 50 to st 63a , salamanders start swimming in water using their limbs ( personal observations ) . at this stage , they depend less on the lateral hypaxial muscles and more on the dorsal and abdominal muscles . after metamorphosis , they start adapting to terrestrial life by enlarging the dorsal and abdominal muscles . in conclusion , the ontogenetic changes in the trunk muscles of h. nigrescens are linked with the habitat transition from water to land , with the muscle construction changing in adaptation from aquatic swimming to resisting gravity .
we investigated ontogenetic changes in the trunk muscles of the japanese black salamander ( hynobius nigrescens ) before , during and after metamorphosis . given that amphibians change their locomotive patterns with metamorphosis , we hypothesized that they may also change the structure of their trunk muscles . the trunk muscles were macroscopically observed , and the weight ratios of each trunk muscle group were quantified at six different developmental stages . immediately after hatching , we found that the lateral hypaxial muscle was composed of one thick m. ventralis , from ventral edge of which m. transversus abdominis arose later , followed by m. obliquus externus and m. rectus abdominis . the weight ratios of the dorsal and abdominal muscles to the trunk muscles increased with growth . we suggest that a single thick and large lateral hypaxial muscle facilitates swimming during early developmental stages . the increase in the weight ratios of the dorsal and abdominal muscles with growth possibly assists with gravity resistance necessary for terrestrial life .
MATERIALS AND METHODS RESULTS DISCUSSION
PMC4499546
contrast - enhanced computed tomography colonography ( ce - ctc ) is the best technique for localizing and staging colorectal cancer ( 1 , 2 ) , as well as for diagnosing synchronous colonic lesions ( 3 ) in patients with obstructing cancers . ce - ctc is also useful to preoperatively evaluate other colorectal diseases , such as diverticular disease and inflammatory bowel disease ( 4 , 5 ) . the laparoscopic approach for colonic surgery has become common and widely used because of the multiple advantages compared to conventional laparotomy . laparoscopic surgery produces smaller surgical incisions , less intraoperative blood loss , faster recovery of normal bowel function , and shorter hospitalization ( 6 , 7 ) . nevertheless , the disadvantages of this approach include the lack of a panoramic view of the operative field and of tactile sensation , leading to potentially inaccurate localization of a colonic lesion and difficulties with vessel ligation and lymph node dissection ( 8 ) . only a few studies have analysed the vascular anatomy of the colon using multidetector ct ( 9 - 11 ) and only one used ct colonography ( 12 ) . bowel preparation consisted of a low - fiber diet and a mild laxative ( macrogol solution ) the day before ct . faeces were tagged by administering 60 - 90 ml amidotrizoate meglumine and 500 ml water at least 3 hours before the examination . the colon was distended by insufflating at least 3 l of carbon dioxide using an automatic insufflator . we performed a pre - contrast scan with the patient in the prone position using a low mas setting , and different post - contrast scans in the supine position after injecting 500 - 600 mg of iodine per kg of body weight . post - contrast scans may have included arterial ( obtained using a bolus - tracking monitoring technique ) , portal venous , and delayed phases depending on the disease . comprehending the complex three - dimensional ( 3d ) anatomy of the colon and branching vessels is difficult on axial images , particularly for inexperienced readers . 3d imaging provides surgeons with a precise and immediate understanding of the patient 's anatomy , including colonic loop shapes , colonic lesion sites , and the courses and relationships of the branching vessels . we obtained 3d fused images using a dedicated workstation ( advantage workstation 4 , general electric healthcare , waukesha , wi , usa ) by processing the ct dataset from the arterial and portal - venous phases . three reformations with different settings ( 3d colon map and two different 3d vascular presets ) were prepared separately and fused together into a single volume , which included the 3d colon map , a 3d arteriogram , and a 3d venogram , with the mesenteric arteries colored in red and relevant venous branches colored in blue . this resulted in a colon map that overlapped with the vascular map and showed the mesenteric branching pattern and the relationships between the colonic lesions , arteries and veins . the 3d images could be tilted and rotated to obtain the view that best simulates the intraoperative field of view . ct colonography allows for an accurate pre - operative assessment of colonic anatomy , and of the locations of the colonic lesions and lymph nodes . post - contrast acquisition and the vascular map allow for a precise evaluation of mesenteric artery branching patterns and the relationships between arterial and venous vessels .
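as a minimal illustration of the weight - based iodine dosing described above , the sketch below converts the 500 - 600 mg iodine / kg range into a total iodine load and an approximate injection volume ; the 350 mg iodine / ml contrast concentration and the 70 kg body weight are assumptions for illustration , not values given in the protocol .

```python
# sketch: weight-based iodine dose for the CE-CTC protocol described above.
# the contrast concentration (350 mgI/ml) and the patient weight are assumed
# illustration values, not part of the published protocol.

DOSE_RANGE_MGI_PER_KG = (500, 600)   # stated protocol range
CONTRAST_MGI_PER_ML = 350.0          # assumed iodine concentration of the agent
BODY_WEIGHT_KG = 70.0                # hypothetical patient

for dose_per_kg in DOSE_RANGE_MGI_PER_KG:
    total_iodine_mg = dose_per_kg * BODY_WEIGHT_KG
    volume_ml = total_iodine_mg / CONTRAST_MGI_PER_ML
    print(f"{dose_per_kg} mgI/kg -> {total_iodine_mg / 1000:.1f} g iodine "
          f"~= {volume_ml:.0f} ml of 350 mgI/ml contrast")
```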
although the laparoscopic approach has many obvious benefits compared to laparotomy , it suffers from a restricted operative field of view and an inability to manipulate tissues , which can result in time - consuming dissections when searching for anatomical landmarks , lymph nodes , or vessels . intraoperative conversion rates from laparoscopic colectomy to laparotomy are 10 - 20% ( 7 , 13 ) , and conversion is often due to difficulties identifying mesenteric vessels , synchronous tumors , intraoperative bleeding or procedure length ( 7 ) . complications , such as bleeding and bowel ischemia , can occur because of vascular injury while dissecting nodes or ligating a vessel . previous knowledge of the patient 's mesenteric vascular anatomy , including arterial branching variants and relationships with adjacent veins , reduces operative time and the incidence of intraoperative complications ( 11 ) . the branching pattern of the superior mesenteric artery ( sma ) must be assessed before a right hemicolectomy and right transverse colon surgery . the middle colic artery ( mca ) and the ileocolic artery ( ica ) are present in almost all patients , whereas the right colic artery ( rca ) is present in about 50% of cases ( figs . 1 , 2 ) . the presence of the accessory left colic artery ( alca ) , known as the artery of riolan , is inconsistent ; it originates from the sma or mca and anastomoses with the left colic artery ( lca ) , feeding the transverse colon ( figs . 3 , 4 ) . a common origin of the mca , rca , and ica may also be observed . the most significant variant to be considered during laparoscopic right hemicolectomy is the relationship between the colic arteries and the superior mesenteric vein ( smv ) ; the arteries cross anterior to the smv in most patients , but a posterior crossing pattern of the ica , mca , or rca is also common ( figs . 5 , 6 ) . it is important to locate the alca and the branching pattern of the inferior mesenteric artery ( ima ) when planning left transverse colon surgery and left hemicolectomy . moreover , pre - operative planning for sigmoidectomy should include an evaluation of the sigmoid artery ( sa ) branching pattern because the ima can be preserved if the sas are selectively ligated . the number of sas varies , and they can originate from either the ima or the lca ( figs . 2 , 4 , 7 ) ( 15 ) . the relationship between the arteries and the inferior mesenteric vein ( imv ) can also vary : the lca and sas can cross either anteriorly or posteriorly to the imv ( figs . 8 , 9 ) . because of their close proximity , the relationships between the lca , sas , and the left gonadal vein and ureter must be assessed . the origins of other splanchnic arteries from the sma or ima must also be considered ( fig . 7 ) ( 15 ) . variants in mesenteric vein drainage should also be evaluated ( fig . 8 ) . vascular maps from a ce - ctc examination are easily obtained by modifying the standard protocol and are easy to interpret . the laparoscopic surgeon , regardless of the disease , can benefit from vascular maps , as they limit risks concerning vessel ligation and/or lymph node dissection .
contrast - enhanced computed tomography colonography ( ce - ctc ) is a useful guide for the laparoscopic surgeon , helping to avoid removal of the incorrect colonic segment and failure to diagnose synchronous colonic and extra - colonic lesions . lymph node dissection and vessel ligation under a laparoscopic approach can be time - consuming and can damage vessels and organs . moreover , mesenteric vessels show extreme variation in their courses and numbers . we describe the benefit of using an abdominal vascular map created by ce - ctc in laparoscopic colorectal surgery candidates . we describe patients with different diseases ( colorectal cancer , diverticular disease , and inflammatory bowel disease ) who underwent ce - ctc just prior to laparoscopic surgery .
INTRODUCTION Contrast-Enhanced CT Colonography Protocol Vascular Mapping Benefits for the Laparoscopic Surgeon Main Vascular Variants Related to Colonic Laparoscopic Surgery SUMMARY
PMC539970
understanding the rules that govern protein folding is one of the great challenges of molecular biology . studies of protein folding , combining experiment and simulation , have led to a solid understanding of the physical process of folding and the forces that stabilize proteins . the last 10 years have witnessed a revolution in our understanding of the pathway and stability of protein folding ( 1 ) . the sustained growth of folding studies is fuelled by the availability of new sequences , rapid structure determination and radical developments in experimental methods . furthermore , recent successes in folding simulations have improved our understanding of the protein folding process at atomic resolution ( 2 ) , providing further avenues for experimental investigation . analysis of the folding mechanisms and pathways of proteins within homologous families has propelled protein folding into the post - genomic era ( 3 ) . traditionally , kinetic and thermodynamic data are collected and analysed on an individual protein basis , and is published in an unstructured fashion , despite the best efforts to tabulate it . clearly , this presents an enormous challenge for data analysis , even simple searching for trends requires exhaustive manual inspection of the literature . with the exception of protherm [ thermodynamic database for proteins and mutants ( 4 ) ] , the vast majority of web - accessible databases focus on sequences and structures . there are currently no tools that bring together both kinetic and thermodynamic folding data for proteins and mutants . a comparison of the folding properties for more than 50 proteins represents the most comprehensive compilation of folding data to date ( 5 ) . this painstaking analysis uncovered some general trends but also highlighted the great diversity in folding behaviour . the speed at which a protein folds and the pathway it takes are dictated by its structural and energetic characteristics . recent work suggests that the fundamental physics underlying folding may be relatively simple : the mechanism of folding appears to be dictated by the low - resolution features ( or topology ) of the folded protein structure ( 6 ) . topology can be described by the parameter contact order , which is defined as the average sequence separation between contacting residues in the 3d structure . proteins having a low contact order , e.g. -helical bundles , fold faster than those with a high contact order , e.g. -sandwiches ( 6 ) . topology has been found to be the overriding determinant of folding rate for a wide range of proteins ( 69 ) . however , studies on the topologically similar members of the immunoglobulin family have shown that they fold with rate constants which correlate better with stability ( 10 ) . studies on horse and yeast cytochrome c also suggest that stability is an important factor ( 11 ) . furthermore , protein engineering studies show that mutations which do not affect the contact order can change the folding rate by many orders of magnitude ( 5 ) . thus , in many cases , factors other than topology must also be significant . the last six years have witnessed a huge increase in the number of proteins being studied , and is set to grow further as structural genomics projects gain momentum . 
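as a small aside , the contact order parameter defined above is simple to compute once a structure 's residue - residue contacts are known ; the sketch below illustrates the calculation on a toy contact list ( the contacts are invented for illustration and are not taken from a real protein ) .

```python
# sketch: absolute and relative contact order from a list of residue contacts.
# a contact is a pair (i, j) of residue indices that are close in the 3d structure.
# the contact list below is a toy example, not data from a real protein.

def contact_order(contacts, chain_length):
    """Average sequence separation of contacting residues (absolute CO),
    and the same value normalized by chain length (relative CO)."""
    separations = [abs(j - i) for i, j in contacts]
    abs_co = sum(separations) / len(separations)
    return abs_co, abs_co / chain_length

toy_contacts = [(1, 5), (2, 20), (3, 40), (10, 55), (25, 60)]
abs_co, rel_co = contact_order(toy_contacts, chain_length=64)
print(f"absolute contact order: {abs_co:.1f} residues")
print(f"relative contact order: {rel_co:.2f}")
```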
in order to exploit this wealth of data , so that data mining efforts may uncover further relationships between folding behaviour and structural character , a centralized resource is required ; indeed , recent benchmarking of predicted folding rates ( 12 ) , together with comparisons of the folding behaviour of two- and three - state folding proteins ( 13 ) , emphasizes the need for a centralized database . in order to address this issue , here , we describe the design and implementation of a relational database for protein folding , the protein folding database ( pfd ) . a user - friendly web interface to the database allows querying using many parameters , as well as retrieval and presentation of data . the database will have three distinct roles : ( i ) data repository : new data can be rapidly deposited , validated and made available to the folding community and wider scientific arena ; ( ii ) experimental resource : the database will be of use to the biophysicist seeking to compare new folding data with the current dataset for similar proteins , bypassing the relatively slow and inefficient examination of the literature . the database will play a useful role in the design of folding experiments , e.g. both as a guide in the design of experimental methodology and in the selection of proteins belonging to homologous families ; and ( iii ) theoretical resource : all experimental folding data will be at the disposal of theoreticians , strengthening the emerging conspiracy between experiment and simulation . our approach is to create a database that captures as much as possible of the relevant information important for a folding experiment : kinetic rates of folding and unfolding ; equilibrium free energies ; experimental methods such as spectroscopic technique ( probe ) and method of perturbation ( e.g. denaturant ) , and instrument details ; publication information ; protein details , such as fold , structural class , biological function and mutation information . pfd was created using open - source mysql relational database server software , version 4.0.16 ( www.mysql.com ) , running on an apple dual 2.0 ghz g5/os x server ( version 10.3.4 ) . a web - based query interface to the database was created using the java programming language and apple webobjects software ( version 5.2.2 ) and the xcode development environment ( figure 1 ) . the essence of our approach is to allow a diverse collection of folding data to be searched via multiple parameters , and the results presented in a structured fashion . typical queries that can be formulated are ' compare the folding behaviour of monomeric alpha - helical proteins ' and ' which beta proteins larger than 60 residues have folding rates greater than 10 s-1 ? ' . the web interface allows a detailed , spreadsheet - like list of results , allowing quick visualization of general trends in the data ( figure 2 ) . the results of a search can be sorted on any heading , which is useful , e.g. when inspecting the variability of folding rates among proteins within a family . each entry also contains information on the publication and a url to the entry in the ncbi pubmed literature database . annotation of proteins exploits the hierarchy used by the structural classification of proteins database [ scop ( 14 ) ] : proteins belong to families , which in turn belong to a structural class ( e.g. all alpha proteins ) . this was performed to minimize redundancy in the database so that all structural information for an entry can be retrieved via the scop link .
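the published interface is built with java and webobjects on top of mysql , and the real pfd schema is not reproduced in this article ; purely as an illustration of the kind of query described above , the sketch below runs the ' beta proteins larger than 60 residues with fast folding rates ' question against a hypothetical , minimal table using python 's built - in sqlite3 module . the table name , columns , and rows are all invented .

```python
# sketch: the kind of query described in the text, against a hypothetical
# minimal schema (not the actual PFD schema), using sqlite for portability.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE folding_entry (
        protein          TEXT,
        structural_class TEXT,    -- e.g. 'all alpha', 'all beta'
        n_residues       INTEGER,
        folding_rate_s1  REAL     -- folding rate constant k_f in s^-1
    )
""")
conn.executemany(
    "INSERT INTO folding_entry VALUES (?, ?, ?, ?)",
    [
        ("toy protein A", "all beta",  89, 45.0),    # invented rows
        ("toy protein B", "all beta",  56, 3000.0),
        ("toy protein C", "all alpha", 80, 12000.0),
        ("toy protein D", "all beta", 101, 2.1),
    ],
)

# "which beta proteins larger than 60 residues have folding rates greater than 10 s-1?"
rows = conn.execute(
    """SELECT protein, n_residues, folding_rate_s1
       FROM folding_entry
       WHERE structural_class = 'all beta'
         AND n_residues > 60
         AND folding_rate_s1 > 10
       ORDER BY folding_rate_s1 DESC"""
).fetchall()

for protein, n_res, kf in rows:
    print(f"{protein}: {n_res} residues, k_f = {kf} s^-1")
```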
scop and pdb provide an array of links to other databases ( such as entrez , pfam and astral ) , as well as an array of tools that operate on the data ( e.g. 3d visualization ) . the hierarchical classification of structural class / family / protein allows convenient browsing ( akin to browsing proteins belonging to a particular fold in scop ) : folding data for proteins are grouped under their fold or structural class , which may prove convenient when examining the folding behaviour of proteins within a family . examining any entry in more detail yields information on the protein structure , folding thermodynamics and kinetics , experimental methods , mutations ( if any ) , publication(s ) and annotations ( figure 3 ) . the power of the relational database approach allows us to visualize folding data in a novel way . availability and submissions : pfd is freely available at http://pfd.med.monash.edu.au . submissions and enquiries should be emailed to ashley.buckle@med.monash.edu.au . the constructed database and web - based query interfaces have demonstrated the applicability and usefulness of the database design . the ability to answer such questions indicates that its design accurately reflects the organization of data in a real folding experiment . future work will focus on the following areas : ( i ) functional annotation : an analysis of folding data must take into account the biological function of the protein . any trends uncovered must also be considered in the context of function . to enable this , entries will be linked to the gene ontology database ( 15 ) , which annotates database entries on molecular function , biological process and cellular location . ( ii ) data exchange : how will other databases be able to use data from the folding database ? this is a serious challenge because of the vast heterogeneity in database standards and data structure . it can be addressed by making folding data available using extensible markup language ( xml ) , which provides the capability of representing protein data in a single , standardized data structure that is easily transmitted over a network . this will require the construction of a specification language for protein folding data that will allow for portable , system - independent , machine - parsable and human - readable representation of essential features of protein folding . all folding data can then be made available in xml format ( an illustrative sketch of such a representation follows below ) . ( iii ) data visualization : as the dataset grows , visualization of text becomes cumbersome . this will require the development of graphical representations of the data , such as chevron plots . in particular , graphical methods allowing the visualization of relationships between structural parameters , such as contact order , and folding kinetics will prove very useful . ( iv ) data deposition and validation : it is vital that new folding data are deposited in the same timeframe as publication ( as is the case for the pdb ) . this means that the data become readily available to the community and amenable to analysis . this can be achieved using a forms - based system that will allow data to be deposited , via a web browser , directly by the originator of the data , again in an analogous manner to the pdb . validation logic can also be built into the deposition process , providing both a useful service to the depositor and an indication of data quality to users . the latter two aims are particularly important for database functionality and growth , respectively , and will be given priority .
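no xml schema for pfd is published in this article , so the element and attribute names below are purely hypothetical ; the sketch only illustrates how a single folding entry could be serialized to xml with python 's standard library , in the spirit of the data - exchange plan described above .

```python
# sketch: serializing one hypothetical folding entry to XML.
# every element/attribute name here is invented for illustration;
# it is not a published PFD exchange format.
import xml.etree.ElementTree as ET

entry = ET.Element("foldingEntry", attrib={"protein": "toy protein A"})
ET.SubElement(entry, "structure", attrib={"scopClass": "all beta",
                                          "nResidues": "89"})
kinetics = ET.SubElement(entry, "kinetics", attrib={"denaturant": "GdnHCl"})
ET.SubElement(kinetics, "foldingRate", attrib={"units": "s-1"}).text = "45.0"
ET.SubElement(kinetics, "unfoldingRate", attrib={"units": "s-1"}).text = "0.02"
thermo = ET.SubElement(entry, "thermodynamics")
ET.SubElement(thermo, "deltaG", attrib={"units": "kJ/mol"}).text = "21.3"

# indent() is available in python >= 3.9; it only affects pretty-printing
ET.indent(entry)
print(ET.tostring(entry, encoding="unicode"))
```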
this approach will allow us to achieve a high degree of uniformity in the structure of folding data , which will benefit experimentalists in data acquisition and handling . we thank christina mitchell for financial support , and james whisstock and ross coppel for continuing support .
we have developed a new database that collects all protein folding data into a single , easily accessible public resource . the protein folding database ( pfd ) contains annotated structural , methodological , kinetic and thermodynamic data for more than 50 proteins , from 39 families . a user - friendly web interface has been developed that allows powerful searching , browsing and information retrieval , whilst providing links to other protein databases . the database structure allows visualization of folding data in a useful and novel way , with a long - term aim of facilitating data mining and bioinformatics approaches . pfd can be accessed freely at http://pfd.med.monash.edu.au .
INTRODUCTION PFD DESCRIPTION USE OF PFD IN FOLDING RESEARCH CONCLUSIONS AND FUTURE DIRECTIONS ACKNOWLEDGEMENTS
PMC4689097
cardiovascular diseases are the most important medical challenge worldwide , including in iran . 30% of mortality and 10% of the global burden of disease are attributed to cardiovascular diseases . myocardial infarction ( mi ) has the highest contribution among cardiovascular diseases in iran . in 2012 , only 4% of the 45 - 64 - year - old population and 1% of the 15 - 44 - year - old population were free of cardiovascular disease risk factors , in both men and women , in iran . individual risk factors of mi and the associated mortality have been examined in different countries , including iran . the association of seasons , income , socioeconomic status , individual and clinical risk factors , and geographic and environmental factors with mortality due to mi has already been investigated . mechanisms related to geographic and environmental factors could explain the association between cardiovascular diseases and temperature . activation of the sympathetic nervous system and secretion of catecholamines are increased in response to cold temperature , which could result in an increase in blood pressure through increased heart rate and peripheral vascular resistance . in addition , experimental studies suggested that alterations in temperature might influence vascular function through an effect on endothelial nitric oxide synthase and the bioavailability of nitric oxide . in rats , acute and short - term exposure to elevated environmental or core body temperatures has been shown to increase endothelial nitric oxide synthase expression . the pattern of mi incidence and outcomes varies among different populations and communities . identifying the clinical and nonclinical ( environmental ) characteristics and the factors associated with mortality due to mi is therefore necessary for designing and implementing programs to prevent and control this mortality . in addition , because of the graded provision of services in iran 's health system and the hierarchical structure of the patients ' data , multilevel analysis is the most appropriate approach to determining the factors associated with mortality in mi patients . no study has yet been conducted worldwide using multilevel analysis and examining the concomitant effects of individual and nonindividual variables on mi outcome at different levels , considering confounding variables and the interactions among the variables . this study was conducted to determine the factors independently associated with hospital mortality due to mi in iran using a multilevel analysis . in this hospital - based , nationwide and cross - sectional study , the data of 20750 new mi patients between april , 2012 and march , 2013 in iran were used . iran is a country in western asia , the middle east , central asia , and the caucasus . iran is one of the world 's large countries , ranging from 25°3′ to 39°47′ north latitude and from 44°5′ to 63°18′ east longitude . annual average precipitation in some cities in southern iran does not exceed 40 mm , while it has been reported to exceed 600 mm in western regions ; these variations can be seen for other weather elements , such as temperature and humidity , as well . the data used in this study were accessed with observance of all rights of the patients and ethical considerations in research , as well as the approval of the university 's ethics committee and the noncommunicable diseases management center of iran 's ministry of health and medical education .
in addition , the patients ' individual data were dealt with as confidential , and no data contributing to the identification of the patients were used . hospital mortality due to mi was considered the outcome of the disease and hence the dependent variable . the definition of the world health organization ( who ) and the world heart federation ( icd : i21 ) was adopted for mi diagnosis ( as the inclusion criterion ) . electrocardiography ( ecg ) was used to differentiate between the two types of mi based on the shape of the tracing . an st section of the tracing higher than the baseline indicates st - segment elevation myocardial infarction ( stemi ) , which usually requires more invasive treatment . for a person to qualify as having stemi , the ecg must show new st elevation in two or more adjacent ecg leads . patients with a definite diagnosis of mi by a cardiologist were enrolled into the study . patients with a history of mi but no current mi diagnosis were excluded from the study . in view of the inclusion and exclusion criteria , census enrollment of the patients from all hospitals across the country , and data gathering with a single form , the biases of enrollment and data were minimized as much as possible . the demographic data and clinical and behavioral risk factors at the individual level , including age , gender , literacy , place of residence , smoking , type 2 diabetes , hypertension , dyslipidemia , and complications , and the place and type of mi , were gathered from the patients ' electronic medical files in the iran myocardial infarction registry in 2012 . the hospital at which the patient was hospitalized , at the county ( place of residence ) level , was defined as the second level of analysis . for the second level of analysis , valid and reliable geographical and environmental data , such as mean temperature , mean minimum and maximum temperature , mean relative humidity , altitude , and mean precipitation for each month , were obtained from the iran meteorological organization , and the ratio of cardiac care unit beds was obtained from the treatment deputy of iran 's ministry of health and medical education . the province where the patient was living was determined as the third level of analysis . as with the first two levels , for the third level of analysis , valid and reliable data on noncommunicable disease risk factors were used , including the prevalence of type 2 diabetes , hypertension , mean body mass index , smoking and hookah smoking , high cholesterol , obesity and overweight , physical inactivity , and the frequency of fish , vegetables , fruits , and fried food in the household food basket ; these data were gathered per the stepwise approach of the who . the deviance information criterion , akaike 's information criterion ( aic ) and the bayesian information criterion ( bic ) were used to select the best model ; any model with a lower aic or bic was considered a better fit . in the first step , the null model ( random intercept model ) was run with no independent variables ; the null model was defined as the empty model to recognize the variance among the levels . in the second step , the slope of the first level variables was considered fixed , as opposed to random , because we did not assume a priori that the effect of these variables on mortality varies among the provinces . in model two , the variables at the community level ( district / hospitals ) were introduced into model one . in model three , the variables at the community level ( province ) were added to model two . after running the third model , we decided to do the analysis at two levels , because the amount of variance was approximately zero at the third level ( province ) .
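as a minimal illustration of the aic / bic - based model selection described above , the sketch below computes both criteria from a model 's log - likelihood , number of parameters , and sample size , and compares two hypothetical candidate models ( the log - likelihood values and parameter counts are invented , not taken from the study ) .

```python
# sketch: AIC / BIC comparison for model selection, as described in the methods.
# the log-likelihoods and parameter counts below are hypothetical illustration values.
import math

def aic(log_likelihood: float, n_params: int) -> float:
    """Akaike's information criterion: AIC = 2k - 2 ln(L)."""
    return 2 * n_params - 2 * log_likelihood

def bic(log_likelihood: float, n_params: int, n_obs: int) -> float:
    """Bayesian information criterion: BIC = k ln(n) - 2 ln(L)."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

N_OBS = 20750  # number of MI patients in the study

candidates = {
    "null (random intercept only)": {"llf": -7450.0, "k": 2},
    "model 1 (+ individual-level fixed effects)": {"llf": -6980.0, "k": 14},
}

for name, m in candidates.items():
    print(f"{name}: AIC = {aic(m['llf'], m['k']):.1f}, "
          f"BIC = {bic(m['llf'], m['k'], N_OBS):.1f}")
# the model with the lower AIC / BIC is preferred
```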
measures of association were calculated and reported by odds ratio ( or ) ( confidence interval [ ci ] 95% ) . the quantitative data were reported as mean ± standard deviation ( sd ) and the grouped variables as frequency and percentage .
totally , 20750 patients were enrolled into the study from 208 counties across the 31 provinces of iran . mean ( sd ) age at mi and at death was 61.2 ( 13.4 ) and 65.2 ( 15.2 ) years , respectively . 855 deaths ( 34.1% ) occurred in women and the rest in men . the individual characteristics of all patients , and of the deceased and survived patients after mi , are shown in table 1 ( characteristics of the study population ) . 56.3% of the deceased patients and 44.9% of the survived patients were illiterate . the relative frequency ( % ) of academic education in the deceased and survived patients was , respectively , 5.6% and 6.2% . the prevalence of smoking , hypertension , and diabetes in the deceased patients was derived as , respectively , 31% , 37.8% , and 24.3% . the prevalence of right bundle branch block ( rbbb ) , left bundle branch block , atrial fibrillation ( af ) , and ventricular tachycardia ( vt ) in the deceased patients was obtained as , respectively , 3% , 3.1% , 4.6% , and 10.5% , all higher than in the survived patients . use of percutaneous coronary intervention ( pci ) was obtained in 3.1% of the deceased patients , lower than the 7.4% observed in the survived patients . descriptive characteristics of the second level ( district ) and third level ( province ) variables , such as temperature and humidity , are shown in table 2 ( descriptive characteristics at the place of residence ( study level ) in patients with myocardial infarction ) . the mean maximum temperature , relative humidity , precipitation , and altitude of the studied counties were 23.9°c , 37.5% , 418.9 mm , and 1027.8 m , respectively . the prevalence of hypertension , obesity , type 2 diabetes , smoking , hookah smoking , physical inactivity , vegetable consumption , and fish consumption was obtained as 16.4% , 44.8% , 9.5% , 10.5% , 2.2% , 35.6% , 11.6% , and 27% , respectively .
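the adjusted odds ratios reported below come from the multilevel models , but as a simple illustration of how an odds ratio and its 95% confidence interval are obtained from a 2 x 2 table ( normal - approximation / woolf method ) , consider the sketch below ; the cell counts are hypothetical and are not taken from the study tables .

```python
# sketch: crude odds ratio and 95% CI from a 2x2 table (normal approximation).
# the counts are hypothetical illustration values, not data from this study.
import math

# exposure (e.g. a risk factor) vs outcome (in-hospital death after MI)
a = 120   # exposed, died
b = 880   # exposed, survived
c = 180   # unexposed, died
d = 2820  # unexposed, survived

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```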
the or ( 95% ci ) for the factors associated with the patients ' mortality at the different levels of the analysis is shown in table 3 . to avoid residual confounding , age was entered in the model as a quantitative rather than a grouped variable . for each increase in age of one sd , the or for mortality was 1.54 . the or for mortality in patients with primary and secondary school education was significantly lower than in illiterate patients , and both were lower than in the patients with academic education . the highest risk of hospital mortality was observed for ischemic heart pain with chest pain resistant to treatment ( or = 5.2 ; 95% ci : 4.63 - 5.9 ) . although hospital mortality due to mi was inversely correlated with increased temperature ( or = 0.97 [ 95% ci : 0.93 - 1.02 ] ) and directly correlated with decreased temperature ( or = 1.04 [ 95% ci : 0.99 - 1.1 ] ) , the association was not significant in view of the calculated or ( 95% ci ) . precipitation had a protective effect on the mortality due to mi ( or = 0.79 ) . in contrast , an increase in relative humidity was a risk factor for mortality due to mi . the or ( 95% ci ) for the risk factors associated with the patients ' mortality in the two - level model of the analysis is shown in table 4 . in this study , the association between hospital mortality due to mi and individual , group , and environmental variables was investigated by a multilevel analysis . the strengths of the present study were the avoidance of selection and data biases and the conduct of a large , hospital - based study with findings potentially generalizable to the whole country . as the study data were gathered from all of iran 's provinces , they are generalizable to the whole country . a study in australia reported annual mortality due to mi in aboriginals and nonaboriginals of 10.7% and 1.2% , respectively , both lower than the 12.1% obtained in the present study . another study , conducted in 24 countries between 1998 and 2000 to determine whether differences in the outcome of mi were real , reported that the differences after mi were mainly due to variables at the individual level , with hospital - level and national - level factors having only a minor effect on the outcome of mi . in the present study , the variables age , gender , education , smoking , family history of heart disease , diabetes , chest pain , type of mi , heart failure , and the type of conducted therapies were significant at the individual level , and humidity and precipitation , as collective and environmental variables , were significantly associated with mortality from mi . one of the advantages of our study was the determination of the factors associated with hospital mortality in a multilevel analysis , which is more accurate than a conventional logistic regression analysis for data with a hierarchical structure . a hospital - based study by koren et al . obtained a prevalence of hypertension , dyslipidemia , type 2 diabetes , and smoking of 39% , 38% , 24% , and 52% , respectively , which is higher than the present study 's findings . in the koren et al . study , the rate of coronary artery bypass grafting ( cabg ) was reported as 7% , which is significantly higher than that in our study . willey et al . compared hispanic and non - hispanic mi patients with regard to cardiac mortality ; the mean age of patients was 68.8 years , and the history of hypertension , diabetes , and smoking was 3.72% , 4.20% , and 9.17% , respectively .
in the willey et al . study , the percentage of cardiac mortality was 7% , which is lower than that in our study . a study in brazil reported that post - mi survival following thrombolytic therapy was higher in men than in women . the nicolau et al . study reported a lower risk of mortality in men than in women . in our study , after adjusting the relationship for age and other variables , the or of mortality for women remained significant in the presence of the other variables . gender was thus a significant variable in the presence of other risk factors in our study , which is consistent with the nicolau et al . study . various studies have reported age , gender , family history of heart disease , hyperlipidemia , hypertension , type 2 diabetes , smoking , educational level , obesity , and physical inactivity as risk factors for heart disease , which confirms our findings . the results of our study are consistent with studies reporting that type 2 diabetes , hypertension , and smoking were more prevalent in patients who died of mi than in those who survived . in the present study , age , education , lack of thrombolytic therapy , type 2 diabetes , chest pain before arriving at the hospital , heart failure , family history of cardiovascular disease , rbbb , vt , and stemi were determinants of hospital mortality due to mi , consistent with studies in other countries such as japan and korea . in our study , the association between hypertension and hospital mortality was significant in univariate analysis , but in the multiple analysis the or for mortality was lower in the patients with hypertension than in those without hypertension . it seems that the control of hypertension with treatment plays an important role in the mortality due to mi . in japan , hypertension - related mortality was significant in mi patients , which is inconsistent with our study . in the thomas et al . study , age , lack of thrombolytic therapy , and vt were the most important determinants of mortality in patients with mi , which is similar to our results . values of 49% , 53% , and 30% , respectively , were derived and were significant risk factors for mortality in the multiple regression model . in a study in which 73% of patients were male and the mean age was 68.1 years , the findings were similar to our findings . the history of cabg , pci , and type 2 diabetes was reported as 4.4% , 12.5% , and 25.3% , respectively , higher than the corresponding values of 2.6% , 3.2% , and 22.2% in our study . in our study , af was one of the determinants of hospital mortality , which is consistent with a study in france . in india , 30.4% of patients had type 2 diabetes , 37.7% had hypertension , and 40% were smokers ; in the present study , the corresponding figures were 22.2% , 5.35% , and 2.26% , respectively . in india , despite the high prevalence of risk factors , the rate of mortality was 6.7% , which is lower than the mortality rate in our study . since 63.2% of the registered patients in iran had a stemi diagnosis , higher than non - stemi , and the registered figures for the diagnosis of mi type in iran are inconsistent with those in the usa , under - registration of non - stemi cases is probable in iran ; this could be explained by the fact that the majority of patients with non - stemi die before arriving at the hospital and hence are not included in hospital figures .
the findings indicated that the rate of use of common therapies for mi patients is lower than in other countries ; hence , further use of these therapies , particularly pci , is recommended to reduce hospital mortality from mi , and training patients to present early and undergo therapies during the golden time is critical to preventing avoidable deaths . temperature , precipitation , and relative humidity were determinants of mi mortality , which is consistent with studies investigating the relationship alone and/or considering other variables . although the mechanism of the effect of temperature , precipitation , and humidity on heart disease has not been well established , temperature reduction leads to increased pressure , a lower inversion layer height , and an increased concentration of pollutants in the confined space . high temperature causes increased ozone levels and other air pollutants , with a direct impact on the severity and worsening of cardiovascular diseases . moreover , precipitation reduces air pollutants and their concentrations and is inversely correlated with cardiovascular morbidity and mortality . although air pollution has already been reported to have no effect on the outcome of mi , in our investigation of the association of temperature and precipitation with mi outcome , air pollution could be a confounding factor . due to the lack of the information needed to control for air pollution as a confounding variable , it should be considered in future studies . knowledge of the role of environmental and biological factors could be used to improve prevention measures and educational strategies , especially in people at risk of disease . a limitation of the present study was the failure to gather data on ejection fraction , which is one of the predictive factors of mortality and needs to be addressed in future studies . a prospective cohort study , with follow - up of the patients for the event of interest at frequent intervals , is recommended in future investigations . hence , individual interventions for lifestyle changes in healthcare centers , clinics , and the community at large contribute importantly to preventing and controlling mortality . to a lesser extent , the variables related to the living environment , such as temperature , relative humidity , and precipitation , may determine the mortality in patients . implementing educational strategies , motivating people to visit doctors early , and increasing access to treatment , especially for individuals at risk of mi , could reduce the mortality due to mi .
background : given the failure of conventional approaches to satisfy the statistical presuppositions for analysis of these data , the hierarchical structure of the data , and the effect of higher - level variables , this study was conducted to determine the factors independently associated with hospital mortality due to myocardial infarction ( mi ) in iran using a multilevel analysis . methods : this study was a national , hospital - based , cross - sectional study . the data of 20750 new mi patients between april , 2012 and march , 2013 in iran were used . hospital mortality due to mi was considered the dependent variable . demographic data , clinical and behavioral risk factors at the individual level , and environmental data were gathered . multilevel logistic regression models in stata software were used to analyze the data . results : within the 1-year study period , hospital mortality within 30 days of admission occurred in 2511 ( 12.1% ) patients . the adjusted odds ratio ( or ) of mortality ( with 95% confidence interval [ ci ] ) was 2.07 ( 95% ci : 1.5 - 2.8 ) for right bundle branch block , 1.5 ( 95% ci : 1.3 - 1.7 ) for st - segment elevation mi , 1.3 ( 95% ci : 1.1 - 1.4 ) for female gender , and 1.2 ( 95% ci : 1.1 - 1.3 ) for humidity , all of which were considered risk factors for mortality . in contrast , the or of mortality was 0.7 for precipitation ( 95% ci : 0.7 - 0.8 ) and 0.5 for angioplasty ( 95% ci : 0.4 - 0.6 ) , and these were considered protective factors against mortality . conclusions : individual risk factors had independent effects on hospital mortality due to mi . variables at the province level had no significant effect on the outcome of mi . increasing access to and the quality of treatment could reduce the mortality due to mi .
INTRODUCTION METHODS Study design and participants Variables assessment Statistical analysis RESULTS DISCUSSION CONCLUSIONS
PMC4886840
it is known that swimming is a non - weight - bearing activity that can increase aerobic capacity and lean body mass ; however , it has no positive effects on bone mineral density ( bmd ) [ 1 - 5 ] . the loss of bone volume and quality due to a lack of mechanical stimuli can be explained by wolff 's law . bone loss may induce osteoporosis later in life , a skeletal disease characterized by low bone mass and microarchitectural deterioration of bone tissue , leading to increased susceptibility to fractures . osteoporosis has become a pressing health problem , particularly for women , and it has long been known that female athletes , especially swimmers , have menstrual irregularity [ 9 - 11 ] . females are more vulnerable to bone loss as a result of having smaller skeletons compared with males , and because of the estrogen patterns that accompany menarche and menopause . hence , low bmd associated with menstrual irregularity in female athletes may cause severe health problems later in life . previous cross - sectional studies have compared the bmd of competitive swimmers to that of other competitive athletes , showing that the bmd of collegiate swimmers is significantly lower than that of their counterparts who participate in impact exercise and equal to or lower than that of non - athletic controls . the results of these studies indicate that swimming is not beneficial for the promotion , maintenance , or increase of bmd . conversely , several studies have reported that there is no significant difference between swimmers and controls with respect to bmd [ 14 - 16 ] . as shown in a previous study , a bone - exercise - nutrition interaction exists , and therefore , studies that evaluate the effect of dietary supplementation in swimmers are needed in order to find out the possible interaction between training and bone health . nutritional aspects such as calcium , magnesium , and vitamin d may also affect bone health in swimmers . to date , swim training still seems to have conflicting effects on bone health maintenance in female athletes . therefore , the present review focuses on swim training , dietary supplementation , and bmd in athletes , drawing on representative research from the 1990s to date , with a view to elucidating the effect of swim training on bmd in the athletic population .
the international society of clinical densitometry ( iscd ) recommends the use of z - scores rather than t - scores , because they compare bmd with age- and sex - matched controls . iscd defines a z - score of -2 or below as low bmd under the expected age , and recommends that a diagnosis of osteoporosis be made if additional risk factors for poor bone health are identified . the american college of sports medicine defines the term low bmd as a history of nutritional deficiencies , hypoestrogenism , stress fractures , and/or other secondary clinical risk factors for fracture , together with a z - score between -1.0 and -2.0 due to the fact that athletes tend to have 10 - 15% higher bmd than the non - athletic population . although low bmd has been defined as a z - score of -2 or below , there is no report of the characteristics of normal distribution , and therefore , the exact range of low bmd can not be obtained . moreover , a z - score between -1.0 and -2.0 can be applied to members of the entire population including well - trained athletes and untrained individuals .
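as a compact restatement of the densitometric scores just discussed ( standard definitions , not specific to any of the cited studies ) , the z - score compares a measured bmd with an age- and sex - matched reference population , whereas the t - score compares it with a young - adult reference population of the same sex :

```latex
\[
  Z = \frac{\mathrm{BMD}_{\mathrm{measured}} - \mu_{\mathrm{age,\ sex}}}{\sigma_{\mathrm{age,\ sex}}} ,
  \qquad
  T = \frac{\mathrm{BMD}_{\mathrm{measured}} - \mu_{\mathrm{young\ adult}}}{\sigma_{\mathrm{young\ adult}}}
\]
```

a z - score of -2 therefore means a bmd two reference standard deviations below the mean of age- and sex - matched controls , which is why the z - score rather than the t - score is preferred for athletes and other younger populations .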
hence , bmd measurement in the athletic population needs to consider characteristics of the sport type and bone - loaded regions . according to wolff 's law , in the healthy population , bone usually responds to stress by increasing its mass and bmd . bone mass can be increased by physical activities requiring high force and/or generating high impact on bone . in previous studies , several exercise training programs have been used to improve bmd [ 8 , 20 ] . in general , swimming is considered to be an unloaded exercise that has no impact on bmd , whereas exercises that include walking , jumping , running , and stopping , such as basketball , running , and gymnastics , are considered to be high - impact sports that have a positive effect on bmd . however , bmd can also be directly affected by weight bearing and indirectly affected by repeated muscle contraction . therefore , recent studies have focused on the relationship between bmd and the characteristics of muscle fibers based on genetic factors in animals and humans . although swim training may not actually cause positive phenotypic effects on bmd , regular and long - term exercise is repeatedly performed and thus may appear to exert positive changes in bmd . to support this suggestion , studies of the joint effect of phenotypic activation , considering age , physiological characteristics , and the major and minor trace minerals affecting bone metabolism , should be conducted in order to elucidate an exact mechanism for bmd change . in the 1990s , studies examined and compared bmd in swimmers and athletes who participated in impact sports , concluding that swimming does not promote high bmd , based on their attempts to mechanistically explain their results . prior cross - sectional studies have demonstrated higher bmd among athletes who engage in weight - bearing and impact sports when compared with their non - athlete counterparts [ 23 - 26 ] , as well as with their non - weight - bearing athlete counterparts . one early study aimed to clarify the relationship between athletic training and bone density in young eumenorrheic athletes ; swimmers had a significantly lower mean density in the lumbar spine compared with all other groups , and the major conclusion of the paper was that the data support the concept that athletes in sports involving impact have a higher bmd than non - athletes and swimmers . taaffe et al . examined the role of skeletal loading patterns on bmd in eumenorrheic athletes who chronically trained using various forms of skeletal loading : the intensive impact of gymnastics , the non - weight - bearing of swimming , and the usual activity among controls . they examined 39 athletes who competed in their respective sports at the ncaa division 1 level and 19 non - athletes who exercised three hours or less a week . the study found no difference in bmd among the groups in either the lumbar spine or the whole body . the gymnasts and controls had greater bmd at the greater trochanter and femoral neck than the swimmers . the study concluded that high - impact weight - bearing activity is beneficial for bone accumulation , and the authors proposed that high ground reaction forces and extreme muscle contraction on bone may contribute to the higher bmd in gymnasts . another study compared the bmd of collegiate female athletes who competed in impact - loading sports ( volleyball and gymnastics ) and swimming .
impact - loading players had greater bmd than swimmers and controls ; however , there was no significant difference in bmd between swimmers and controls at any site . this study also concluded that the prevalence of menstrual dysfunction in some participants does not appear to negatively influence bmd . in the 2000s , creighton et al . examined bmd and markers of bone turnover in female athletes with different levels of impact in their sports . they determined the effect of regular training in three different types of competitive sport on bmd , bone formation , and bone resorption in young women . 50 women participated in the study and were separated into three groups based on the degree of impact , which was determined by the ground reaction forces associated with their sport . the high - impact group was found to have significantly higher bmd at the femoral neck , ward 's triangle , and trochanter , and a higher total bmd , than the medium - impact , non - impact ( swimmers ) , and control groups . markers of bone formation were significantly lower in the swimming group compared with the high - and medium - impact groups . swimmers had higher bone resorption and lower bone formation markers compared with the high - impact , medium - impact , and control groups . the study concluded that female athletes involved in high - impact sports have the greatest bmd at weight - bearing sites as well as the highest markers for bone formation . duncan and colleagues investigated the influence of different exercise types on bmd in elite female athletes including cyclists , runners , swimmers , triathletes , and controls ( 15 per group ) . total and regional areal bmd were measured using dxa scans , and it was found that running , a weight - bearing exercise , was associated with higher bmd than swimming or cycling , which is in accordance with previous studies . in 2007 , mudd and colleagues found that bone mass and sport type were important determinants of bone health in female athletes when comparing impact sports and swimming . this cross - sectional study recruited 99 collegiate female athletes in gymnastics , softball , cross country , field hockey , soccer , crew , and swimming / diving . runners had higher overall bmd values than the other sports athletes , and swimmers and divers had a significantly lower leg bmd ( 1.117 ± 0.086 g / cm² ) than every other sports athlete . this study concluded that greater differences among sports were seen when comparing lumbar , pelvis , and leg bmds , and recommended that a longitudinal study would need to be conducted in order to investigate bone changes in response to training over a longer period of time in female athletes . maimoun and colleagues investigated the bmd of gymnasts ( impact activity ) , swimmers ( non - impact activity ) , and controls using dxa for total and regional measurement . at baseline and the 12-month follow - up , gymnasts showed a significantly higher bmd than swimmers and controls . moreover , the total bmd of swimmers ( 0.960 ± 0.013 g / cm² ) was lower than that of the controls ( 0.995 ± 0.013 g / cm² ) at baseline and at the 12-month follow - up ( 0.991 ± 0.014 g / cm² and 1.009 ± 0.013 g / cm² , respectively ) . in accordance with previous findings , this study demonstrated that the osteogenic effect of impact activity was greater than that of non - impact activity . summarizing previous studies , it has been shown that the loading - induced bmd of impact sports athletes seems to be higher compared with swimmers and/or the non - athletic population .
thus , it appears that differences in bone - loading history and type may strongly affect skeletal adaptation in athletes . future studies should consider this point in order to elucidate a clearer mechanism for the effect of swim training on bmd . several previous studies on bmd and other markers of bone strength have shown an advantage for swimmers compared with inactive controls and/or other sports athletes . some researchers insist that swim training increases muscle contraction and strain on the skeleton , leading to increased mechanical loading , which would have a positive effect on bmd despite the fact that swimming generally represents a non - weight - bearing sport . matsumoto et al . examined biomarkers of bone metabolism and bmd in 103 male and female athletes . the subjects were japanese collegiate athletes who specialized in long distance running , judo , and swimming at the national level ; however , no controls were used in the study . whole body bmd was measured using dxa , and urine samples were collected for pyridinoline ( pyd ) and deoxypyridinoline ( dpd ) , both bone resorption markers , using high pressure liquid chromatography ( hplc ) . blood was collected for measurement of bone alkaline phosphatase ( balp ) and the carboxyterminal propeptide of type i collagen ( picp ) , both bone formation markers . the researchers found that the total body bmd was significantly higher in judo athletes compared with runners and swimmers . male runners had a significantly lower balp level than male judoists , and no significant difference in balp was found among the female athletes . the dpd level of male and female runners and swimmers was significantly lower than that of male and female judoists . however , the pyd level of female swimmers was not significantly different from that of runners or judoists . the conclusion by the researchers in this study was that differences in total body bmd are in part due to the demands of the specific sport , and that they are reflected in the levels of bone metabolic markers . this was the first study of its kind to examine biomarkers of bone metabolism among various types of athletes , although its results are limited and there were no control subjects . biomarkers are best assessed over time to determine change in bone metabolism with a given stimulus , and bmd is best assessed at clinically relevant sites such as the hip and spine . similar to this study , a summary of previous studies shows that swimmers seem to have higher levels of bone turnover biomarkers [ 33 , 34 ] , although they do not have a higher bmd compared with other sports athletes and/or controls . greenway et al . examined whether long - term participation in swim training has a negative effect , comparing 43 swimmers with 44 controls . the swimmers had a swimming career of over 5 years , and dxa was used to determine the total and regional bmd . there were no differences in bone mass or bmd between the two groups at any site . the total body z - score of the swimmers was 1.24 and that of the controls was 1.18 . this study concluded that long - term swim training participation did not compromise regional bmd , and that swim training coupled with weight - bearing activities may induce positive effects on bmd . it appears that a longitudinal swimming study investigating other physical activity history is needed to verify the exact effect of swim training on bmd . in 2014 , stanforth and colleagues compared the bmd of female athletes of various sports ( basketball , soccer , swimming , volleyball ) aged 18 - 23 .
they reported that female swimmers in this study had an increased bmd from baseline to year 3 post - season , although the increase was still smaller than that of impact sports athletes . they concluded that differences in bmd between impact and non - impact sports are large ; however , this study provides evidence that swim training may increase bmd in female swimmers . akgl and colleagues examined the effect of swim training on bmd in 79 swimmers . all swimmers had engaged in at least 2 years of swim training , and dxa was used to measure the total and regional bmd . the bmd of 68 swimmers ( 86% ) was normal , and a total of 9 swimmers ( 11.4% ) had low bmd . this study implies that swim training may positively affect bmd , although it did not induce increased bmd in female swimmers . summarizing previous studies , it has been shown that swimming may be preferable for maintaining bone health , although swimming has not been shown to improve bmd in the way that impact sports do . in addition to swim training , extra strength training may also positively affect bmd . therefore , swim training combined with other types of training would be beneficial to the bone health of swimmers , in order to ensure a better health condition . it has been stressed that reliable reports concerning the dietary habits of athletes , especially elite swimmers , are needed . several studies have shown significantly higher calcium intake in swimmers than in controls [ 36 - 38 ] . other nutritional aspects such as magnesium , iron , and vitamin d may also affect the maintenance of bone health in swimmers . a healthy and well - balanced diet , including the proper amount of calcium , vitamin d , magnesium , and iron , may provide sufficient nutrients . in 2009 , hoogenboom et al . determined the nutritional knowledge and eating behaviors of female collegiate swimmers . they proposed that swimming , due to the importance of a lean body weight , is associated with nutritional deficiency , and that this would lead to the development of the female athlete triad : osteoporosis , menstrual dysfunction , and eating disorders . to determine nutritional intake , they used a 24-hour recall food survey , with 85 collegiate female swimmers participating in the study . they reported a mean daily calcium intake of 1578.88 mg in all participants , which was a higher level compared with the recommended dietary allowance ( rda ) of calcium ( 1200 mg / day ) . dugocka and colleagues investigated bmd using dxa in young female swimmers ( n=41 , aged 11 - 14 years ) , considering nutritional aspects using a 3-day food intake recall survey . they reported that swimmers had a higher intake of calcium ( 602 ± 229 mg vs. 454 ± 236 mg ) and phosphorus ( 1390 ± 420 mg vs. 1027 ± 292 mg ) compared with non - athletic controls , although both groups had a deficiency in average calcium intake . however , they found that the mean value of bmd in the two groups did not differ , indicating that in this study , a higher intake of calcium and phosphorus in swimmers may positively affect bone health when compared to controls who were not active in sports . czeczuk et al . investigated the dietary habits ( calcium intake ) of 18 former swimmers and 18 current swimmers , reporting that calcium intake in both groups was sufficient and did not exceed 3/4 of the daily norm . these previous studies reported that calcium consumption in swimmers was higher than the rda and/or than in controls .
in contrast , several previous studies have shown that female swimmers have calcium intakes below the rda and/or below those of other sports athletes and controls . berning et al . investigated the dietary food records of adolescent male and female swimmers , and reported that over 50% of elite adolescent female swimmers consumed lower calcium and iron intakes compared with the rda and with other sports athletes . similar to this study , hawley and williams investigated 20 elite swimmers ( 11 females , 9 males ) using a 4-day food intake survey , and found that 55% of swimmers consumed an intake of calcium below the rda and 65% of swimmers had a lower intake of iron . moreover , the mean iron intake of female swimmers was significantly lower than the rda , and 82% of female swimmers consumed a low iron intake in this study . based on previous studies [ 42 - 44 ] , a low iron intake is common in female athletes . greenway et al . examined the calcium intake for the previous 12 months through a questionnaire , and they also found that calcium intake was low in swimmers and controls . swimmers achieved 83% of the recommended daily intake ( rdi ) ; however , when compared with the controls ( who achieved 66% of the rdi ) , swimmers consumed a higher amount of calcium . czeczelewski and colleagues conducted a 3-year monitoring of bmd and nutrient intakes in adolescent female swimmers ( n=20 ) and non - athletic controls ( n=20 ) . the calcium intake of the swimmers was below the recommended values , and throughout the 3 years , the bmd of the swimmers decreased , suggesting that this could have been due to insufficient calcium intake . akgl and colleagues examined the dietary information of 79 swimmers using a 3-day food diary , and also reported low calcium intake in the swimmers . moreover , they found that swimmers with a normal bmd consumed a higher amount of calcium ( median 700 mg ) than the swimmers with a low bmd ( median 600 mg ) . furthermore , 31 ( 39.2% ) swimmers in this study had a vitamin d deficiency . vitamin d supplementation is also becoming more popular , especially in the athletic population , because it may reduce the incidence of stress fractures when combined with calcium , and may also act as an ergogenic factor for athletes . summarizing previous studies , many authors have reported common dietary irregularities in female adolescent and adult swimmers in relation to swim training . therefore , to elucidate the exact mechanism and association between dietary supplementation and swim training , close and precise monitoring of the nutritional habits of swimmers should be conducted . upon the review of previous studies , it is obvious that the majority of studies did not collect physical activity data on the swimmers outside of their swimming activities . these extra activities may have some influence on the bmd of swimmers , and therefore , future studies need to examine additional physical activity history data in addition to swim training . this additional information may help explain why swimmers ' bmd tends to be lower than the bmd of the controls in many studies . if a swimmer participated at a young age in impact - based sports , such as running or gymnastics , this may be reflected in their current bmd . a thorough knowledge of swimmers ' past physical activity can help to better understand the results seen with dxa and biomarker analysis . since bone adaptation to exercise is limited to loaded regions , exercise types should be carefully chosen .
nutritional intake of calcium , magnesium , and vitamin d for swimmers also needs to be considered when conducting training and bmd interaction studies . the compilation of results in this review suggests that further exercise intervention studies are needed in the attempt to introduce various exercise programs to female swimmers , in order to determine the optimal exercise prescription for bone health . moreover , longitudinal studies and randomized control trials are necessary to better understand the association of swim training with bmd in female athletes .
[ purpose ] the present paper reviews the physiological adaptation to swim training and dietary supplementation relating to bone mineral density ( bmd ) in female swimmers . swim training still seems to have conflicting effects on bone health maintenance in athletes . [ methods ] this review article focuses on swim training combined with dietary supplementation with respect to bmd in female athletes . [ results ] upon review of previous studies , it became obvious that the majority of studies did not collect physical activity data on the swimmers outside of their swimming activities . these activities may have some influence on the bmd of swimmers and therefore , future studies need to examine additional physical activity history data as well as swim training . this additional information may help to explain why swimmers ' bmd tends to be lower than the bmd of control individuals in many studies . moreover , dietary supplementation such as calcium , magnesium , and vitamin d also affect bone health in swimmers , and it is extremely important to evaluate bmd in the context of dietary supplementation . [ conclusion ] a review of the literature suggests that exercise intervention studies , including longitudinal and randomized control trials , need to attempt to introduce various exercise programs to female swimmers in order to determine the optimal exercise prescription for bone health .
INTRODUCTION Definition of low bone mineral density in athletes Effect of exercise on bone Negative effects of swim training on bone mineral density Positive effects of swim training on bone mineral density The effect of dietary supplementation on bone mineral density CONCLUSION
PMC5253803
hole argument is the english translation of the german phrase lochbetrachtung , used by albert einstein to describe his argument against the possibility of generally - covariant equations for the gravitational field , developed in late 1913 and accepted until late 1915 . einstein realized the desirability of general covariance , and showed that it was easily implemented for the rest of physics ; but the hole argument purported to show why it could not be demanded of the gravitational field equations he was trying to formulate for the metric tensor.1 this article is a historical - critical study , in ernst mach 's sense.2 it includes a review of the literature on the hole argument that concentrates on the interface between historical , philosophical and physical approaches . although recounting the history of the hole argument , the primary purpose is to discuss its contemporary significance in both physics and philosophy for the study of space - time structures . like mach , while presenting various other viewpoints , i have not hesitated to advocate my own . in physics , i believe the main lesson of the hole argument is that any future fundamental theory , such as some version of quantum gravity , should be background independent , with basic elements obeying the principle of maximal permutability . in the philosophy of space - time , this leads me to advocate a third way that i call dynamic structural realism , which differs from both the traditional absolutist and relationalist positions . one of the most crucial developments in theoretical physics was the move from theories dependent on fixed , non - dynamical background space - time structures to background - independent theories , in which the space - time structures themselves are dynamical entities . this move began in 1915 when einstein stated the case against his earlier hole argument . even today , many physicists and philosophers do not fully understand the significance of this development , let alone accept it in practice . so it is of more than historical interest for physicists and philosophers of science to understand what initially motivated this move , as well as the later developments stemming from it . einstein 's starting point was the search for a generalization of the special theory that would include gravitation . he quickly realized that the equivalence principle compelled the abandonment of the privileged role of inertial ( i.e. , non - accelerated ) frames of reference , and started to investigate the widest class of accelerated frames that would be physically acceptable . his first impulse was to allow all possible frames of reference ; since he identified frames of reference and coordinate systems , this choice corresponds mathematically to a generally - covariant theory . but he soon developed an argument , the hole argument , purporting to show that generally - covariant equations for the metric tensor are incompatible with his concept of causality for the gravitational field . the argument hinged on his tacit assumption that the points of space - time are inherently individuated , quite apart from the nature of the metric tensor field at these points .
only two years later , after other reasons compelled him to reconsider general covariance , did einstein finally recognize the way out of his dilemma : one must assume that , in an empty region of space - time , the points have no inherent individuating properties , nor indeed are there any spatio - temporal relations between them that do not depend on the presence of some metric tensor field . thus , general relativity became the first fully dynamical , background - independent space - time theory . without some knowledge of this historical background , it is difficult to fully appreciate either the modern significance of the hole argument , or the compelling physical motives for the requirement of background independence . einstein 's starting point in his search for a theory of gravitation was the theory we now call special relativity . from a contemporary viewpoint , its most important feature is that it has two fixed , kinematical space - time structures , the chrono - geometry embodied in the minkowski metric tensor field and the inertial field embodied in the associated flat affine connection , both of which are invariant under the ten - parameter lie group now called the poincaré or inhomogeneous lorentz group.3 in minkowski space - time , all dynamical theories must be based on geometric objects that form a representation ( or more generally , a realization ) of this group . there is a preferred class of spatial frames of reference in minkowski space - time , the inertial frames . einstein had shown how to define a class of physically preferred coordinate systems for each inertial frame of reference ; in particular , he defined a clock synchronization procedure that provides a preferred global time for each frame . this enabled him to show how the principle of relativity of all inertial frames could be reconciled with the universal properties of light propagation in vacuum . the lesson he drew was the need to find a physical interpretation of the coordinates associated with an inertial frame of reference , a lesson that had to be painfully unlearned in his search for a generalized theory of relativity . in large part , the history of the hole argument is the story of that unlearning process . the end result was the formulation of the general theory of relativity , the first background - independent physical theory turning all space - time structures into dynamical fields . this was such a revolutionary break with all previous physical theories , in which space - time structures constitute a fixed , non - dynamical background , that its ultimate significance is still debated by physicists.4 understanding the hole argument in both its historical and contemporary aspects can help to clarify the issues at stake in this debate . the basic issue can be stated as follows : given a physical theory , when should an equivalence class of mathematically distinct models of the theory be identified as corresponding to a single , unique physical model ? the hole argument shows that , for any theory defined by a set of generally - covariant field equations , the only way to make physical sense of the theory is to assume that the entire equivalence class of diffeomorphically - related solutions to the field equations represents a single physical solution . as will be seen later , mathematically this result can best be stated in the language of natural bundles .
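as a compact restatement of the point just made ( standard notation , not a quotation from the article ) : if a metric g solves a generally - covariant set of field equations and φ is any diffeomorphism of the manifold m , then the pulled - back metric φ*g is also a solution ; choosing φ to be the identity outside a region h ( the " hole " ) but non - trivial inside it yields two solutions that agree outside h and differ inside it , so only the diffeomorphism - equivalence class as a whole can carry physical meaning .

```latex
% schematic form of the hole - argument equivalence ( standard formulation )
\[
  E[g] = 0 \;\Longrightarrow\; E[\phi^{*}g] = 0
  \qquad \text{for every diffeomorphism } \phi : M \to M ,
\]
\[
  \phi\big|_{M \setminus H} = \mathrm{id}
  \;\Longrightarrow\;
  \phi^{*}g = g \ \text{outside } H ,
  \quad \text{while in general } \phi^{*}g \neq g \ \text{inside } H ,
\]
\[
  \text{one physical solution} \;\longleftrightarrow\; \{\, \phi^{*}g \;:\; \phi \in \mathrm{Diff}(M) \,\} .
\]
```

einstein 's 1913 dilemma arose from reading the second line as a failure of determinism , a reading that presupposes the points of h are individuated independently of the metric .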
but a similar result holds for the even broader class of all gauge - invariant field theories , notably yang - mills theories : an equivalence class of gauge - related models of any such theory must be physically identified . mathematically , broadening the question in this way requires the language of gauge - natural bundles . general relativity itself may also be treated by the use of gauge - natural bundle techniques : its similarities to and differences from gauge theories of the yang - mills type will also be discussed . this move to natural and gauge - natural formulations of field theories also has important implications for the philosophy of space and time . the old conflict between absolute and relational interpretations of space and then space - time has been renewed on this new ground . but i shall argue that this reformulation of the question suggests a third position , around which a consensus is forming . this position has been given various names , but i prefer dynamic structural realism . sections 2.1 – 2.5 recount the developments leading up to einstein s adoption of the hole argument against general covariance in 1913 , how it misled him for over two years , the reasons for his rejection of it in late 1915 , and its replacement by the point - coincidence argument for general covariance . section 2.6 discusses kretschmann s 1917 critique of the concept of general covariance and einstein s 1918 reply ; decades later this debate led komar to propose the use of what are now called kretschmann - komar coordinates as a way of resolving the hole argument . finally , section 2.7 discusses hilbert s 1917 reformulation of the hole argument : he replaced the four - dimensional hole in space - time with a space - like hypersurface , on which he posed an initial value problem for the field equations ; this was the first step in a series of developments culminating a decade later in a fully satisfactory formulation by darmois of the general - relativistic cauchy problem . section 3 discusses the revival of interest in the hole argument in the 1970s , which grew out of an attempt to answer a historical question : why did three years ( 1912 – 1915 ) elapse between einstein s adoption of the metric tensor to represent the gravitational field and his adoption of what are now called the einstein equations for this field ? some highlights of this discussion are recalled , from the post - world war ii revival of interest in general relativity up to the present . section 4 presents a modern version of the hole argument in general relativity , and its generalization from metric theories of gravitation to gauge - natural field theories . by abstraction from continuity and differentiability , the concept of general covariance of a field theory is similarly extended to general permutability , a concept wide enough to include theories based on relations between the elements of any set . sections 5 and 6 focus on current discussions of philosophical and physical implications of the hole argument , respectively ; no attempt is made to rigidly separate issues that overlap both areas . section 5 discusses such issues as : the range of applicability of the hole argument , the correct mathematical definition of general covariance and its physical significance , the controversy between relationalists and substantivalists in discussions of space - time structures .
the arguments of earman , pooley and stachel are reviewed , and their convergence on a third alternative , which i call dynamic structural realism , is stressed . section 6 discusses such issues as partially background - independent theories , including mini- and midi - solutions to the einstein field equations ; the reformulation of general relativity as a gauge natural theory ; and some implications of the hole argument for attempts to formulate a quantum theory of gravity . einstein attributed his success in formulating the special theory in 1905 in no small measure to his insistence on defining coordinate systems that allowed him to attach physical significance to spatial and temporal coordinate intervals : the theory to be developed , like every other electrodynamics , is based on the kinematics of rigid bodies , since the propositions of any such theory concern relations between rigid bodies ( coordinate systems ) , clocks and electromagnetic processes . insufficient regard for this circumstance is the root of the difficulties , with which the electrodynamics of moving bodies currently has to contend ( einstein , 1905 , my translation ) . his subsequent attempt to include gravitation in his theory focused on the equality of gravitational and inertial mass , and led him to adopt the equivalence principle : inertia and gravitation are wesensgleich ( the same in essence ) , and must be represented by a single inertio - gravitational field.5 the distinction between the two is not absolute ( i.e. , frame independent ) , but depends on the frame of reference adopted . in particular , he noted that a linearly accelerated ( rigid ) frame of reference in a space - time without a gravitational field is physically equivalent to an inertial frame of reference , in which there is a uniform , constant gravitational field : both result in equal acceleration of bodies moving relative to their respective frames . he concluded that , in order to include gravitation , one must go beyond the special theory , with its privileged role for inertial frames , and look for a generalized ( verallgemeinerte ) theory of relativity . in the simplest case , linearly accelerated frames in minkowski space , the usual time coordinate loses its direct physical significance ; and in uniformly rotating frames , a global time can not even be defined . in the latter case , the spatial coordinates also lose their direct significance : the measured spatial geometry is no longer flat . i soon saw that , according to the point of view about non - linear transformations required by the equivalence principle , the simple physical interpretation of the coordinates had to be abandoned . this recognition tormented me a great deal because for a long time i was not able to see just what are coordinates actually supposed to mean in physics ? ( einstein 1933 , translation from stachel , 2007 , p. 86 ) .
the equivalence principle made it not only probable that the laws of nature must be invariant with respect to a more general group of transformations than the lorentz group ( extension of the principle of relativity ) , but also that this extension would lead to a more profound theory of the gravitational field . first of all , elementary arguments showed that the transition to a wider group of transformations is incompatible with a direct physical interpretation of the space - time coordinates , which had paved the way for the special theory of relativity . further , at the outset it was not clear how the enlarged group was to be chosen ( einstein , 1956 , my translation ) . einstein first attempted to develop a theory of the gravitational field produced by a static source , still based on the idea of a scalar gravitational potential . his earlier work had led him to consider non - flat spaces ; this work led him to consider non - flat space - times : he found that his equation of motion for a test particle in a static field can be derived from a variational principle : $$\delta \int \{ [ c(x , y , z) ]^{2}\, dt^{2} - [ dx^{2} + dy^{2} + dz^{2} ] \}^{1/2} = 0 , \qquad (1)$$ where he interpreted c(x , y , z ) as a spatially - variable speed of light.6 already familiar with minkowski s four - dimensional formulation of the special theory , he realized that this variational principle could be interpreted as the equation for a geodesic ( i.e. , an extremal ) in a non - flat space - time with ds^2 = [ c(x , y , z) ]^{2} dt^{2} - [ dx^{2} + dy^{2} + dz^{2} ] as its line element . by explicitly introducing the flat minkowski pseudo - metric , einstein then made a big leap : he generalized the geodesic equation using a non - flat riemannian pseudo - metric g_{\mu\nu} , and assumed that it would still describe the path of a test particle in an arbitrary non - static gravitational field . the gravitational theory he was seeking must be based on such a non - flat metric , which should both : determine the line element , ds^2 = g_{\mu\nu} dx^{\mu} dx^{\nu} , representing the chrono - geometry of space - time ; and serve as the potentials for the inertio - gravitational field .
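for reference , the geodesic equation that results from such a variational principle for a general pseudo - metric can be written out explicitly ( standard textbook form , not a quotation from the sources discussed here ) : $$\delta \int \{ g_{\mu\nu}\, dx^{\mu} dx^{\nu} \}^{1/2} = 0 \quad \Longrightarrow \quad \frac{d^{2}x^{\lambda}}{ds^{2}} + \Gamma^{\lambda}_{\mu\nu}\, \frac{dx^{\mu}}{ds}\frac{dx^{\nu}}{ds} = 0 , \qquad \Gamma^{\lambda}_{\mu\nu} = \tfrac{1}{2}\, g^{\lambda\rho} \left( \partial_{\mu} g_{\rho\nu} + \partial_{\nu} g_{\rho\mu} - \partial_{\rho} g_{\mu\nu} \right) .$$ with g_{00} = [ c(x , y , z) ]^{2} , g_{ij} = - \delta_{ij} and g_{0i} = 0 , this variational principle reduces to eq . ( 1 ) above .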
while a student at the swiss federal polytechnic , einstein had learned about gauss theory of non - flat surfaces , and realized he needed a four - dimensional generalization . his old classmate and new colleague , the mathematician marcel grossmann , told einstein about riemann s generalization of gauss theory and about the tensor calculus ( absolute differential calculus ) , developed by ricci and levi - civita to facilitate calculations in an arbitrary coordinate system . still identifying a coordinate system with a physical frame of reference , his goal of extending the principle of relativity led einstein to investigate the widest possible group of coordinate transformations . with grossmann s help , he succeeded in formulating the influence of the inertio - gravitational field on the rest of physics by putting these equations into a generally covariant form . the one exception was the gravitational field equations , the problem to which they now turned . general covariance then meant covariance under arbitrary coordinate transformations;7 the concepts of covariant derivative and riemann tensor were based on the theory of differential invariants , and lacked a simple geometrical interpretation.8 nevertheless , einstein seriously considered the ricci tensor , the only second rank contraction of the riemann tensor , for use in the gravitational field equations . he tried , in linear approximation , setting it equal to the stress - energy - momentum tensor of the sources of the gravitational field ; and even realized that , in order to obtain consistency with the vanishing divergence of the source tensor , the ricci tensor would have to be modified by a trace term.9 however , after coming so close to the final form of field equations of gr , he retreated . his earlier work on static fields led him to conclude that , in adapted coordinates , the spatial part of the metric tensor must remain flat [ see eq . ( 1 ) ] , which is easily shown to be incompatible with field equations based on the ricci tensor . so , as he later put it , he abandoned these equations with a heavy heart , and began to search for non - generally - covariant field equations . einstein soon developed a meta - argument against a generally covariant set of field equations for the metric tensor . why did he formulate this argument in terms of a hole , a finite region of space - time devoid of all non - gravitational sources?10 it was probably the influence of mach s ideas . one of einstein s main motivations in the search for a generalized theory of relativity was his interpretation of mach s critique of newtonian mechanics . according to mach , space is not absolute : it does not have any inherent properties of its own , and its apparent influence on the motion of a body , as manifested in the law of inertia , for example , must result from an interaction between the moving body and all the rest of the matter in the universe .
mach suggested that the inertial behavior of matter in the region of empty space around us is the effect of all the matter in the universe that surrounds this region . when einstein adopted the metric tensor as the representation of the potentials of the inertio - gravitational field , he interpreted mach s idea as the requirement that the metric field in such a hole be entirely determined by its sources , that is , by the stress - energy tensor of the surrounding matter.11 the hole argument purports to show that , if the field equations are generally covariant , this requirement can not be satisfied : even if the field and all its sources outside of and on the boundary of the hole are fully specified , such equations can not determine a unique field in the interior of the hole . we present the argument here in einstein s original coordinate - based formulation ( see section 4 for a modern , coordinate - free form ) . let the metric tensor be symbolized by the single letter g , and a given four - dimensional coordinate system by a single letter x. suppose g(x ) is a solution to a set of field equations . if both the coordinates and components of the metric tensor are subject to the transformation $$x \rightarrow x' = f(x) \qquad (2)$$ from coordinates x to another coordinate system x′ , then g′(x′ ) represents the same solution in the new coordinate system . this presents no problem ; einstein is quite clear on this point.12 but if the field equations are generally covariant , einstein noted , it follows that g′(x ) must also be a solution . he emphasized clearly that g(x ) and g′(x ) represent two mathematically distinct solutions in the same coordinate system.13 now consider a hole h , a bounded , closed region of space - time , in which all non - gravitational sources of the field , represented by the stress - energy tensor t , vanish ; and suppose the field g(x ) and all its sources t are specified everywhere outside the hole and on its boundary , together with any finite number of normal derivatives of the field on the boundary . even then , the field in the interior of the hole is not uniquely determined . for there are coordinate transformations x → x′ that reduce to the identity outside of and on the boundary of h , together with all their derivatives up to any finite order ; yet which differ from the identity inside h. such coordinate transformations will leave t unchanged , and the resulting g′(x ) will still equal g(x ) outside of and on the boundary of h ; but g′(x ) will differ from g(x ) inside h. in short , if the field equations are generally covariant , then specification of the gravitational field together with its sources outside of and on the boundary of such a hole does not suffice to determine the field inside . einstein concluded that generally - covariant field equations could not be used to describe the metric tensor field , and began a search for non - covariant field equations . now the question became : if lorentz invariance is too little ( equivalence principle ) and general covariance is too much ( hole argument ) , what is the widest possible group ( if it is a group ! ) of coordinate transformations , under which one can demand the invariance of such equations ?
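before moving on , the logic of this construction can be mimicked in a toy computation ( my own illustration , not from the source ) : a scalar field on a one - dimensional space stands in for the metric tensor , and a point transformation that is the identity outside a hole but not inside it carries the field into a new field that agrees with the original outside the hole yet differs from it inside :

import numpy as np

# toy "space - time": points x on the real line; the "hole" is the open interval (-1, 1)
x = np.linspace(-3.0, 3.0, 601)

def phi(x):
    # a sample field configuration (a scalar stand-in for g(x), not an actual metric)
    return 1.0 + 0.3 * np.exp(-x**2)

def f(x):
    # point transformation x -> x' = f(x): the identity outside the hole,
    # a smooth deformation inside it (the bump vanishes, with all derivatives, at |x| = 1)
    inner = np.exp(-1.0 / (1.0 - np.minimum(x**2, 0.999999)))
    return x + 0.5 * np.where(np.abs(x) < 1.0, inner, 0.0)

# carry-along of the field: the new field phi' satisfies phi'(f(x)) = phi(x);
# evaluate phi' on the original grid by interpolation
y = f(x)
phi_dragged = np.interp(x, y, phi(x))

outside = np.abs(x) >= 1.0
print("max |phi' - phi| outside the hole:", np.abs(phi_dragged - phi(x))[outside].max())
print("max |phi' - phi| inside  the hole:", np.abs(phi_dragged - phi(x))[~outside].max())
# the first number is (numerically) zero, the second is not: the carried-along field
# agrees with the original outside the hole but differs from it inside.

the two fields are mathematically distinct inside the hole even though nothing outside it has been changed , which is the formal content of einstein s non - uniqueness claim .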
the hole argument is not valid for the inhomogeneous lorentz ( poincaré ) group ; and einstein concluded that the invariance group of the field equations should be extended only up to , but not including , a group for which the hole argument becomes valid in coordinates adapted to this group . because of problems unrelated to the hole argument , in mid-1915 einstein abandoned the search for a non - covariant theory of gravitation and returned to the riemann tensor . after several intermediate steps , by november of that year he adopted the set of generally - covariant equations now known as the einstein equations . his successful explanation of the anomalous perihelion advance of mercury convinced him and many others of the profound significance of the resulting theory , known today as general relativity . what about the hole argument ? einstein realized that , to avoid it , he had only to drop one of the premises that he had tacitly adopted : the assumption that the points of space - time in the hole are individuated independently of the metric field . if that assumption is dropped , it follows that when the metric is dragged - along by a coordinate transformation , all the individuating properties and relations of the points of space - time are dragged along too.14 while g′(x ) does differ mathematically from g(x ) inside h , they are merely different representations of the same physical solution . properly - specified conditions outside the hole will suffice to specify a unique physical solution inside the hole . in order to better illustrate the flaw in the hole argument , einstein developed a counter - argument , the point - coincidence argument.15 there are actually two versions of this argument , which have been called the private and the public one.16 first i shall cite the private version . in letters to friends , einstein explained why the argument no longer applies to general relativity : everything in the hole argument was correct up to the final conclusion . it has no physical content if , with respect to the same coordinate system k , two different solutions g(x ) and g′(x ) exist [ see section 2.3 ] . to imagine two solutions simultaneously on the same manifold has no meaning , and indeed the system k has no physical reality . if , for example , all physical events were to be built up from the motions of material points alone , then the meetings of these points , i.e. , the points of intersection of their world lines , would be the only real things , i.e. , observable in principle . these points of intersection naturally are preserved during all [ coordinate ] transformations ( and no new ones occur ) if only certain uniqueness conditions are observed . it is therefore most natural to demand of the laws that they determine no more than the totality of space - time coincidences . from what has been said , this is already attained through the use of generally covariant equations ( letter to michele besso , 3 january 1916 , in schulmann et al . , 1998 ) .
einstein s argument consists of three points ; in modern language , they are : 1 ) if two metrics in their respective different coordinate systems differ only in that one is the carry - along of the other , then physically there is no distinction between them . 2 ) generally covariant equations have the property that , given a solution , any carry - along of that solution in the same coordinate system is also a solution to these equations . 3 ) in the absence of a metric tensor field , a coordinate system on a differentiable manifold has no intrinsic significance.17 note that points 1 ) and 2 ) were included in the original hole argument ( see section 2.3 ) . it follows from the three points that the entire equivalence class of carry - alongs of a given solution in the same coordinate system corresponds to one physical gravitational field . thus , the hole argument fails . as will be seen in section 4 , point 1 ) constitutes a coordinate - dependent version , applied to the metric tensor , of what i call the basic or trivial identity ; point 2 ) constitutes the coordinate - dependent version , applied to the metric tensor , of my definition of covariant theories . i would apply the term generally covariant to the conclusion that an entire equivalence class of carry - alongs corresponds to one physical solution . the coordinate - independent versions of all three concepts are obtained by substituting basis vectors for coordinates and diffeomorphisms for coordinate transformations . einstein s 1916 review paper presents the public version of the argument to justify the requirement that any physical theory be invariant under all coordinate transformations : our space - time verifications invariably amount to a determination of space - time coincidences . if , for example , events consisted merely in the motion of material points , then ultimately nothing would be observable but the meetings of two or more of these points . moreover , the results of our measurements are nothing but verifications of such meetings of the material points of our measuring instruments with other material points , coincidences between the hands of a clock and points on the clock - dial , and observed point - events happening at the same place at the same time . the introduction of a system of reference serves no other purpose than to facilitate the description of the totality of such coincidences .
we allot to the universe four space - time variables x1 , x2 , x3 , x4 in such a way that for every point - event there is a corresponding system of values of the variables x1 ... x4 . to two coincident point - events there corresponds one system of values of the variables x1 ... x4 , i.e. , coincidence is characterized by the identity of the co - ordinates . if , in place of the variables x1 ... x4 , we introduce functions of them , x1′ , x2′ , x3′ , x4′ , as a new system of co - ordinates , so that the systems of values are made to correspond to one another without ambiguity , the equality of all four co - ordinates in the new system will also serve as an expression for the space - time coincidence of the two point - events . as all our physical experience can be ultimately reduced to such coincidences , there is no immediate reason for preferring certain systems of coordinates to others , that is to say , we arrive at the requirement of general covariance ( einstein , 1916 , pp . 776 – 777 , reprinted in kox et al . , 1996 ) . indeed , he proceeds to illustrate it with a version of the trivial identity applied to a system of particles , rather than fields , the model being any set of particle world lines , without any requirement that they satisfy equations of motion . einstein also mentions the requirement of general covariance ; but here it amounts basically to point 3 ) together with a generalization of point 1 ) to any objects used in a physical theory , whether or not they obey any field equations . it is essentially a coordinate - dependent version of the basic identity , extended from metrics to all geometric object fields of a certain type ( see section 4.2 ) . we see here the origins of two differing usages of the term general covariance : one involves the field equations , the other does not .
he wrote to schlick : generally considered , your presentation of the [ point coincidence ] argument does not correspond with my conception of it since i find your entire conception too positivistic , so to speak . physics is an attempt at the conceptual construction of a model of the real world , as well as of its lawful structure . indeed it must represent exactly the empirical relations between the sense experiences that are accessible to us ; but only in this way is it linked to the latter ( einstein to moritz schlick , 28 november 1930 ; cited from engler and renn , 2013 , p. 18 ) . in 1915 , even before einstein completed the general theory of relativity , erich kretschmann ( 1915a , b ) had undertaken an investigation that led him to a version of the trivial identity . kretschmann ( 1917 ) uses einstein s public point coincidence argument to conclude that any theory could be put into a form satisfying einstein s principle of general covariance . einstein ( 1918 ) concedes the point , but argues , not very successfully,18 that an added criterion of simplicity gives the principle a heuristic significance . evidently , he was not himself clear on the difference between his two arguments : while the public point coincidence argument does not provide a criterion for singling out theories , the criterion of general covariance in the private argument does.19 of greater future significance was kretschmann s suggestion : use four invariants of the riemann tensor to fix a unique coordinate system ( an individuating field in my terminology ) . one section of kretschmann ( 1917 ) discusses the use of the principal directions of the riemann tensor to fix the coordinate directions;20 and section iii.2 proposes the use of four mutually - independent invariants of the riemann tensor and metric as the space - time coordinates.21 apparently unaware of kretschmann ( 1917 ) , arthur komar ( 1958 ) also suggested the use of four invariants of the riemann tensor as coordinates . in subsequent discussions of the problem of true observables in general relativity , they are often referred to as kretschmann - komar coordinates . stachel ( 1989 , 1993 ) noted their use as a way of individuating the points of space - time , and they have subsequently figured in many discussions of the hole argument . kretschmann ( 1917 ) notes that : this system of [ principal ] directions naturally may be indeterminate or otherwise degenerate ; and that the four invariants may be used as coordinates only by postulating that in no finite four - dimensional region are [ they ] mutually dependent . section 6.1 discusses the treatment of such cases , in which the symmetry or isometry group of an equivalence class of metrics is non - trivial .
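as a small illustration of the kind of quantity kretschmann and komar have in mind , the following sketch ( my own toy example , in two dimensions rather than four ) computes a curvature invariant directly from a metric ; for the unit 2 - sphere the curvature scalar comes out as the constant 2 , which incidentally also shows why such invariants fail to individuate points when the metric has symmetries :

import sympy as sp

# the unit 2-sphere, ds^2 = d(theta)^2 + sin^2(theta) d(phi)^2
th, ph = sp.symbols('theta phi')
coords = [th, ph]
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
ginv = g.inv()
n = 2

# christoffel symbols: Gamma^a_{bc} = 1/2 g^{ad} (d_b g_{dc} + d_c g_{bd} - d_d g_{bc})
Gamma = [[[sum(sp.Rational(1, 2) * ginv[a, d] *
               (sp.diff(g[d, c], coords[b]) + sp.diff(g[b, d], coords[c]) - sp.diff(g[b, c], coords[d]))
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# riemann tensor: R^a_{bcd} = d_c Gamma^a_{db} - d_d Gamma^a_{cb} + Gamma^a_{ce} Gamma^e_{db} - Gamma^a_{de} Gamma^e_{cb}
def riemann(a, b, c, d):
    expr = sp.diff(Gamma[a][d][b], coords[c]) - sp.diff(Gamma[a][c][b], coords[d])
    expr += sum(Gamma[a][c][e] * Gamma[e][d][b] - Gamma[a][d][e] * Gamma[e][c][b] for e in range(n))
    return sp.simplify(expr)

# ricci tensor R_{bd} = R^a_{bad} and curvature scalar R = g^{bd} R_{bd}
Ric = [[sp.simplify(sum(riemann(a, b, a, d) for a in range(n))) for d in range(n)] for b in range(n)]
R = sp.simplify(sum(ginv[b, d] * Ric[b][d] for b in range(n) for d in range(n)))
print(R)   # prints 2, the curvature scalar of the unit sphere (gaussian curvature 1), the same at every point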
as we have seen , in 1913 einstein formulated his argument against generally covariant equations in terms of the non - uniqueness of the field in a hole in space - time . david hilbert , the renowned mathematician , became interested in the problem of a unified gravitational - electromagnetic theory and followed einstein in arguing against generally covariant field equations . instead of a hole , however , he formulated the argument in a mathematically more sophisticated way , using a spacelike hypersurface.22 he showed that there is no well - posed cauchy problem for generally covariant equations ; i.e. , no finite set of initial values on such a hypersurface can determine a unique solution to these equations off the initial hypersurface.23 after einstein returned to generally covariant field equations , hilbert dropped this argument against them , and hilbert ( 1917 ) is the first discussion of the cauchy problem in general relativity ; but the analysis is far from complete.24 it was not until 1927 that georges darmois gave a reasonably complete treatment.25 his discussion included the role of null hypersurfaces as characteristics , the use of the first and second fundamental forms on a space - like hypersurface as initial data , and the division of the ten field equations into four constraints on the initial data and six evolution equations . most post - world war ii discussions of the cauchy problem in general relativity are based on the work of andré lichnerowicz,26 but he acknowledges his debt to darmois : in 1926 in belgium , darmois gave a course of four lectures on the equations of einsteinian gravitation in the presence of de donder . the monograph version ( darmois , 1927 ) became my bedside reading . in this book is the first rigorous analysis of the hyperbolic nature of the einstein equations , i.e. , the foundation of the relativistic theory of gravitation as a theory of wave propagation . with profound understanding , the splitting of the einstein equations relative to the cauchy problem into two sets is clearly discussed : one treats the initial conditions , and the other deals with time evolution ( lichnerowicz , 1992 , p. 104 ) . many current discussions of the non - uniqueness problem in general relativity are formulated in terms of the cauchy problem rather than the original hole argument ( see , e.g. , belot and earman , 2001 ; rickles , 2005 ; lusanna and pauri , 2006 ) .
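in modern notation ( mine , not darmois s ) , the division of the field equations just mentioned can be summarized as follows , for an initial hypersurface x^{0} = const : $$G^{0\mu} = \kappa\, T^{0\mu} \; ( \mu = 0 , \ldots , 3 ) : \ \text{four constraints on the initial data , containing no second time derivatives of } g_{\mu\nu} ;$$ $$G^{ij} = \kappa\, T^{ij} \; ( i , j = 1 , 2 , 3 ) : \ \text{six evolution equations off the initial hypersurface} .$$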
the modern revival of the hole argument came about as the result of debates about the reason for the delay of over two years between einstein s adoption of the metric tensor in 1913 and his formulation of the generally - covariant field equations for the metric at the end of 1915 ( see section 2 ) . the answer to this question hinges on the answer given to the question of why einstein formulated the hole argument and held to it during this entire period . in 1982 , pais summarized the generally - accepted view : in 1914 not only did he [ einstein ] have some wrong physical ideas about causality but in addition he did not yet understand some elementary mathematical notions about tensors ( pais , 1982 , p. 224).27 einstein still had to understand that this freedom [ to make an arbitrary coordinate transformation ] expresses the fact that the choice of coordinates is a matter of convention without physical content ( ibid . , p. 222).28 stachel ( 1979 ) presented a version of the standard account , but by the following year it had become evident that this account was incorrect . at the 1980 jena meeting of the grg society , he presented a detailed analysis of the hole argument and its refutation by the point coincidence argument ; it circulated as a preprint , but was not published until 1989 ( stachel , 1989 ) . however , torretti ( 1983 , chapter 5.6 ) gives a detailed account of the hole argument based on it;29 and norton ( 1984 ) , the first detailed analysis of einstein s 1913 zurich notebook,30 also summarizes stachel s account . stachel ( 1987 ) contains a historical - critical account of the hole argument , and stachel ( 1986 ) uses the fiber bundle formalism to generalize the argument to any geometric object field obeying generally - covariant equations . these two talks helped to stimulate renewed interest in the meaning of diffeomorphism invariance among relativists , especially those working on quantum gravity ( see , e.g. , rovelli , 1991 ) . earman and norton s presentations of the hole argument ( earman and norton , 1987 ; earman , 1989 ) provoked renewed discussion of absolute versus relational theories of space - time among philosophers of science , a discussion that continues to this day.31 section 5 shows how several initially - different positions have converged on an approach that gives precise meaning to einstein s vision of general relativity , and section 6 reviews some physical topics related to the hole argument . as we have seen already , einstein often posed a problem , the solution to which required mathematical tools that went far beyond his current knowledge . another example is the vision of the nature of general relativity that replaced his earlier faith in mach s principle ( see section 2.3 ) . as we shall see , this new vision requires the theory of fiber bundles for its appropriate mathematical formulation . when asked by a reporter to sum up the theory of relativity in a sentence , einstein said , half jokingly : before my theory , people thought that if you removed all the matter from the universe , you would be left with empty space . my theory says that if you remove all the matter , space disappears , too ! ( einstein , 1931 ).32
in 1952 , he developed the same idea at greater length : on the basis of the general theory of relativity , space as opposed to [ ... ] if we imagine the gravitational field , i.e. , the functions gik , to be removed , there does not remain a space of the type ( 1 ) [ minkowski space - time ] , but absolutely nothing , and also no topological space . there is no such thing as an empty space , i.e. , a space without field . space - time does not claim existence on its own , but only as a structural quality of the field ( einstein , 1952 , p. 155 ) . it is evident that this new approach to general relativity completely reverses his original machian vision . now the field is primary , and matter , like everything else , must be treated as an aspect of the field . einstein s comment occurs in the course of a discussion of the age - old conflict between absolute33 and relational interpretations of space , which relativity theory metamorphosed into a conflict between interpretations of space - time.34 the quotation above stresses the role of the metric tensor , but elsewhere einstein emphasizes the role of the affine connection , which he calls a displacement field : it is the essential achievement of the general theory of relativity that it freed physics from the necessity of introducing the inertial system ( or inertial systems ) [ ... ] the development of the mathematical theories essential for the setting up of general relativity had the result that at first the riemannian metric was considered the fundamental concept on which the general theory of relativity and thus the avoidance of the inertial system were based . later , however , levi - civita rightly pointed out that the element of the theory that makes it possible to avoid the inertial system is rather the infinitesimal displacement field $\Gamma_{jk}^{i}$ . the metric or the symmetric tensor field gik which defines it is only indirectly connected with the avoidance of the inertial system in so far as it determines a displacement field ( einstein , 1955 ) .
einstein s vision can be summed up in the sentence : space - time does not claim existence on its own , but only as a structural quality of the field . the two main elements of this vision are : 1 ) if there is no field , there can be no space - time manifold ; 2 ) the spatio - temporal structural qualities of the field include the affine connection , which is actually of primary significance as compared to the metric tensor field . up until quite recently , the standard formulations of general relativity did not incorporate this vision . they start by postulating a four - dimensional differentiable manifold , which is described as a space - time before any metric tensor field is defined on it ; and all other space - time structures , such as the levi - civita connection , are defined in terms of this one field.35 but the concepts of fiber bundles and sheaves enable a mathematical formulation of general relativity consistent with einstein s vision36 ( see section 4.3 ) : 1 ) if there is no total space for the fields , then there is no base manifold ; 2 ) the conceptual distinction between the roles of the metric and connection becomes evident : the metric lives on the vertical fibers of the total space ; while the connection lives on the horizontal directions of the total space , connecting the fibers with each other . clearly , einstein s vision favors a non - absolutist view of space - time.37 while no formalism can resolve a philosophical issue , the traditional approach that starts from a manifold m and defines various geometric object fields over it gives manifold substantivalists an initial advantage : opponents must explain away somehow the apparent priority of m. the modern approach starts from a principal fiber bundle p with total space e and structure group g , and defines m as the quotient e / g ; this gives non - substantivalists an initial advantage : the whole bundle ( pun intended ) , which includes some geometric object field , a connection and a manifold , is there from the start ; a manifold substantivalist must justify giving priority to m. after defining geometric and algebraic structures , a space is defined as a set of points with a geometric structure that is invariant under some group of transformations of its points .
then i discuss product and quotient spaces , fibered spaces , and theories based on these spaces , in particular permutable and generally permutable theories ( section 4.1 ) . up to this point , the discussion is entirely general ; but all definitions are still applicable , appropriately modified of course , when additional structures are introduced . in particular , the case of most physical interest is that of geometric object fields defined on a differentiable manifold ( section 4.2 ) . they provide the framework for coordinate - independent definitions of covariant and generally covariant theories , followed by a precise formulation of the original hole argument against general covariance and of the way to avoid its conclusion , discussed informally in section 2 . then i discuss fiber bundles , which consist of a total space , a base space , and a projection operator . under certain circumstances , the base space may be defined as the quotient of the total space divided by an equivalence relation defining its fibers ( section 4.3 ) . this approach allows a more precise formulation of einstein s vision of general relativity , discussed informally in section 3.2 . finally , the distinction between natural and gauge natural bundles is discussed , and between the concepts of covariance and general covariance when applied to theories defined on each type of bundle ( section 4.4 ) . a number of philosophical concepts used but not defined in this section , such as : intrinsic and extrinsic properties , internal and external relations , and quiddity and haecceity , are discussed in appendix b. consider a set s of elements x , y , z , etc . , together with a set of relations r between its elements.39 there is a major distinction between a geometry and an algebra : in a geometry , the elements of s ( hereafter called points and symbolized by p , q , etc.)40 all are of the same quiddity ( i.e. , of the same nature ) but lack haecceity ( i.e. , are not inherently individuated ) : the only distinctions between the points arise from a set of internal relations r between them . if we abstract from these relations , the set s is invariant under perm(s ) , the group of all permutations of the points of s.41 a set of relations rg defines a geometric structure or geometry g on s ; the maximal subgroup aut rg(s ) of perm(s ) that preserves all these relations between the points of s is called the symmetry or automorphism group of this geometry , and could just as well be used to define it.42 obviously , perm(s ) is the maximal possible automorphism group ; so a study of its subgroups and their relation to each other is equivalent to a study of all possible geometries on s and their relation to each other . in contrast to a geometry , in an algebra each element ( symbolized by a , b , etc . ) in addition to having the same quiddity also has an intrinsic haecceity ( individuality ) . an algebraic structure or algebra a on a set is also defined by a set of relations ra between its elements ; but these are external relations , which do not affect the intrinsic individuality of each element.43
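a finite toy example may make the distinction concrete ( the example is mine , not the source s ) : take four points whose only internal relation is a symmetric neighbour relation , the edge set of a square ; perm(s ) has 24 elements , while the automorphism subgroup preserving the relation has only 8 :

from itertools import permutations

# toy "geometry": four points with a single internal relation r, here the (unordered) edge set of a square
S = [0, 1, 2, 3]
R = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}

perm_S = list(permutations(S))        # perm(s): all 4! = 24 permutations of the points

def is_automorphism(p):
    # the permutation sends point i to p[i]; it is an automorphism iff it maps r onto itself
    return {frozenset(p[i] for i in e) for e in R} == R

aut_R = [p for p in perm_S if is_automorphism(p)]
print(len(perm_S), len(aut_R))        # 24 and 8: aut rg(s) is the dihedral group of the square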
since descartes introduced analytic geometry , it has proved convenient and often necessary to apply algebraic methods in the solution of geometrical problems . this is done by a coordinatization of the geometry ( see weyl , 1946 , for this term ) : a one - one correspondence is set up between the points of the geometry and certain elements of an appropriately chosen algebra . this coordinatization assigns to each point p of the geometry an element a of the algebra , called its coordinate and symbolized by a(p ) . but , by individuating the points of a geometry , a coordinatization negates their homogeneity , turning the geometry into an algebra . the only way to restore their homogeneity is to negate the coordinatization as follows : introduce the class of all admissible coordinatizations of the geometry44 based on the given algebra , so that each point of the geometry will have every admissible element of the algebra as its coordinate in at least one admissible coordinate system . transformations between two admissible coordinate systems are called admissible coordinate transformations ; they usually form a group that includes a subgroup isomorphic to the automorphism group of the geometry . there are two distinct ways in which the assignment of all admissible coordinates to each point of a geometry may be accomplished . active point transformations : keep the coordinate system fixed , a → a , and permute the points of the geometry : p → q , a(p ) → a(q ) . passive coordinate transformations : keep the points of the geometry fixed , p → p , and carry out an admissible coordinate transformation of the elements of the algebra : a → b , a(p ) → b(p ) . the terms active and passive refer to the effects of a transformation on the points of the geometry . a passive coordinate transformation is an active transformation of the elements of the algebra.45 two active permutation groups of any geometry have already been introduced : perm(s ) , the group of all permutations of the elements of s ; and aut rg(s ) , the subgroup of perm(s ) consisting of the permutations that belong to the automorphism group rg of a particular geometry g. relations may also be permuted . let r(x ) symbolize a relation among a set x of points of s.46 consider a permutation p of the elements of s , taking x into px . define the permuted relation pr as follows : pr(px ) holds iff r(x ) does . when such a permutation is carried out , the relation r will be said to be carried along if it is also permuted into the relation pr.47 it follows that , if r(x ) is valid , then so is pr(px ) . by virtue of the intrinsic homogeneity of its points , a geometry g remains unchanged if , for each permutation p of perm(s ) , the corresponding permutation prg of the set of relations rg defining g is also carried out . for any relation r in rg , it is clear that pr(pp ) holds if and only if r(p ) holds ; thus rg and prg describe the same geometry . i shall refer to this result as the basic or trivial identity for the group perm(s ) : it holds for any geometry based on a subgroup of perm(s ) .
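the trivial identity can be checked mechanically on a finite set ( again a sketch of my own ) :

from itertools import permutations

# the carried-along relation and the trivial identity, on a three-point set
S = [0, 1, 2]
R = {(0, 1), (1, 2)}                      # a sample relation r, as a set of ordered pairs

for p in permutations(S):                 # the permutation sends point i to p[i]
    PR = {(p[a], p[b]) for (a, b) in R}   # the carried-along relation pr
    # trivial identity: r(x, y) holds exactly when pr(p x, p y) holds
    assert all(((p[a], p[b]) in PR) == ((a, b) in R) for a in S for b in S)
print("the trivial identity holds for every permutation of s")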
an equivalence relation req on any set s is a two - place relation having the following three properties : for all x , y , z in s , it is reflexive : req(x , x ) holds ; symmetric : if req(x , y ) holds , then so does req(y , x ) ; and transitive : if req(x , y ) and req(y , z ) both hold , then so does req(x , z ) . if the context is clear , one often abbreviates req(x , y ) by x ∼ y. an equivalence relation divides s into equivalence classes sr , often also called its orbits ( see neumann et al . , 1994 , chapter v ) . the quotient set of s by req , often called the orbit space and abbreviated sq = s / req , is defined by the condition that each element of the quotient set corresponds to one and only one such equivalence class . given two sets a and b , we can form the product set a × b , consisting of all pairs of elements ( x , y ) , with x ∈ a and y ∈ b. a mapping from the domain a to the range or codomain b ( see lawvere and schanuel , 1997 , pp . 13 – 14 ) , often symbolized φ : a → b , is defined as a subset of a × b , such that for each x in the domain there is one and only one y in its range . in various contexts , mappings might also be called functions , transformations , or operators . homomorphisms , isomorphisms , homeomorphisms , diffeomorphisms , continuous or differentiable maps will be more attached to certain classes of mappings , which preserve certain structures on the sets which are their domains and ranges ( hermann , 1973 , p. 3 ) . the mapping is surjective if , for every element y ∈ b , there is at least one element x ∈ a that maps onto y. if both mappings a → b and b → a are surjective , the mapping is called bijective . if the bijective map preserves all structures on a and b , a and b are said to be isomorphic . if the mapping φ : a → b is surjective , the set b is isomorphic to the quotient set of a by the equivalence relation x ∼ y iff φ(x ) = φ(y ) ; so b can actually be defined as the quotient set . this passing to the quotient is a way of defining new spaces and mappings that is very important in all of mathematics , particularly in algebra and differential geometry . this possibility allows us to realize einstein s vision of general relativity ( see section 3.2 ) , which in this context is simply : if no a , then no b. we can define the mapping or morphism π : s → sq , which projects each element x of s into the element π(x ) of sq corresponding to the equivalence class that includes x. conversely , a section σ of s is an inverse mapping from the point π(x ) of sq to a unique point y in the equivalence class of s that maps into that point of sq : σ(π(x )) = y , with x ∼ y. so far , these concepts can be applied to any set . if s is a geometry with automorphism group g , it is referred to as a g - space ( see neumann et al . , 1994 ) . the equivalence relation is said to be g - invariant if , whenever p ∼ q holds for two points of s , then it follows that g(p ) ∼ g(q ) for all g ∈ g. consequently , the action of an element g ∈ g on s preserves the partition of s into orbits , but may permute the orbits among themselves ; so all orbits , henceforth called fibers , must be isomorphic to what is called a typical fiber . the quotient set sq = s / req is itself a g - space called the quotient space .
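a finite sketch of these constructions ( the example and the names in it are mine ) : an equivalence relation on a twelve - element set , its orbits , the projection π and one choice of section σ :

# quotient of a set by an equivalence relation, with the projection pi and a section sigma
S = list(range(12))

def equiv(x, y):                 # a sample equivalence relation: same remainder mod 4
    return x % 4 == y % 4

classes = {}                     # the equivalence classes (orbits / fibers)
for x in S:
    classes.setdefault(x % 4, []).append(x)
S_q = sorted(classes)            # the quotient set: one label per class, here 0..3

def pi(x):                       # projection pi : S -> S_q, sending x to its class
    return x % 4

def sigma(c):                    # a section sigma : S_q -> S, choosing one point on each fiber
    return classes[c][0]

assert all(pi(sigma(c)) == c for c in S_q)       # pi composed with sigma is the identity on the quotient
assert all(equiv(sigma(pi(x)), x) for x in S)    # sigma(pi(x)) lies in the same class as x
print(classes)                                   # {0: [0, 4, 8], 1: [1, 5, 9], 2: [2, 6, 10], 3: [3, 7, 11]}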
a fibered space consists of a total space e ; a base space b ; and a projection operator π : e → b that is a surjective mapping , as defined above . the fiber fb over each point b ∈ b is the set of all inverse elements π⁻¹(b ) in e ; that is , all elements q ∈ e such that π(q ) = b. a section of a fiber space is a choice of one element on each fiber fb for every b ∈ b. to convert a homogeneous set s with an equivalence relation into a fibered space , let s constitute the total space e ; then sq forms the base space b , and the mapping π becomes the projection operator . if s is a geometry with automorphism group g , then g preserves the equivalence classes ; so all the fibers are isomorphic , resulting in a fiber bundle : a fiber bundle ( e , b , π ) consists of : 1 ) a total space e , divided into fibers by an equivalence relation , all of these fibers being isomorphic to a typical fiber ; 2 ) a base space b that is isomorphic to the quotient e/∼ ; and 3 ) a projection operator π : e → b that takes each point q of its domain e into the point p of its range b that corresponds to the fiber including q. a section σ of the bundle is a mapping that takes each point p of its domain b into a unique point of its range , consisting of the set of fibers of e. the point of σ on the fiber over p is symbolized by σ(p ) . if e has the automorphism group g , the action of an element g ∈ g on the points of any section σ will result in a new section ; symbolically : g(σ ) = σ′ . so , given one section σ , the action of the elements of g produces a whole equivalence class of sections { g(σ ) } . a theory of a certain type is a procedure for producing models of that type . a particular theory of that type is a rule for selecting a subset of these models . one type of theory is defined by the choice of a fiber bundle with automorphism group g ; its models are the sections of this bundle . a particular theory is a rule for choosing a subset of sections of the bundle as models . if the rule is such that , when σ is a model , then so is the entire equivalence class of sections { g(σ ) } , the theory is permutable . it is generally permutable if this entire equivalence class is interpreted as a single model of the theory . in terms of the distinction between syntax and semantics , one may say : while each section of a theory is always syntactically distinguishable from the others , in a permutable theory they may also be semantically distinguishable . however , in a generally permutable theory they are not ; only an entire equivalence class of sections has a unique semantic interpretation . consider euclidean plane geometry , for example : all assertions about geometric figures , such as right triangles , rectangles , circles , etc . , are invariant under its automorphism group , which consists of translations and rotations ; so it is certainly a permutable theory . but these assertions actually apply to the whole equivalence class of geometric figures satisfying any of these definitions ; so it is a generally permutable theory . on the other hand , plane analytic geometry includes a choice of origin , unit of length , and a pair of rectangular axes . so all of its assertions are still permutable ; but some of them include references to the origin , axes , etc . we can distinguish , for example , between a circle of radius r centered at the origin , and one of the same radius centered at some other point . the reason , of course , is that the choice of a unique preferred coordinate system converts the euclidean plane from a geometry into an algebra .
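the same finite style of example can illustrate sections and ( general ) permutability ( again my own toy , with a two - element typical fiber over a three - point base ) :

from itertools import product

# base space b = three points; typical fiber = {0, 1}; a section assigns a fiber value to each base point
B = [0, 1, 2]
sections = [dict(zip(B, vals)) for vals in product([0, 1], repeat=len(B))]   # 2^3 = 8 sections

def flip(sigma):                         # an automorphism of the total space: swap the fiber values 0 <-> 1
    return {b: 1 - sigma[b] for b in B}
G = [lambda s: dict(s), flip]            # the group g: the identity and the flip (order 2)

def is_model(sigma):                     # a sample "theory": its models are the constant sections
    return len(set(sigma.values())) == 1

models = [s for s in sections if is_model(s)]
# the theory is permutable: the carry-along g(sigma) of any model is again a model
assert all(is_model(g(s)) for s in models for g in G)
# read as generally permutable, the whole equivalence class {sigma, flip(sigma)} is one physical model
classes = {frozenset({tuple(sorted(s.items())), tuple(sorted(flip(s).items()))}) for s in models}
print(len(models), len(classes))         # 2 syntactically distinct sections, but only 1 physical model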
For the space-time theories forming the main topic of this review, S is often a four-dimensional differentiable manifold M; and the analogue of Perm(S) is Diff(M), the diffeomorphism group consisting of all differentiable point transformations of the points x of M. Any given, fixed geometric structures defined on M, such as a metric tensor field, will be symbolized collectively by θ(x); they represent the analogue of the relations R_G. The θ-geometry of M is also defined by the invariance of these θ(x) under the action of some Lie subgroup G of Diff(M). In other words, G plays the role, analogous to that of Aut(R_G), of the automorphism group Aut(M, θ) of the θ-geometry of M. And just as in that case, here every G-space can also be defined as a quotient or orbit space: every G-space can be expressed in just one way as a disjoint union of a family of orbits (p. 51). Just as in analytic geometry, one may set up ordinary and partial differential equations for various particles and fields on M. Denote a set of such geometric object fields on M collectively by the symbol Φ(x), and consider the effect of an element g(x) ∈ G on Φ(x).50 From the definition of a geometric object (see Schouten, 1954, pp. 67-68) it follows that if x → g(x) = x′, then Φ(x) → Φ′(x′). Their transformation law under g(x) is linear and homogeneous in the components of Φ(x) and homogeneous in the derivatives of g(x). In both Galilean space-time (see Yaglom, 1979) and in special relativistic space-time (Minkowski space), G is a ten-parameter Lie group. Four of these parameters generate spatial and temporal translations of the points, making these space-time geometries homogeneous. And in both, the six remaining parameters act at each point of space-time: three generate spatial rotations and three generate boosts; but the two geometries differ because their boosts differ: for Galilean space-time, they are Galilei transformations that preserve the invariance of the absolute time. For Minkowski space-time, they are Lorentz transformations that combine spatial and temporal intervals into an invariant, truly four-dimensional space-time interval. Both of these groups are subgroups of SL(4, R), the group of four-volume-preserving transformations.51 And both theories have a homogeneous, flat affine connection in common that is the mathematical expression of Newton's first law of inertia. Its invariance group is AL(4, R), which is a subgroup of SL(4, R). Newtonian gravitational theory, in the form which incorporates the equivalence principle, preserves the global space-time structure of Galilean space-time, but abandons the homogeneous flatness of the affine connection in favor of a non-flat affine connection that is the mathematical expression of the dynamical inertia-gravitational field. This field is non-homogeneous, but its compatibility with the space-time structure requires that locally it remain invariant under AL(4, R), which means that its global automorphism group must be SDiff(M), the group of unimodular diffeomorphisms. General relativity similarly abandons the homogeneous flatness of the affine connection in favor of a non-flat affine connection that is the mathematical expression of the dynamical inertia-gravitational field.
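The difference between the two kinds of boost can be made concrete in the standard one-dimensional textbook form (included here only as an illustration): a Galilei boost with velocity v acts as
\[ x' = x - v t, \qquad t' = t, \]
leaving the absolute time invariant, while the corresponding Lorentz boost acts as
\[ x' = \gamma\,(x - v t), \qquad t' = \gamma \left( t - \frac{v x}{c^{2}} \right), \qquad \gamma = \left( 1 - \frac{v^{2}}{c^{2}} \right)^{-1/2}, \]
leaving invariant the four-dimensional interval $c^{2} t^{2} - x^{2}$ rather than t itself.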
But, in order to preserve the compatibility of this non-flat connection with the special-relativistic chrono-geometry expressed by the metric tensor, the latter must also become a dynamical field. It preserves the local space-time structure of the special theory at each point. But globally both dynamical fields must have automorphism groups consisting of diffeomorphisms of M, the space-time manifold, now itself no longer globally fixed. Traditionally, Diff(M), the full diffeomorphism group, has been assumed to be the correct automorphism group for general-relativistic theories. However, there are good arguments for restricting this group to SDiff(M), the group of unimodular diffeomorphisms with determinant one. But first some definitions are needed (see, e.g., Wikipedia: Group action). The action of G is said to be effective if its identity element is the only one that takes each point into itself: that is, if g ∈ G, x ∈ M and x → g(x) is such that g(x) = x for all x, then g = e, the identity element of G. The action is transitive if it connects any two points of M: that is, for any two points x, y ∈ M, there is always a g ∈ G for which g(x) = y. The stabilizer group H_x at a point x of M is the subgroup of transformations of G that leave the point x invariant: that is, g ∈ H_x if and only if g(x) = x.52 Since G is a Lie group, H_x is a closed subgroup of G at each point, and these stabilizer groups are conjugate subgroups of G. Indeed, M is isomorphic to G/H_x; so one may actually define a geometry by the pair (G, H), where H is a closed subgroup of G. The action of G is free or semiregular if its stabilizer group is the identity: that is, if gx = x for some point x, then g = e, the identity element of G; equivalently, if gx = hx for some x, then g = h. For example, the translation groups discussed above act freely on Galilean and special relativistic space-times. Now we are ready to return to the question of automorphism groups for general relativistic theories. The action of the stabilizer of Diff(M) on the tangent space at each point x of M is L_x = GL(n), the group of all linear transformations at x. But consider the objects defining the geometry of a general-relativistic space-time with n = 4: again, if one wants to preserve the four-volumes of space-time, which are needed to formulate meaningful physical averages, one must restrict these transformations to SL(4), the group of special linear transformations with unit determinant. The linear affine connection at x, which represents the inertio-gravitational field, is only invariant under the subgroup ASL(4), the group of affine transformations with unit determinant. And the invariance group of the metric tensor, which represents the chrono-geometry, is even further restricted to the pseudo-orthogonal subgroup SO(3,1) of SL(4). In short, globally, physical considerations suggest the need to start from SDiff(M) as the automorphism group of general-relativistic theories. So physically, Diff(M) overshoots the mark by allowing non-unimodular transformations, i.e., transformations with any value of their determinant at a point. Geometrically, they correspond to similarity transformations, which preserve the shape but not the size of four-volumes in space-time.
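A constant dilation provides the simplest illustration of such a non-unimodular transformation (an example added for concreteness, with λ introduced only here):
\[ x^{\mu} \to x'^{\mu} = \lambda\, x^{\mu} \ (\lambda \neq 1), \qquad d^{4}x' = \lambda^{4}\, d^{4}x, \]
so the shape of a four-volume element is preserved while its size is rescaled by λ^4; a unimodular transformation, by contrast, has unit Jacobian determinant and leaves d^4x unchanged.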
Usually, one compensates for this unwanted change of size by introducing tensor densities: when appropriate weights are introduced for various tensors, these densities can undo the effects of the size changes produced by non-unimodular transformations. The action of the stabilizer of SDiff(M) on the tangent space at each point of M is SL_x, the maximal symmetry group that preserves the size of four-volumes, thus avoiding the need to introduce densities, among its many other advantages (see Stachel, 2011; Bradonjić and Stachel, 2012). For much of the following discussion, however, the distinction between Diff(M) and SDiff(M) is inessential, so I shall continue to discuss Diff(M), and only point out the distinction at some places where it is really important. By definition, Aut R_G(S), the group of permutations of the points of S defining the geometry G, leaves the relations R_G (which equally well define the geometry of S) unchanged; so the R_G do not need to be permuted when the points of S are. Whichever Lie subgroup G of Diff(M) is chosen as the automorphism group defining the geometry of a differentiable manifold M, similar comments hold for it. As we shall see, the important difference for the hole argument is that between geometries based on finite-parameter Lie groups and those based on Lie groups depending on one or more functions (functional Lie groups). Since it is no more than a relabeling of its points, any admissible passive coordinate transformation has no effect on a geometry (see Section 4.1). However, if one restricts the group of coordinate transformations to the subgroup corresponding to elements of the automorphism group of the geometry, then there is an isomorphism between this subgroup of passive coordinate transformations and the group of active point transformations defining the geometry. Hence, it is possible to reformulate any statement about the geometry in terms of relations between the coordinate components of the geometric object fields that are invariant under this subgroup of restricted coordinate transformations. In the past, this is how coordinate-dependent techniques were used to arrive at geometric results; and many contemporary treatments still utilize this technique. If one permutes the points of M by a diffeomorphism, carries along the fields defining its geometry and the fields defining the theory, and also carries out the corresponding coordinate transformations, then clearly the new fields at the new points will have the same coordinate components in the new coordinate system as the old fields at the old points in the old coordinate system. This observation is another, coordinate-dependent variant of the basic or trivial identity. It holds for any fields, quite independently of any theory, or any field equations that these fields may obey. Geometrically, a coordinate system corresponds to the choice of a holonomic basis e_i at each point of M: that is, there is a local coordinate system x^i such that e_i = ∂/∂x^i. But the essential element geometrically is the choice of a basis, not its holonomicity.
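Before passing to the basis formulation developed next, the coordinate-dependent identity just described can be written out for the simplest case, a vector field (notation introduced only for this illustration): under a diffeomorphism x → x' = d(x), accompanied by the corresponding coordinate transformation, the carried-along field has components
\[ V'^{\mu}(x') = \frac{\partial x'^{\mu}}{\partial x^{\nu}}\, V^{\nu}(x), \]
so the new components of the new field, evaluated at the new point in the new coordinate system, are numerically equal to the old components of the old field at the old point in the old coordinate system.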
So, introduce an ordered set of basis vectors e_i(x) (i = 1, 2, ..., n), holonomic or not, at each point x of M, together with the associated dual basis of covectors or one-forms e^j(x), such that $\langle e_i, e^j \rangle = \delta_i^j$.53 Associated with the geometric object fields θ and Φ on M are their components with respect to such a pair of bases, which will be symbolized by θ[e(x)] and Φ[e(x)]: this is a set of coordinate-independent scalars that result from saturating all the free covariant and contravariant indices of θ and Φ with the e_i and e^j respectively. Of course, under a change of basis e(x) → e′(x) these scalars transform appropriately. A diffeomorphism d: x → x′ induces such a change of basis, e(x) → e_d(x′), and corresponding changes in the geometric object fields, θ(x) → θ_d(x′) and Φ(x) → Φ_d(x′). However, the values of these scalars remain unchanged if one carries out the associated push forwards and pull backs of θ and Φ, as well as of the basis vectors and covectors. That is, if we take the new basis vectors e_d(x′) at the new point, then the new components with respect to the new basis vectors at the new points will equal the old components with respect to the old basis vectors at the old points: θ[e(x)] = θ_d[e_d(x′)] and Φ[e(x)] = Φ_d[e_d(x′)]. This observation is a coordinate-independent formulation of the basic identity. Since any model of a physical theory can only fix the values of such coordinate-independent scalars with respect to some basis for all geometric objects in that model, this identity cannot fail to hold for any theory based on the θ-geometry of M. Suppose we perform the push forwards and pull backs on the geometric object fields Φ, but not on the θ-geometry or the basis vectors and covectors. That is, let e(x) → e(x) and θ[e(x)] → θ[e(x)], but Φ[e(x)] → Φ_d[e(x)]. In general, Φ_d[e(x)] ≠ Φ[e(x)], so this results in a set of scalars that is distinct from Φ[e(x)] at each point x of M. A theory is covariant under the θ-geometry's automorphism group if, whenever Φ[e(x)] is a model of the theory, then so is Φ_d[e(x)]. Covariance clearly defines an equivalence relation between models of the theory; so covariance divides all models of a theory into equivalence classes.54 A covariant theory is generally covariant under the θ-geometry's automorphism group if an entire equivalence class of its mathematically distinct models corresponds to a single physical model of the theory. An ordered set of basis vectors e_i(x) at a point of M is called a linear frame, and the set of all such linear frames at a point of M constitutes one fiber of the bundle of linear frames over M. As Kobayashi explains, the bundle concept can be used to formulate any geometry on M as a G-structure:56 let M be a differentiable manifold of dimension n and L(M) the bundle of linear frames over M. Then L(M) is the principal fibre bundle over M with group GL(n; R). Let G be a Lie subgroup of GL(n; R). By a G-structure on M we shall mean a differentiable subbundle P of L(M) with structure group G (Kobayashi, 1972, p. 1).
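A standard instance of a G-structure, mentioned here only as an illustration of Kobayashi's definition: a pseudo-metric of Lorentz signature on a four-dimensional M singles out the subbundle of orthonormal frames,
\[ P = \{ e_i \in L(M) : g(e_i, e_j) = \eta_{ij} \}, \]
a differentiable subbundle of L(M) with structure group O(3,1); conversely, the choice of such a subbundle determines the metric.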
Such a fiber bundle formulation of geometries has several crucial advantages: it makes evident the fundamental distinction between vertical geometrical quantities, such as metric tensors, that live on the fibers of the bundle, and horizontal geometrical objects, such as linear affine connections, that serve to connect these fibers. This is the case whether the metric and/or connection are fixed and given components of θ, or are components of Φ, themselves subject to dynamical field equations. It enables us to go from global to local formulations of background-independent theories, such as general relativity, in which the global topology of the base manifold M cannot be specified a priori, because it differs for different solutions to the field equations.57 The concept of fibered spaces for a set, discussed in Section 4.1, can now be applied to differentiable manifolds (see Section 4.2). After a fibered manifold is defined, the important cases of principal bundles, vector bundles, natural bundles and gauge-natural bundles and their physical applications are discussed, stressing the importance for general relativity of quotient bundles and local considerations. A fibered manifold (E, M, π) consists of a total manifold E, a base manifold M, and a projection operator π: E → M. E is a differentiable manifold, the points of which, symbolized by u, v, etc., are grouped into fibers by an equivalence relation between its points. M is also a differentiable manifold, the points of which are symbolized by x, y, etc. Note that, if the relation is given initially, sometimes the base manifold M may be defined as the quotient of the total manifold E by the relation: M = E/~; in other words, as the orbit space of G (see Sections 4.1 and 4.2). But the situation is generally somewhat more complicated: usually, when symmetries and invariance groups are considered, a problem reduces to the corresponding orbit space, and therefore the structure of these spaces has to be investigated. This structure theory is quite complicated in general, since these spaces usually are singular spaces and not again manifolds. In fact, only if the action of the Lie group is free (i.e., all isotropy subgroups of single points are trivial), the resulting orbit space bears a manifold structure and forms together with the manifold and the quotient map a principal fiber bundle, whose structure is well known. More often, the orbit space admits a stratification into smooth manifolds with an open and dense largest stratum, the set of principal orbits. This stratified space can then be treated almost like a manifold when taking special care. The existence of such a stratification is usually shown by proving the existence of slices at every point for the group action (Schichl, 1997, p. 1).
I shall assume that, as in general relativity, in any theory considered the quotient space is either a manifold or a stratified manifold, and that any local solution to its field equations can be extended to a global solution.58 A fiber bundle is a fibered manifold in which all its fibers are isomorphic to a typical fiber F, itself a manifold; that is, for all x, F_x ≅ F. Suppose F is q-dimensional and M is p-dimensional. One can always introduce a local trivialization of the bundle: let X be an open subspace of M. Locally, the total space E is a product space (F × X), and one can introduce p variables (x^1, x^2, ..., x^p) as local coordinates of a point x of X, and q variables (u^1, u^2, ..., u^q) as coordinates of a point u of F. So (F × X) is coordinatized by the (p+q) coordinates (x, u) of a point u_x of E lying on the fiber F_x over the point x. Let G be a Lie group of diffeomorphisms that acts on E.59 The action of an element g ∈ G on the point (x, u) of E is symbolized by (x, u) → g(x, u) = (x′, u′) = [χ(x, u), ψ(x, u)]. Two subgroups of G are especially important: the base transformations (diffeomorphisms of X) that do not affect the fibers, x → x′ = χ(x), u′ = u; and the pure fiber or pure gauge transformations on the fiber at each point, x′ = x, u′ = ψ(x, u). Both of these are included in a third subgroup, the fiber-preserving transformations: (x, u) → (x′, u′) = [χ(x), ψ(x, u)]. If G is a connected Lie group, all of its actions can be constructed from iterations of the action of its Lie algebra, composed of its infinitesimal generators: the vector fields v on E, each of which generates a one-parameter group of point transformations, or flow, on E. Locally v may be written in terms of the coordinates (x, u):
(3) $v = \sum_i \xi^i(x, u)\,\partial/\partial x^i + \sum_\alpha \varphi^\alpha(x, u)\,\partial/\partial u^\alpha, \qquad i = 1, \ldots, p; \ \alpha = 1, \ldots, q.$
The generator v is called: horizontal if φ^α = 0, i.e., it generates only base transformations; vertical if ξ^i = 0, i.e., if it generates only pure fiber or pure gauge transformations. The flow generated by v will be fiber preserving if and only if ξ^i = ξ^i(x). A fiber-preserving diffeomorphism projects naturally into a unique diffeomorphism of the base manifold M; but generally the converse does not hold.
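On a bundle with a single base coordinate x and a single fiber coordinate u, the three cases can be illustrated as follows (an example added for clarity, not taken from the sources cited):
\[ v_1 = \partial/\partial x, \qquad v_2 = u\,\partial/\partial u, \qquad v_3 = \xi(x)\,\partial/\partial x + \varphi(x, u)\,\partial/\partial u. \]
Here v_1 is horizontal and generates the base translation x → x + ε; v_2 is vertical and generates the pure fiber (gauge) transformation u → e^ε u; and v_3 generates a fiber-preserving flow, since its base component ξ depends only on x.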
If the converse does hold, i.e., if a base diffeomorphism of M lifts uniquely to a fiber-preserving diffeomorphism of E, then the bundle is a natural bundle. A geometric object defined on such a bundle is called a natural object.60 This is the fiber bundle version of the definition of geometric objects in Section 4.2. If the typical fiber F is isomorphic to the structure group G, F ≅ G, then the bundle is a principal fiber bundle P with structure group G: P = (E, M, π; G). Corresponding to any P with structure group G, there is a class of associated vector bundles. In such an associated bundle, each fiber forms a vector representation of G. This vector representation need not be irreducible, so the class of associated vector bundles includes all tensor fields. The use of fibered manifolds allows a precise formulation of the concepts of covariance and general covariance for any physical theory; and of the hole argument for background-independent theories, and even, with appropriate modifications, for some partially-background-dependent theories. Every natural physical theory can be formulated in terms of some natural geometric object(s)61 that live on an appropriate fibered differentiable manifold,62 the nature of which depends on these geometric object(s). If the theory is defined on a differentiable manifold M that is the quotient of the fibered manifold E divided by the equivalence relation defining the fibration, M = E/~, then there is an operator π, projecting each fiber onto the corresponding point of M: π: E → M. Since the fibered manifold represents a natural object, there is a one-one correspondence between fiber-preserving diffeomorphisms of E and diffeomorphisms of M. A number of the most important gauge natural theories cannot be so formulated, but require the broader concept of gauge natural bundles for their precise formulation. Indeed, every classical physical theory can be reformulated as the jet prolongation of some gauge natural bundle by adjoining the derivatives of the geometric object fields to the original bundle.63 A theory based on such fixed θ-fields on M is called background-dependent, with Aut(M, θ) as its symmetry group.64 Any geometric object fields Φ(x) can then be introduced on this fixed-background space-time together with a set of field equations governing their dynamics, which generally involve some or all of the θ(x). In many theories, the fixed geometric object fields on M consist of a vertical chrono-geometric metric tensor on each fiber and the corresponding horizontal inertio-gravitational linear connection. Any non-gravitational theory can be formulated on a fiber bundle associated with the principal bundle determined by the metric and connection: the Φ(x) break up into two subclasses: the fields of massive objects (such as charged bodies) are represented by geometric quantities living on the vertical fibers; and the gauge fields transmitting the forces between these objects (such as the electromagnetic field) are represented by vertical connections along the fibers; these connections are only fixed up to some group of gauge transformations. In the case of general relativity and other background-independent theories (such as the coupled Einstein-Maxwell equations), there are no fixed background space-time structures on M. Diff(M) is chosen as Aut(M) in the usual formulations; but, as suggested in Section 4.2, SDiff(M), the unimodular subgroup, may be chosen.
In that case, the space-time structures subdivide further: the pseudo-metric splits into a conformal metric with determinant 1 and a scalar field, both of which live on the vertical fibers; while the linear affine connection splits into a trace-free projective connection and a one-form, both of which serve to connect the fibers. To define the gauge symmetries of a certain type of theory, one must consider the sections of the corresponding fiber bundle. A local section σ: X → E, with X an open subspace of M, or a global cross section σ: M → E, is a map taking each point p of X or M, respectively, into a unique point of the fiber F_p over p.65 For each type of physical theory, a section represents a particular configuration of the corresponding physical field. However, in theories of the gauge-field type, this representation is not unique. There is a group of gauge transformations, each element of which maps one mathematical representative of some field configuration into another representative of the same configuration. A gauge symmetry is an equivalence relation on the set of sections: two sections σ and σ′ are gauge equivalent if there is a gauge transformation taking one into the other. This equivalence relation divides the set of all sections into equivalence classes, the gauge orbits; each section belongs to one and only one such orbit. If the gauge group of some type of theory consists entirely of fiber-preserving transformations, then the theory can be formulated on a natural bundle. But if its gauge group includes non-fiber-preserving transformations, then a gauge-natural bundle is needed to formulate this type of theory correctly. The field equations of a particular gauge field theory serve to pick out a class of preferred sections consisting of the solutions to these equations. For a gauge theory, these equations must be of such a form that, if one section is a solution, then so are all members of the entire gauge orbit of that section. In other words, the gauge transformations must form a symmetry group of the field equations. This group is the automorphism group Aut(P) of the principal gauge-natural bundle P corresponding to the theory (see, e.g., Fatibene and Francaviglia, 2003, p. 223). Fatibene and his collaborators explain the distinction between the two types of theory well: the main technical difference between natural and gauge natural theories is that [base] diffeomorphisms are completely replaced by gauge transformations. In gauge natural theories spacetime diffeomorphisms do not act at all on fields, since the only action one can define in general is that of gauge transformations. This is due to the fact that although pure gauge transformations are canonically embedded into the group of generalized gauge transformations, there is no canonical horizontal complement to be identified with Diff(M). Horizontal symmetries, in fact, are generally associated to physically relevant conservation laws, such as energy, momentum and angular momentum. The definition of such quantities is almost trivial in natural theories; on the contrary, in gauge natural theories pure gauge transformations are easily associated to gauge charges (e.g., the electric charge in electromagnetism), while the absence of horizontal gauge transformations is a problem to be solved to appropriately define energy, momentum and angular momentum.
For this reason, in gauge natural theories the dynamical connection plays an extra role in determining horizontal infinitesimal symmetries as the gauge generators which are horizontal with respect to the principal connection (Fatibene et al., 2001, pp. 3-4). While bundle formulations of the hole argument originally dealt only with natural bundles, Lyre (1999) develops a generalized version that can be applied to gauge-natural bundles: the generalized hole argument is motivated and extended from the spacetime hole argument. [It] rules out fiber bundle substantivalism and, thus, a relationalistic interpretation of the geometry of fiber bundles is favored (Lyre, 1999, p. 1). Healey (2001) also argues that fiber bundle substantivalism is subject to an analogue of the hole argument. To recapitulate: the choice of a bundle (E, M, π) selects a certain type of physical theory, but it does not pick out a particular theory of that type, nor does it introduce any space-time structures on E or M. The points of M form a geometry (see Section 4.1). As points of the space-time manifold, they have quiddity but they lack haecceity: a priori there is nothing to distinguish one such point from the others. Their automorphism group is the diffeomorphism group of M or some appropriate subgroup, such as the unimodular group (see Stachel, 2011; Bradonjić and Stachel, 2012). For example, a metric-free formulation of electromagnetic theory can be based on a bundle of one-forms. A particular theory is a rule for choosing a preferred class of cross sections of the fiber bundle. This rule generally includes specification of some space-time structures on M. For example, in addition to a bundle of one-forms, source-free Maxwell electromagnetic theory requires the specification of a conformal structure on M.
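Maxwell theory also provides the standard illustration of the gauge orbits discussed above (recalled here for convenience): two one-form sections A and A′ represent the same physical configuration whenever
\[ A'_{\mu} = A_{\mu} + \partial_{\mu}\lambda \]
for some function λ; the field strength $F_{\mu\nu} = \partial_{\mu}A_{\nu} - \partial_{\nu}A_{\mu}$ is the same for every member of such a gauge orbit, so if one member satisfies the source-free field equations, all of them do.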
In general relativity, an equivalence class of diffeomorphically-equivalent pseudo-metrics on a four-dimensional manifold,66 often referred to as a four-geometry, is regarded as corresponding to a single inertia-gravitational field. While the fiber space consisting of all four-metrics over a given manifold forms a manifold,67 the space of all four-geometries does not form a simple manifold, but a stratified manifold. That is, it is partitioned into slices, each of which is itself a manifold, consisting of all four-geometries having the same symmetry or isometry group. The largest slice is the manifold of generic geometries having no nontrivial symmetries; it contains the vast majority of geometries. Thence one descends slice by slice down to the slice consisting of all four-geometries having the maximal symmetry group (see Stachel, 2009 and Section 6.1). The rule specifying the choice of a preferred class of space-time structures may or may not include some restriction on Diff(M), the maximal possible automorphism group of M. Obviously, diffeomorphisms always remain unrestricted in the sense of the trivial identity. A true restriction on the theory arises with the imposition of a finite-parameter Lie group as the symmetry group of the class of space-time structures picked out by the rule.68 If there are no such restrictions, the theory is background independent. If the rule includes a Lie group involving some functions as well as parameters, the theory is partially background dependent. If the Lie group is maximal (ten parameters in four dimensions), then the theory is totally background dependent. If the rule restricts the preferred class maximally, i.e., to the identity, then the theory specifies an individuating field on the space-time, turning it into an algebra. In this formulation, the symmetry group is included in the rule defining a physical theory, rather than being imposed a priori on the space-time structures defined on M. This change enlarges the class of physical solutions: for example, not fixing the global topology of M allows several possibilities for the global topology associated with a given local metric. But this does not alter the fact that the symmetry group of the space-time structures must be preserved by all such solutions. To what extent the hole argument applies to a non-background-independent theory depends on the degree of background dependence that has been imposed (see Stachel, 2009); but if a theory is background independent, the hole argument certainly applies. On the basis of the analysis developed in the previous sections,69 I shall re-examine some of the issues currently being discussed in the philosophy of science. Rather than attempting to cover the vast literature on this subject, the discussion is limited to a few representative samples of what I consider to be the most important trends, and will show that they are converging towards variations around a common denominator. Since Earman and Norton (1987) (see Section 3), philosophical discussion of the hole argument has centered largely around the issue of space-time absolutism, now often called substantivalism,70 versus the opposing viewpoint, usually denominated relationalism or relationism.
Einstein summarized an earlier version, the age-old controversy over the nature of space: two concepts of space may be contrasted as follows: (a) space as positional quality of the world of material objects; (b) space as container of all material objects. In case (b), a material object can only be conceived as existing in space; space then appears as a reality which in a certain sense is superior to the material world (foreword to Jammer, 1954). Relativity theory metamorphosed the object of controversy from space to space-time, and Einstein made his own viewpoint quite clear: on the basis of the general theory of relativity space as opposed to what fills space has no separate existence. If we imagine the gravitational field to be removed, there does not remain a space of the type [of the Minkowski space of SR], but absolutely nothing, not even a topological space [i.e., a manifold]. There is no such thing as an empty space, i.e., a space without field. Space-time does not claim existence on its own, but only as a structural quality of the field (Einstein, Relativity and the Problem of Space, in Einstein, 1952). Here are a couple of recent statements on the nature of the controversy: substantivalists understand the existence of spacetime in terms of the existence of its pointlike parts, and gloss spatiotemporal relations between material events in terms of the spatiotemporal relations between points at which they occur. Relationists will deny that spacetime points enjoy this robust sort of existence, and will accept spatiotemporal relations between events as primitive (Belot and Earman, 2001, p. 227). A modern-day substantivalist thinks that spacetime is a kind of thing which can, in consistency with the laws of nature, exist independently of material things (ordinary matter, light, and so on) and which is properly described as having its own properties, over and above the properties of any material things that may occupy parts of it (Hoefer, 1996). What is space? What is time? Do they exist independently of the things and processes in them? Or is their existence parasitic on these things and processes? Are they like a canvas onto which an artist paints; they exist whether or not the artist paints on them? Or are they akin to parenthood; there is no parenthood until there are parents and children? That is, is there no space and time until there are things with spatial properties and processes with temporal durations? The hole argument arose when these questions were asked in the context of modern spacetime physics. In that context, space and time are fused into a single entity, spacetime, and we inquire into its status. One view is that spacetime is a substance, a thing that exists independently of the processes occurring within spacetime. This is spacetime substantivalism (Norton, 2011). In the light of the hole argument, I find it more fruitful to frame discussion in terms of two other distinctions, leading to a point of view about space-time distinct from either substantivalism or relationalism as traditionally defined. These are the distinctions between: internal and external relations, and between quiddity and haecceity. These concepts are discussed in Appendix B and briefly reviewed in Section 5.5. When applied to mathematical structures, they lead to succinct discussions of algebraic and geometric structures and the nature of coordinatization in Section 5.6, which establishes a correspondence between the two (for a fuller discussion, see Section 4.1). These concepts lead to a viewpoint on the nature of space-time that has been given various names, such as structural spacetime realism71 and sophisticated substantivalism (see Pooley, 2000, summarized in Section 5.3). I have called it dynamic structural realism (see Stachel, 2006a), which has several advantages. It avoids use of the words substantivalism and relationalism, fraught with so many unwanted implications; it places emphasis on diachronic aspects of structure; and its application is not confined to theories of space-time structure (see Stachel, 2005). The fiber bundle approach, motivated in Section 3 and treated in more detail in Section 4.3, allows a rigorous formulation of this viewpoint in Section 5.4. But first, I shall give a brief account of the controversy between relationalists and substantivalists provoked by the hole argument and how it has led a number of participants from each camp to adopt this new viewpoint. Rather than attempting a (necessarily superficial) review of the vast philosophical literature on the controversy, I shall focus on an account of the views of one important relationalist and one important substantivalist.
Earman (1989) is a standard reference, so I shall employ its terminology and notation in discussing his views. Discussing a modified form of absolutism, he states that the only plausible candidate for the role of supporting the nonrelational structures [of a physical theory] is the space-time manifold M (ibid.). Calling manifold substantivalism the view that M is a basic object of predication, he sets out to show that this view lays itself open to Leibniz's argument (p. 126). In his formulation of the problem, Earman uses the standard pre-bundle approach to theories (see Sections 3.2 and 4.3): a model of a theory consists of the manifold M, together with [geometric] object fields on M, which he denotes by A_i and P_j, characterizing respectively the space-time structure and the physical contents of space-time. Symbolically, a model is ⟨M, A_i, P_j⟩, with i and j each running through a finite sequence of integers. A manifold diffeomorphism d: M → M then results in a different model ⟨M, d*A_i, d*P_j⟩, where d*A_i, d*P_j denote the pull backs or the push forwards of A_i, P_j. It is important to note that Earman (1989) defines general covariance in a way that is equivalent to my definition of covariance (see Section 4.4): let us say that the laws of [a theory] T are generally covariant just in case whenever [⟨M, A_i, P_j⟩ is a model of the theory] then also [⟨M, d*A_i, d*P_j⟩ is a model] for any manifold diffeomorphism (pp. 175-208). Earman applies this concept of model to the formulation of general relativity in terms of the metric tensor and its first and second derivatives. Thus his A_i is restricted to the metric field g_ik, while the P_j correspond to components of the stress-energy tensor T_ik. He presents a version of Einstein's hole argument, involving a diffeomorphism d, such that d = id outside the hole H but d ≠ id inside H, and such that the two pieces join smoothly on the boundary. The upshot is that we have produced two solutions, ⟨M, g, T⟩ and ⟨M, d*g, T⟩, which have identical T fields but different g fields: an apparent violation of the Kausalgesetz that the T field determines the g field (ibid.). He then presents a version of Hilbert's Cauchy problem argument (see Section 2.7). Assuming the existence of a Cauchy surface, parameterized by t = 0, he considers a diffeomorphism d such that d = id for all t ≤ 0 and d ≠ id for t > 0, and such that there is a smooth join at t = 0 (ibid.). One can then construct two solutions, ⟨M, g, T⟩ and ⟨M, d*g, T⟩, that do not differ for t ≤ 0, sharing the same initial data to any finite order of differentiability on t = 0. [This provides] a seeming violation of the weakest form of Laplacian determinism; indeed, any nontrivial form of determinism suffers equally (ibid.).
The discussion of the range of applicability of the hole argument in Earman (1989) differs significantly from that in Earman and Norton (1987), which maintained that the hole argument applied to every classical spacetime field theory [that] can be formulated as a local space-time theory. They included all special-relativistic field theories, which they maintained were made local by adjoining the Riemann tensor R_abcd(g) to the set of geometric objects included in any model, and adding the equation R_abcd = 0 to the set of field equations defining acceptable special-relativistic models. I must linger a bit longer on Earman and Norton (1987), because their discussion of general covariance has led to much confusion. They divide the geometric object fields O_1, ..., O_n into two classes: O_1, ..., O_{k-1} and O_k, ..., O_n. The second, dynamical class is assumed to obey field equations
(4) O_k = 0, O_{k+1} = 0, ..., O_n = 0,
while the first class may include non-dynamical, fixed background fields.72 When applied to special relativistic field theories, the Minkowski metric and the associated flat affine connection are included among these non-dynamical fields. In order to demonstrate that the hole argument applies to all such theories (and in contrast to the definition of general covariance in Earman (1989), cited above), Earman and Norton (1987) prove a gauge theorem (general covariance): if ⟨M, O_1, ..., O_n⟩ is a model of a local spacetime theory and h is a diffeomorphism from M to M, then the carried along n-tuple ⟨M, h*O_1, ..., h*O_n⟩ is also a model of the theory (ibid., p. 520). Here the O_1, ..., O_n fields of both classes are subject to arbitrary diffeomorphisms, which need not be symmetries of the non-dynamical fields. This gauge theorem is essentially the trivial identity (see Sections 2.5, 4.2 and 5.7). Indeed, we can reformulate the trivial identity in their notation: if ⟨M, O_1, ..., O_n⟩ is a model of a local spacetime theory and h is a diffeomorphism from M to M, taking a point p ∈ M into a point p′ ∈ M, p → p′, then the carried along n-tuple is ⟨M, h*O_1, ..., h*O_n⟩. If we now carry out a coordinate transformation x → x′ such that x′(p′) = x(p), i.e., the new coordinates of the new point equal the old coordinates of the old point, then [h*O_i]′ = [O_i], i.e., the new components of the new geometric objects are numerically equal to the old components of the old objects, and clearly nothing has changed. So it is not clear why the authors feel any need to establish that the vanishing of the field equations O_k = 0, O_{k+1} = 0, ..., O_n = 0 is preserved under a diffeomorphism (ibid., p. 520).
While Earman (1989) avoids this confusion by silently renouncing this position, much of the later literature on the hole argument still falls into this error. Earman now argues that one cannot simply take a special relativistic theory of motion and rewrite the equations using covariant derivatives with respect to an undetermined Lorentz metric g, then write the field equation for g, namely $R^i_{jkl}(g) = 0$, where $R^i_{jkl}$ is the Riemann curvature tensor (Earman, 1989, p. 183), and then apply the hole argument to show that the theory is non-deterministic. He introduces a distinction between absolute and dynamical objects: [t]his distinction corresponds to the distinction between the [geometric] object fields (A_i) that characterize the structure of space-time and those (P_j) that characterize the physical contents of space-time (ibid.). He then requires that, for any two dynamically possible models of the theory, ⟨M, A_i, P_j⟩ and ⟨M′, A′_i, P′_j⟩, there is a diffeomorphism d: M → M′ such that d*A_i = A′_i for all i (ibid., p. 184). Letting M′ = M, one sees that the condition d*A_i = A_i singles out those diffeomorphisms d of the manifold that are symmetries of the A_i fields.73 Without going into further detail (see ibid., p. 184), Earman essentially argues that the hole argument does not apply if the symmetry group makes the absolute space-time structures sufficiently rigid. It is also clear (although Earman does not make the point) that the trivial identity (Earman and Norton's gauge theorem) is of no help in an attempt to apply the hole argument to such cases.
Earman (2004) continues the line of reasoning in Earman (1989), but with some further evolution: it emphasizes from the start the difference between finite-parameter Lie symmetry groups, covered by Noether's first theorem, and symmetry groups that are function groups, covered by Noether's second theorem. Earman (2006) starts off in a way reminiscent of Earman and Norton (1987): it will be assumed that the spacetime theories to be discussed have been formulated in such a way that (a) their models have the form (M, O_1, O_2, ..., O_n), where M is a differentiable manifold and the O_n are geometric object fields that live on M, and (b) their laws of motion / field equations have the form F(Φ_1, Φ_2, ..., Φ_k) = 0, where F is some functional and the Φ_k are geometric object fields constructed from the O_n (ibid., p. 446). The difference is that now he separates the field equations F = 0 from the quantities defining the model, and relaxes the demand that they be tensorial equations. He introduces the concept of gauge symmetry as a transformation in which: the physical situation is not being changed; rather different but equivalent descriptions of one and the same physical situation are being generated. He then defines substantive general covariance (SGC): the equations of motion / field equations of the theory display diffeomorphism invariance; that is, if (M, O_1, O_2, ..., O_n) is a solution, then so is (M, d*O_1, d*O_2, ..., d*O_n) for any d ∈ Diff(M). In other words, the two solutions are mathematically distinct descriptions of the same physical solution. Formal general covariance is what Bergmann (1957) calls weak covariance, or trivial general covariance; and what Stachel and Iftime (2005) call covariance tout court. Substantive general covariance now corresponds to the Stachel and Iftime (2005) definition of general covariance; but what's in a name?75 The two positions are now substantively the same. The remaining difference is mathematical: rather than using fiber bundles, Earman still works with fields on a manifold, so his formalism is still vulnerable to the substantivalists' attack. As he noted in another context, formalism generated the problem and formalism is needed to resolve it (Earman, 1989, p. 184). Or perhaps it would be better to say: if you adopt a certain philosophical stance, you should adopt the formalism best suited to it. Pooley describes sophisticated substantivalism succinctly as a combination of anti-haecceitism and realism about spacetime points (Pooley, 2006, p. 103).
a frequent response [ to the argument from leibniz equivalence ] is that one can regard all isomorphic models of general relativity as representing the same physical possibility ( leibniz equvalence ) and regard spacetime as a basic , substantival and concrete entity.sophisticated substantivalism : isomorphic models and 0 represent the same physical possibility (= l[eibniz ] e[quivalence ] ) and spacetime points exist as fundamental entities . le accords with the practice of physicsthe metric ( plus manifold ) gets its natural interpretation as spacetime and 0 can only be regarded as representing distinct possible worlds if spacetime points have primitive identity . denying that they do is good metaphysics independently of the hole argument ( pooley , 2000 ) . sophisticated substantivalism may be compatible with taking seriously physicists concerns , but does it have a coherent motivation ? the obvious thing to be said for the position is that one thereby avoids the indeterminism of the hole argument . a less ad hoc motivation would involve a metaphysics of individual substances that does not sanction haecceitistic differences , perhaps because the individuals are individuated by their numerical distinctness is grounded by their positions in a structure . stachel has recently sought to embed his response to the hole argument in exactly this type of more general framework . i hope enough has been said to indicate the coherence of such a point of view ; it is perhaps a modest structuralism about spacetime points , but it is a far cry from the objectless ontology of the ontic structural realist ( pooley , 2006 , p. 102 ) . a frequent response [ to the argument from leibniz equivalence ] is that one can regard all isomorphic models of general relativity as representing the same physical possibility ( leibniz equvalence ) and regard spacetime as a basic , substantival and concrete entity . sophisticated substantivalism : isomorphic models and 0 represent the same physical possibility (= l[eibniz ] e[quivalence ] ) and spacetime points exist as fundamental entities . le accords with the practice of physicsthe metric ( plus manifold ) gets its natural interpretation as spacetime and 0 can only be regarded as representing distinct possible worlds if spacetime points have primitive identity . denying that they do is good metaphysics independently of the hole argument ( pooley , 2000 ) . sophisticated substantivalism may be compatible with taking seriously physicists concerns , but does it have a coherent motivation ? the obvious thing to be said for the position is that one thereby avoids the indeterminism of the hole argument . a less ad hoc motivation would involve a metaphysics of individual substances that does not sanction haecceitistic differences , perhaps because the individuals are individuated by their numerical distinctness is grounded by their positions in a structure . stachel has recently sought to embed his response to the hole argument in exactly this type of more general framework . i hope enough has been said to indicate the coherence of such a point of view ; it is perhaps a modest structuralism about spacetime points , but it is a far cry from the objectless ontology of the ontic structural realist ( pooley , 2006 , p. 102 ) . le accords with the practice of physics the metric ( plus manifold ) gets its natural interpretation as spacetime and 0 can only be regarded as representing distinct possible worlds if spacetime points have primitive identity . 
again , there is sophisticated substantial agreement between pooley s viewpoint and those of earman and stachel ( see pooley , 2013 , for a more recent account of his position ) . my earliest discussions of the hole argument were based on a purely relationalist approach to space - time , which denied any physical significance to points of the four - dimensional manifold m ; they only became elements of space - time after a metric tensor field was specified . this was largely in response to mathematical formulations of physical field theories in terms of geometric object fields on a given m. if one conceded that the points of this manifold represented elements of space - time , this seemed to hand victory to the absolutists ( subsequently metamorphosed into substantivalists ) . when i realized the full implications of the fiber bundle approach , which allows the definition of m as the quotient of the total manifold of the bundle by the equivalence relation defining the fibration ( see section 4.2 ) ; and of schouten s ( 1951 ) observation that , in contrast to mathematical tensor fields , physical tensor fields have physical dimensions ; i came to recognize that the points of m , so defined , do have the physical character of elements of space - time even before the choice of a particular field ( cross section of the bundle).76 what they lack is individuality , or haecceity as i put it after adopting teller s ( 1998 ) terminology ( see section 5.5 ) . this led me to a structuralist account of physical theories , but not the kind of structuralism espoused by ladyman and french ( see , e.g. , ladyman , 1998 ; french and ladyman , 2003 ) , which they call ontic structural realism but which is really a kind of hyper - relationalism.77 stachel ( 2006a ) espouses a form of traditional realism as a philosophical position , and also stresses the priority of processes over states ; hence the name dynamic structural realism for this position . to summarize the last three sections , starting from various relationalist or substantivalist positions , earman , pooley and stachel have been led to a third position , which earman calls substantive general covariance , pooley calls sophisticated substantivalism , and stachel calls dynamic structural realism ; but all three positions are essentially the same . the major difference is stachel s emphasis on the utility of the fiber bundle approach for the mathematical expression of this position . after this lengthy historical - critical excursus , i shall turn to some philosophical arguments for this approach , starting with the definition of some terms already given in section 4 and appendix b , but repeated here for the benefit of those who did not read that section . a relation is said to be internal if one or more essential properties of the relata78 depend on the relation . it is said to be external if no essential property so depends.79 this distinction is in turn based on the distinction between intrinsic and extrinsic properties of an entity . some of its intrinsic properties serve to characterize what has been variously called the essence , nature or natural kind of the entity ; if any of these essential intrinsic properties depend on its relation(s ) to other entities , then these relations are internal . no extrinsic property can depend on an internal relation .
whether a relation is internal or external is theory - dependent , and hence may depend on the theoretical level at which the objects are treated . in any physical theory , for example , a set of units must be adopted before a mathematical form can be given to any physical quantity . its numerical expression is actually a relation : the ratio of the quantity to its unit . at this level , it is an external relation based on the properties of the quantity and its unit . whether these properties themselves are intrinsic or extrinsic may depend on the theoretical level considered , and on the system of units adopted . haecceity refers to those properties of the relata that enable us to individuate entities of the same quiddity , that is , of the same nature . up until the last century , it was assumed that entities of the same quiddity could always be individuated by some of their intrinsic properties , independently of any relations into which they entered . this is leibniz s principle : the identity of indiscernibles.80 any further individuation due to such relations was supposed to supervene on this basic individuation.81 with the advent of quantum statistics , it was argued that there are entities , the elementary particles , that have quiddity ( any particle with charge e , mass me and spin 1/2 is an electron ) but no inherent haecceity ( one cannot distinguish one electron from another by any intrinsic property ) . and the refutation of the hole argument can be similarly formulated : the points of space - time have quiddity but no inherent haecceity . so theoretical physics led to the introduction of a new category : entities having quiddity but no inherent haecceity . an important example of the utility of this category in mathematics is the fundamental distinction between geometric and algebraic structures . geometry deals with elements that have ( the same ) quiddity but lack inherent haecceity ; a set of internal relations between these elements then defines a particular geometric structure . the group of permutations of these elements preserving the defining internal relations is the symmetry or automorphism group of the geometry . each geometry has such a group of transformations of its elements , under which all geometrical relations of that geometry remain invariant.82 algebra deals with elements that possess both quiddity and haecceity ; a set of external relations between these elements defines a particular algebraic structure.83 coordinatization of a geometry by an ( appropriate ) algebra is the assignment of a unique element of this algebra to each point of the geometry ; one can carry out certain algebraic operations and then give the result a geometric interpretation . coordinate transformations : any coordinatization of a geometry gives each of its elements a haecceity , thus negating their homogeneity . this is restored by negating in turn any individual coordinatization : a group of coordinate transformations between all admissible coordinate systems is introduced . an admissible coordinate transformation is one that corresponds to an element of the automorphism group of the geometry . it follows that each point of the geometry will have every element of the algebra as its coordinate in ( at least ) one admissible coordinate system . to talk about a principle of relativity only makes sense if one has first defined a frame of reference . one then asserts that the laws of physics take the same form in all members of some class of frames of reference .
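to make the notions of admissible coordinate transformation and automorphism group concrete , here is a minimal special - relativistic illustration ( standard textbook material ; the symbols eta and lambda and the signature convention are generic assumptions of this sketch , not notation taken from the sources discussed above ) :
\[
x'^{\mu} = \Lambda^{\mu}{}_{\nu}\, x^{\nu} + a^{\mu} ,
\qquad
\Lambda^{\alpha}{}_{\mu}\, \eta_{\alpha\beta}\, \Lambda^{\beta}{}_{\nu} = \eta_{\mu\nu} ,
\qquad
\eta_{\mu\nu} = \mathrm{diag}(-1,+1,+1,+1) .
\]
these poincaré transformations are precisely the admissible coordinate transformations of minkowski geometry : they form its automorphism group , and each one carries an inertial coordinate system into another , which is the class of frames discussed next .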
in special relativity , this class of frames ( actually a group in this case ) consists of the inertial frames of reference . given the minkowski metric and its associated flat inertial connection , such a frame may be defined by taking any time - like autoparallel ( straight ) line , and constructing the family of such lines , one through each point of the manifold ( i.e. , a fibration of the space - time ) , each of which is parallel to the initial line.84 one may then pick a fiduciary point on each such line , and use the proper time τ along this world line , counted forwards and backwards starting from that point ( τ = 0 ) , to individuate the points along the line . assuming that each line is itself physically individuated ( given haecceity ) in some way , all the points of the space - time are now individuated . it is customary to choose all the fiduciary points to lie on the same space - like hyperplane orthogonal to the time - like fibration ( einstein convention for defining distant simultaneity ) . then the entire group of inertial frames may be generated from the initial one by the action of the poincaré group on the points of that inertial frame . of course , the trivial identity holds : if we move everything together with some diffeomorphism of the manifold , nothing has changed . but given that we move only the world lines with respect to the metric and connection , the poincaré group is the automorphism group of the inertial frames . the inertial frames thus form a rigid structure , individuating the points of minkowski space - time , and the hole argument fails , as it will for any finite - parameter lie group . in general relativity , a spatial frame of reference also corresponds to a fibration of the four - dimensional manifold m with the stipulation that , when a metric tensor field g is introduced , the fibration consist of curves with a unit time - like tangent vector field u : u = dx / dτ.85 we may then define projection operators parallel to the fibration and orthogonal to it . the vector field u represents the four - velocities of observers in the chosen reference frame , and the orthogonal projection of the metric field g represents the instantaneous spatial rest - frame of each observer . again , one may pick a fiduciary point on each time - like world line and use the proper time τ , forwards and backwards starting from that point , to individuate the points along the line . evolution of any geometric object field along the congruence will be represented by its lie derivative with respect to u . one will usually pick the fiduciary points so that they fit together smoothly to form a space - like hypersurface that transvects the fibration . now there are two possibilities : holonomic case : if one has chosen a congruence , the tangent field of which has vanishing rotation , there will be a foliation of space - time consisting of a one - parameter family of hypersurfaces orthogonal to the fibration . the fiduciary points can be chosen to lie on one hypersurface of the foliation , so that the local spatial rest - frames of the observers fit together to form a one - parameter family of global spatial rest - frames . this is the geometric basis of the traditional approach to the cauchy problem in general relativity ( see section 2.7).86 non - holonomic case : but there is no need to impose this requirement .
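the orthogonal projection of the metric invoked above can be written out explicitly ; the following is a minimal sketch in standard notation ( the signature - , + , + , + and the symbols u and h are assumptions of this sketch , not taken from the quoted sources ) :
\[
u^{\mu} u_{\mu} = -1 , \qquad
h_{\mu\nu} = g_{\mu\nu} + u_{\mu} u_{\nu} , \qquad
h_{\mu\nu}\, u^{\nu} = 0 .
\]
here h acts as the metric of the instantaneous spatial rest - frame of the observer with four - velocity u , while - u^{\mu} u_{\nu} projects along the congruence ; together they decompose any geometric object field into pieces parallel and orthogonal to the fibration .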
it is customary to introduce a triad of orthonormal space - like vectors ei ( i = 1 , 2 , 3 ) that , together with u , span the tangent space at each point of the manifold . then , the components with respect to this tetrad of any geometric object field , called the physical components by pirani , are assumed to be the quantities physically measurable by an observer in that frame at that point . on the assumption that each curve in the three - parameter fibration is physically individuated ( given haecceity ) in some way , and that some foliation is introduced to provide the fourth individuating quantity , the hole argument still fails , because a fibration and foliation provide an individuating field ( see section 4.4 ) , whether or not the rotation of the congruence vanishes . indeed , one does not even need a foliation : just as in the case of sr , if one hypersurface intersecting all the fibers is chosen as the origin for the proper time on each fiber ( i.e. , τ = 0 on this hypersurface ) , then the proper time on each fiber provides the fourth individuating quantity . the transformation from one fibration with associated proper times to another is merely a change of labeling of the individuation.87 this individuation evades the hole argument and allows the formulation of the cauchy problem for the einstein field equations in terms of lie derivatives of the tetrad components of the appropriate quantities with respect to any time - like congruence , holonomic ( see stachel , 1969 ) or non - holonomic ( see stachel , 1980).88 thus , the principle of relativity has been extended beyond inertial frames in minkowski spacetime : the laws of any physical theory based on a geometric object field , or indeed the laws governing any particle world - lines introduced into the theory , can be formulated with respect to any reference frame based on any such fibration and the associated proper times . the automorphisms of these reference frames now form a function group , which can be defined by its action on the orthonormal tetrad field ( u , ei ) ( i = 1 , 2 , 3 ) characterizing some initial frame . at any point x , an element of the group so(1 , 3 ) will take one such tetrad into another ( u′ , e′i ) ; such an element depends on six position - dependent parameters ( three rotations and three pseudorotations ) . since any smooth vector field is holonomic ( its integral curves form a congruence ) , the resulting field u′ will generate a new fibration . diffeomorphisms of the manifold m that are transitive and effective will take the origin of the first reference frame into the origin of the second.89 in this sense , the general theory does extend the principle of relativity from inertial frames in minkowski space - time to arbitrary orthonormal tetrad frames in pseudo - riemannian space - times , either given a priori ( background - dependent theories ) or constructed from a solution to the einstein equations ( background - independent theories such as general relativity ) . as noted in section 2.6 , in the case of a generic metric ( i.e. , one having no symmetries ) , the kretschmann - komar coordinates may be used to individuate the points of space - time . while non - generic metrics constitute a subset of measure zero , all known solutions to the einstein field equations belong to this subset . only the a priori imposition of some fixed , background symmetry group on the pseudo - metric tensor enables construction of such solutions ( see , e.g. , stephani et al . , 2003 ) .
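the symmetries referred to in this passage are isometries , generated by killing vector fields ; a minimal statement of the defining condition , in standard notation not drawn from the cited sources , is
\[
\mathcal{L}_{\xi}\, g_{\mu\nu} \;=\; \nabla_{\mu}\xi_{\nu} + \nabla_{\nu}\xi_{\mu} \;=\; 0 ,
\]
where \nabla is the levi - civita connection of g . for a four - dimensional ( pseudo- ) riemannian manifold the space of solutions ξ has dimension at most 4(4+1)/2 = 10 , which is the upper bound on the dimension of the isometry group in the classification that follows ; a generic metric admits no non - trivial solution at all .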
the symmetry group , also called the isometry group of the metric , determines a portion of the metric field non - dynamically ; the remaining portion obeys a reduced set of dynamical einstein equations . one must examine each symmetry group to see how much freedom remains in the class of solutions to these reduced equations ; in particular , whether enough freedom remains for a restricted version of the hole argument to apply to these solutions . the possible isometries of a four - dimensional pseudo - riemannian manifold have been classified : they are characterized by two integers ( m , o ) ( see , e.g. , stephani et al . , 2003 ; hall , 2004):91 the dimension m ≤ 10 of the isometry group , and the dimension o ≤ min(4 , m ) of the highest - dimensional orbits of this group . the two extreme cases are : the maximal symmetry group ( m = 10 , o = 4 ) . minkowski s - t is the unique ricci - flat space - time in this class . its isometry group is the poincaré or inhomogeneous lorentz group , which acts transitively on the entire space - time manifold . the field equations of special - relativistic field theories must be invariant under this group ; they provide the most important physical example of background - dependent theories . at the other extreme ( m = 0 , o = 0 ) is the class of generic metrics already mentioned . field theories that are covariant under the group of all diffeomorphisms of the underlying four - dimensional differentiable manifold ( see section 4 ) will include a subclass of the generic metrics among their models . if such a theory is generally covariant , so that all diffeomorphically - related models represent one physical model , it is called a background - independent theory . in between these two extremes lie all models of a background - independent theory that are restricted by the further requirement that they have some fixed non - maximal , non - trivial symmetry group . if this symmetry group is a function group , we shall say the theory has a partially - fixed background . as noted above , all known exact models of general relativity fall into this category.92 considerable work has been done on two classes of such models . the first is the mini - superspace cosmological solutions , in which so much symmetry is imposed that only functions of one parameter ( the time ) are subject to dynamical equations ( see , e.g. , ashtekar et al . ) ; quantization here resembles more the quantization of a system of particles than of waves , and does not seem likely to shed too much light on the generic case . the second is the midi - superspace solutions ( see torre , 1999 ) , notably the cylindrical wave metrics ( see ashtekar and pierri , 1996 ; ashtekar et al . , 1997a , b ; bičák , 2000 ) . here , sufficient freedom remains to include both degrees of freedom of the gravitational field . diffeomorphisms of a two - dimensional manifold with pseudo - metric are still possible , so a two - dimensional version of the hole argument can be formulated .
taking advantage of the fact that any two - dimensional pseudo - metric is conformally related to two - dimensional minkowski space , one can adopt a coordinate system in which the two degrees of freedom are represented by a pair of scalar fields obeying non - linear , coupled wave equations in this flat space - time ( stachel , 1966 ) . in addition to static and stationary fields , the solutions include gravitational radiation fields having both states of polarization . their quantization can be carried out formally as if they were two interacting two - dimensional fields ( see kouletsis et al . , 2003 ) . but , of course , the invariance of any results under the remaining diffeomorphism freedom must be carefully examined , as well as possible implications for the generic case . as noted in section 4.3 , the space of all four - geometries forms a stratified manifold , partitioned into slices . if a metric in one such slice is then perturbed , unless the perturbation is restricted to lie within the same ( or some other ) symmetry group , it will move the geometry into the generic slice of the stratified manifold . this observation is often neglected when perturbation - theoretic quantization techniques , developed for special - relativistic field theories , are applied to perturbations of the minkowski metric . diffeomorphisms of such perturbations can not be treated as pure gauge transformations on a fixed background minkowski s - t ; they modify the entire causal and inertio - gravitational structure of space - time ( see , e.g. , doughty , 1990 , chapter 21 ) . this seems to be the fundamental reason behind the problems that plague the application of special - relativistic quantization techniques to such perturbations . an important class of solutions to the field equations lacks global symmetries , but does have asymptotic symmetries as infinity is approached in null directions . this results in the possibility of separating kinematics from dynamics , and the asymptotic quantization of such solutions ( see komar , 1973 , section vi , and ashtekar , 1987 ) . the imposition of certain conditions on the behavior of the weyl ( conformal curvature ) tensor in the future ( past ) null limit allows conformal compactification of a large class of space - times ( penrose , 1963 ) by adjoining boundary null hypersurfaces , scri plus and scri minus , to the space - time manifold . both boundaries have a symmetry group that is independent of particular dynamical solutions to the field equations in this class . thus , there is a separation of kinematics and dynamics on these boundaries , and a quantization based on this asymptotic symmetry group can be carried out in more or less conventional fashion . more or less because the asymptotic symmetry group , the bondi - metzner - sachs ( bms ) group , is not a finite - parameter lie group , as is the poincaré group ; it includes four so - called supertranslation functions , which depend on two angular variables .
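as orientation for the bms group just mentioned , one common textbook presentation writes a supertranslation as a shift of the retarded ( bondi ) time u at null infinity by an arbitrary smooth function of the two angles ; this sketch is offered only as an illustration and is not meant to reproduce the exact parameterization intended in the passage above :
\[
u \;\longrightarrow\; u + \alpha(\theta,\phi) , \qquad
\alpha(\theta,\phi) = \sum_{\ell \ge 0} \sum_{|m| \le \ell} \alpha_{\ell m}\, Y_{\ell m}(\theta,\phi) .
\]
restricting α to the four ℓ ≤ 1 spherical harmonics recovers the ordinary space - time translations , so the poincaré group sits inside the bms group ; the full supertranslation sector , however , is infinite - dimensional , which is why the bms group is not a finite - parameter lie group .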
nevertheless , asymptotic gravitons may be defined as representations of the bms group , independently of how strong the gravitational field may be in the interior region ( ashtekar , 1987 ) . consider a natural or gauge - natural fiber bundle involving a geometric object field with base manifold m , and a covariant theory t that picks out a valid class of sections σ of this bundle . since the theory is assumed covariant , if σ is a model of the theory , then so is d*σ , where d* is the unique fiber - preserving bundle diffeomorphism corresponding to a base diffeomorphism dm if the bundle is natural , or any member of the class of such diffeomorphisms if the bundle is gauge - natural . consider the equation £x σ = 0 ( the lie derivative of σ with respect to x vanishes ) , where x is a vector field generating a one - parameter family of base diffeomorphisms ; we say that such a family of diffeomorphisms is a symmetry of σ generated by x . the class of all models of the theory is divided into equivalence classes by the following equivalence relation : two models σ and σ′ are equivalent if any x that generates a symmetry of σ also generates a symmetry of σ′ , and vice versa . if there is more than one generator of the symmetries in such an equivalence class , they form a group under addition with constant coefficients . the poisson bracket of two symmetry generators , [ x , y ] , is always a generator of a symmetry ; so the generators form a lie algebra.93 if we assume the theory to be background - independent , i.e. , generally covariant , we can impose a further condition on models of the theory : that they all have the symmetries generated by some particular lie sub - algebra . this results in a class of partially background - dependent theories , each based on the background - independent theory . there are a number of ways to treat general relativity as a gauge theory ( see sections 4.3 and 4.4 ; for a survey that includes some generalizations of general relativity , see hehl et al . , 1995 ) . we shall follow the discussion in trautman ( 1980 ) : for me , a gauge theory is any physical theory of a dynamic variable which , at the classical level , may be identified with a connection on a principal bundle . the structure group g of the bundle p is the group of gauge transformations of the first kind ; the group of gauge transformations of the second kind may be identified with a subgroup of the group aut p of all automorphisms of p. in this sense , gravitation is a gauge theory . the basic gauge field is a linear connection ( or a connection closely related to the linear connection ) . in addition to the connection , there is a metric tensor g which plays the role of a higgs field . the most important difference between gravitation and other gauge theories is due to the soldering of the bundle of frames lm to the base manifold m. the bundle lm is constructed in a natural and unique way from m , whereas a noncontractible m may be the base of inequivalent bundles with the same structure group . the soldering form leads to a torsion which has no analogue in nongravitational theories . moreover , it affects the group , which now consists of the automorphisms of lm preserving the soldering form . this group contains no vertical automorphism other than the identity ; it is isomorphic to the group diff m of all diffeomorphisms of m ( ibid . ) .
by contrast , in a gauge theory of the yang - mills type over minkowski space - time , the group is isomorphic to the semi - direct product of the poincaré group by the group g0 of vertical automorphisms of p . in other words , in the theory of gravitation , the group of pure gauge transformations reduces to the identity ; all elements of the group correspond to diffeomorphisms of m ( ibid . ) . trautman points out that , even for a given theory , the choice of structure group g is not unique . since space - time m is four - dimensional , if p = lm then g = gl(4 , r ) . but one can equally well take for p the bundle am of affine frames ; in this case g is the affine group . there is a simple correspondence between affine and linear connections which makes it really immaterial whether one works with lm or am . if one assumes , as usually one does , that the connection and g are compatible , then the structure group of lm or am can be restricted to the lorentz or poincaré group , respectively . it is also possible to take , as the underlying bundle for a theory of gravitation , another bundle attached in a natural way to space - time , such as the bundle of projective frames or the first jet extension of lm . the corresponding structure groups are natural extensions of gl(4 , r ) , o(1 , 3 ) or the poincaré group ( ibid . ) .
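a concrete way to picture the structure groups discussed by trautman ( a minimal sketch in generic notation , not taken from his text ) : a linear frame at a point is an ordered basis ( e_a ) of the tangent space , and two frames at the same point are related by an invertible matrix ,
\[
e'_{a} = e_{b}\, \Lambda^{b}{}_{a} , \qquad \Lambda \in GL(4,\mathbb{R}) ,
\qquad\text{and , for } g\text{-orthonormal frames,}\qquad
\Lambda^{\mathrm T} \eta\, \Lambda = \eta .
\]
so gl(4 , r ) acts on the fibers of the bundle lm of linear frames , and once a metric g is given and only orthonormal frames are admitted , the admissible changes of frame are cut down to the lorentz group o(1 , 3 ) , which is the restriction of the structure group referred to above .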
as indicated earlier ( see section 4.2 ) , for all relevant physical theories there are good arguments for considering sl(4 , r ) as the frame automorphism group , with the unimodular diffeomorphism group as the corresponding structural group ( see stachel , 2011 ; bradonjić and stachel , 2012 ) . to recapitulate the main results of section 4.1 : the essence of the hole argument is independent of the differentiability or even the continuity properties of a manifold . when one abstracts from these properties,94 a manifold diffeomorphism becomes a permutation of the members of the set ; so one can use fibered sets to formulate the hole argument for permutable theories . the covariance of a theory defined on a fibered manifold ( any valid model of it is turned into another valid model by a diffeomorphism acting on its base manifold ) becomes the permutability of a theory defined on a fibered set : a theory is permutable if any valid model of it is turned into another valid model by a permutation of the elements of its base set . the theory is generally permutable if an equivalence class of such mathematically distinct models corresponds to a single model of the theory . consider a system consisting of n so - called identical , or better , indistinguishable elementary particles , sharing a common quiddity . by nature , these particles lack an inherent haecceity ; but in order to formulate a dynamical theory for the system ( e.g. , in order to write down a lagrangian or hamiltonian for it in non - relativistic quantum mechanics ) , one needs to enumerate them ( i.e. , assign a number from 1 to n to each of the particles ) . this is a discrete example of coordinatization ( see section 4.1 ) , and one must undo this individuation by requiring invariance under all possible permutations of the initial enumeration . as discussed in section 4.1 for sets in general , such permutations can be done in either of two ways : ( active ) fix the enumeration and permute the particles ; or ( passive ) fix the particles and permute the enumeration . either way , it is obvious that each of the n particles will have each of the integers from 1 to n assigned to it in some of these permutations . like space - time points , particles of the same natural kind ( quiddity ) can only be individuated ( to the extent that they are ) by their position in some relational structure . in a theory that is permutable in both the active and passive senses , if some state of affairs is possible for a system that includes n indistinguishable particles , then the state of affairs resulting from the action of any element of the permutation group perm(n ) on these n particles must be an identical state of affairs . but what is a possible state of affairs in quantum mechanics ? as discussed in appendix b , one may adopt a synchronic or diachronic approach to such questions . the term state of affairs is often interpreted synchronically , as referring to the instantaneous state of the system ( e.g. , its state vector or wave function ) . but quantum theory can only treat open systems ( again see appendix b ) ; and its task is diachronic : to compute the probability amplitude for a complete process ( or phenomenon in bohr s terminology ) .
in short , only a complete process constitutes a possible state of affairs for a quantum - mechanical system ( see stachel , 1997 ) . if the system is generally permutable and a certain value of the probability for such a process is calculated , then the same value must be calculated for any process that results from this one by a permutation of the indistinguishable particles in either the initial act of preparation or ( inclusive or ) the final act of registration . in order to verify these probabilities , it seems that , for such individuation of an object , a level of structural complexity must be reached , at which it can be uniquely and irreversibly marked in a way that distinguishes it from other objects of the same nature ( quiddity ) . my argument is based on an approach , according to which quantum mechanics does not deal with quantum systems in isolation , but only with processes that such a system can undergo . a process ( feynman uses process , but bohr uses phenomenon to describe the same thing ) starts with the preparation of the system , which then undergoes some interaction(s ) , and ends with the registration of some result ( a measurement ) . in this approach , a quantum system is defined by certain essential properties ( its quiddity ) ; but manifests other , non - essential properties ( its haecceity ) only at the beginning ( preparation ) and end ( registration ) of some process . ( note that the initially - prepared properties need not be the same as the finally - registered ones . ) the basic task of quantum mechanics is to calculate a probability amplitude for the process leading from the initially prepared values to the finally - registered ones . ( i assume a maximal preparation and registration ; the complications of the non - maximal cases are easily handled ) ( stachel , 2006a ) . consider a scattering process for a system of indistinguishable particles , for example . the process consists of the transition from some initial free in - state of the particles to some final free out - state of the particles after their interaction with the target producing the scattering .
the cross section for this process depends on the choice of initial in- and out - states.95 for a permutable theory , if some value for such a cross section is a possible result , then the same value must result for the processes that arise from separate permutations of the particles in both the in - state and the out - state . then specification of a unique initial preparation for some process ( an experiment ) could never result in a unique prediction of the outcome of the registration : every possible permutation of the particles in the registration result would be equally probable . conversely , registration of a unique result ( an observation ) could never produce a unique retrodiction of the preparation that led to this outcome : every possible permutation of the particles in the act of preparation would be equally probable . the way out of this non - uniqueness paradox , of course , is to require that any permutable theory involving indistinguishable particles be generally permutable . the basic entities are elementary particles ( fermions ) and field quanta ( bosons ) . i reserve the term elementary particles for fermions and field quanta for bosons , although both are treated as field quanta in quantum field theory . i aim thereby to recall the important difference between the two in the classical limit : classical particles for fermions and classical fields for bosons . [ i]n the special - relativistic theories , a preparation or registration may involve either gauge - invariant field quantities or particle numbers . at the level of non - relativistic quantum mechanics for a system consisting of a fixed number of particles of the same type , this [ difference ] is seen in the need to take into account the bosonic or fermionic nature of the particle in question by the appropriate symmetrization or anti - symmetrization procedure on the product of the one - particle hilbert spaces . at the level of special - relativistic quantum field theory , in which interactions may change particle numbers , it is seen in the notion of field quanta , represented by occupation numbers ( arbitrary for bosons , either zero or one for fermions ) in the appropriately constructed fock space ; these quanta clearly lack individuality ( stachel , 2006a ) .
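the symmetrization and anti - symmetrization procedure mentioned in this passage can be written out explicitly ; the following is a standard textbook sketch ( generic notation , valid up to normalization when occupation numbers exceed one ) , not a quotation from the sources above :
\[
\Psi_{\pm}(x_1,\dots,x_N) \;=\; \frac{1}{\sqrt{N!}} \sum_{P \in S_N} (\pm 1)^{P}\,
\varphi_{a_1}(x_{P(1)}) \cdots \varphi_{a_N}(x_{P(N)}) ,
\]
with the upper sign ( full symmetrization ) for bosons and the lower sign ( anti - symmetrization ) for fermions ; the fermionic expression vanishes whenever two one - particle labels a_i coincide , which is the origin of the occupation numbers quoted above : arbitrary for bosons , zero or one for fermions .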
as discussed in section 4.3 , in background - independent theories such as general relativity , an analogous principle of general covariance holds for the points of space - time . this common lack of haecceity suggests that , whatever their nature , the fundamental entities of any theory purporting to explain a deeper physical level should satisfy the principle of maximum permutability . thiemann has pointed out that the concept of a smooth space - time should not have any meaning in a quantum theory of the gravitational field where probing distances beyond the planck length must result in black hole creation which then evaporate in planck time , that is , spacetime should be fundamentally discrete . the fundamental symmetry is probably something else , maybe a combinatorial one , that looks like a diffeomorphism group at large scales ( thiemann , 2001 ) . it is hard to believe that , having had to renounce intrinsic individuality at the level of field quanta in qft and at the level of events in gr , it will reemerge as we go to a deeper level , from which both qft and gr will emerge in the appropriate limits . [ t]he way to assure the inherent indistinguishability of the fundamental entities of the theory is to require the theory to be formulated in such a way that physical results are invariant under all possible permutations of the basic entities of the same kind . the exact content of the principle depends on the nature of the fundamental entities . for theories , such as non - relativistic quantum mechanics , that are based on a finite number of discrete fundamental entities , the permutations will also be finite in number , and maximal permutability becomes invariance under the full symmetric group . for theories such as general relativity , that are based on fundamental entities that are continuously , and even differentiably , related to each other , so that they form a differentiable manifold , permutations become diffeomorphisms . for a diffeomorphism of a manifold is nothing but a continuous and differentiable permutation of the points of that manifold . further extensions to an infinite number of discrete entities or mixed cases of discrete - continuous entities , if needed , are obviously possible ( stachel , 2006a ) .
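the principle quoted above can be stated compactly ; the notation below ( a base set s , models σ , permutations p ) is a generic paraphrase of the definitions recalled earlier in this section , not a formula from the quoted text :
\[
\text{permutability:}\quad \sigma \ \text{a model} \;\Longrightarrow\; P\cdot\sigma \ \text{a model} \quad \text{for all } P \in \mathrm{Perm}(S) ;
\]
\[
\text{general permutability:}\quad \{\, P\cdot\sigma \mid P \in \mathrm{Perm}(S) \,\} \ \text{corresponds to a single physical model.}
\]
for a finite set s , perm(s) is the symmetric group ; for a differentiable manifold , its role is played by diff(m) , the continuous and differentiable permutations of its points , exactly as the quotation states .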
current versions of string theory , for example , do not meet this criterion , and it has been suggested that an ultimately satisfactory version of that theory will have to be background - independent ( see the discussions in stachel , 2006a , b ; greene , 2004 ) . on the other hand , various discretized space - time theories , such as causal set theory , do seem to meet this criterion , but have other problems ( see the discussion in stachel , 2006a ) . the next section 6.4 deals with a much more modest problem : attempts to quantize the field equations of general relativity based on the application of some variant of the standard techniques for the quantization of field theories . there is a well - known tension between the methods of quantum field theory and the nature of general relativity . the methods of quantization of pre - general - relativistic theories are based on the existence of some fixed , background space - time structure(s ) with a given symmetry group.96 the space - time structure is needed both for the development of the formalism and equally importantly for its physical interpretation ( see dosch et al . , 2005 ) . it provides a fixed kinematical background for the dynamical theory to be quantized : the dynamical equations for particles or fields must be invariant under all automorphisms of the symmetry group . general relativity , by contrast , is a background - independent theory , with no fixed , non - dynamical structures . to recapitulate , its field equations are generally covariant under all differentiable automorphisms ( diffeomorphisms ) of the underlying manifold , the points of which have no haecceity ( and hence are indistinguishable ) until the dynamical fields are specified . in a background - independent theory , there is no kinematics independent of the dynamics . this applies both to the homogeneous einstein equations and to the inhomogeneous einstein equations coupled to the dynamical equations for any non - gravitational fields . and this is still the case if the automorphism group is restricted to the unimodular diffeomorphisms ( see section 4.2 ) . however , general relativity and ( special - relativistic ) quantum field theory do share one fundamental feature that often is not sufficiently stressed : the primacy of processes over states . the four - dimensional approach , emphasizing processes in regions of space - time , is basic to both ( see , e.g. , stachel , 2006a ; reisenberger and rovelli , 2002 ; dewitt , 2003 ) . the ideal approach to quantum gravity would be a diffeomorphism - invariant method of quantization that takes process as primary . however , the most successful approach so far , loop quantum gravity , only goes a certain way in this direction : it singles out a preferred fibration and foliation ( see section 5.7 ) ; and by adopting a ( 3 + 1 ) hamiltonian approach and a set of redundant variables subject to constraints on the initial hypersurface , it effects a certain separation between kinematics and dynamics . but given these limitations , it has produced a mathematically rigorous and , surprisingly , unique ( but see nicolai and peeters , 2007 ) kinematic hilbert space ( see ashtekar , 2010 , for a brief review ) .
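the ( 3 + 1 ) hamiltonian structure alluded to here can be summarized schematically ; the following is a standard adm - type sketch ( generic lapse n and shift n^i , constraints written up to boundary terms ) and is not meant to reproduce the specific variables of loop quantum gravity :
\[
H[N, N^{i}] \;=\; \int_{\Sigma} d^{3}x \,\bigl( N\,\mathcal{H} + N^{i}\,\mathcal{H}_{i} \bigr) ,
\qquad \mathcal{H} \approx 0 , \qquad \mathcal{H}_{i} \approx 0 .
\]
the total hamiltonian is a sum of constraints : the momentum constraints generate spatial diffeomorphisms of the initial hypersurface , while the hamiltonian constraint encodes the dynamics , which is one way of seeing the statement above that in a background - independent theory there is no kinematics independent of the dynamics .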
even in non - relativistic quantum mechanics , the basic goal is to calculate a probability amplitude for a process connecting some initial preparation to some final registration . however , the existence of an absolute time allows one to choose a temporal slice of space - time so thin that it is meaningful to speak of instantaneous measurements of the initial and final states of the system ( see micanek and hartle , 1996 ) . but this is not so for measurements in ( special - relativistic ) quantum field theory , nor in general relativity ( see , e.g. , bohr and rosenfeld , 1933 , 1979 ; bergmann and smith , 1982 ; reisenberger and rovelli , 2002 ; stachel and bradonjić , 2013 ) . the breakup of a four - dimensional space - time region into lower - dimensional sub - regions ( in particular , into a one - parameter family of three - dimensional hypersurfaces ) raises a problem for measurements in both quantum field theory ( see bohr and rosenfeld , 1933 , 1979 ; dewitt , 2003 ) and general relativity ( see bergmann and smith , 1982 ) . a breakup of a process into the evolution of instantaneous states on a family of space - like hypersurfaces is a useful , perhaps sometimes indispensable , calculational tool . but no fundamental significance should be attached to such breakups , and mathematical results obtained from them should be carefully examined for their physical significance from the four - dimensional , process standpoint ( see , e.g. , nicolai and peeters , 2007 ) . measurability analysis ( see bergmann and smith , 1982 ) , based on the relation between formalism and observation ( see reisenberger and rovelli , 2002 ) , sheds light on the physical implications of any formalism . the possibility of the definition of some physically significant quantity within a theoretical framework should coincide with the possibility of its measurement in principle ; i.e. , by means of idealized measurement procedures consistent with that theoretical framework . this is true at the classical level , at which any complete classical set of physical observables should be measurable in principle by a single compound procedure.97 this criterion is easily met by unconstrained systems , such as a set of non - relativistic particles interacting via two - body potentials , or a scalar field obeying the classical klein - gordon equation . but delicate problems arise in applying it to constrained dynamical systems , and in particular to gauge field theories , including general relativity ( see section 6.2 ) . the problem becomes even more delicate for quantum systems , in which the existence of the quantum of action h is taken into account . the finite value of h precludes the measurement of a complete set of classical observables by a single compound procedure . it becomes important to show that a complete set of quantum observables , as defined by the theory , can indeed be so measured in principle . non - relativistic quantum mechanics and quantum electrodynamics have been shown to meet this criterion ; and it has been employed as a test of proposals for what should be the fundamental physical quantities defined in quantum gravity ( see bergmann and smith , 1982 ; amelino - camelia and stachel , 2009 ) . rovelli ( 2004 ) and oeckl ( 2008 , 2013 ) have shown how to define such measurements on the hypersurface bounding a four - dimensional region of space - time , even in a background - independent theory .
in field theory , the analog of the data set ( x , t , x′ , t′ ) is the couple [ σ , φ ] , where σ is a 3d surface bounding a finite spacetime region , and φ is a field configuration on σ . the data from a local experiment ( measurements , preparation , or just assumptions ) must in fact refer to the state of the system on the entire boundary of a finite spacetime region . the field theoretical space 𝒢 is therefore the space of surfaces σ and field configurations φ on σ . quantum dynamics can be expressed in terms of a [ probability ] amplitude w[ σ , φ ] . following feynman s intuition , we can formally define w[ σ , φ ] in terms of a sum over bulk field configurations that take the value φ on the boundary . notice that the dependence of w[ σ , φ ] on the geometry of σ codes the spacetime position of the measuring apparatus . in fact , the relative position of the components of the apparatus is determined by their physical distance and the physical time elapsed between measurements , and these data are contained in the metric of σ . what is happening is that in background - dependent qft we have two kinds of measurements : those that determine the distances of the parts of the apparatus and the time elapsed between measurements , and the actual measurements of the fields dynamical variables . in quantum gravity , instead , distances and time separations are on an equal footing with the dynamical fields . this is the core of the general relativistic revolution , and the key for background - independent qft ( rovelli , 2004 , p. 23 ) .
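the formal definition following feynman s intuition that the quotation mentions can be indicated schematically ; this is only an illustrative sketch of the idea , in generic notation , not rovelli s own equation :
\[
W[\Sigma,\varphi] \;\sim\; \int_{\phi|_{\Sigma} = \varphi} \mathcal{D}\phi \;\, e^{\, i S[\phi] } ,
\]
a formal sum over bulk field configurations , weighted by the exponential of the action and restricted to those configurations that take the prescribed value \varphi on the bounding surface \Sigma ; the dependence of w on the geometry of \Sigma is what codes the space - time position of the measuring apparatus , as the quotation explains .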
in this sense , einstein s hole , as a symbol of process , has reasserted its physical primacy over hilbert s cauchy surface , as a symbol of instantaneous state ( see section 2.7 ) .
this is a historical - critical study of the hole argument , concentrating on the interface between historical , philosophical and physical issues . although it includes a review of its history , its primary aim is a discussion of the contemporary implications of the hole argument for physical theories based on dynamical , background - independent space - time structures . the historical review includes einstein s formulations of the hole argument , kretschmann s critique , as well as hilbert s reformulation and darmois formulation of the general - relativistic cauchy problem . the 1970s saw a revival of interest in the hole argument , growing out of attempts to answer the question : why did three years elapse between einstein s adoption of the metric tensor to represent the gravitational field and his adoption of the einstein field equations ? the main part presents some modern mathematical versions of the hole argument , including both coordinate - dependent and coordinate - independent definitions of covariance and general covariance , and the fiber bundle formulation of both natural and gauge - natural theories . by abstraction from continuity and differentiability , these formulations can be extended from differentiable manifolds to any set ; and the concepts of permutability and general permutability applied to theories based on relations between the elements of a set , such as elementary particle theories . we close with an overview of current discussions of philosophical and physical implications of the hole argument .
Introduction Early History Modern Revival of the Argument The Hole Argument and Some Extensions Current Discussions: Philosophical Issues Current Discussions: Physical Issues Conclusion: The Hole Argument Redivivus
PMC4900865
a problem that has frequently emerged as a critical issue in patients with venoarterial ( va ) extracorporeal membrane oxygenation ( ecmo ) is a dilatation of the left heart due to volume overload of the left ventricle . conventionally , a venting cannula is placed in the left atrium via the right upper pulmonary vein or the la auricle with a sternotomy or a lateral thoracotomy . however , these approaches are risky because of significant complications such as bleeding and scarring . the procedure chosen for percutaneous left heart decompression varies from insertion of the la venting cannula to placement of an atrial septal stent . recent reports showed that there was no significant difference between these procedures . here , we describe a successful percutaneous balloon atrial septostomy and an la venting cannula insertion during va ecmo in a patient with severe acute myocarditis . the patient was a 38-month - old , 11.7 kg boy . he had suffered from intraventricular hemorrhage and intracranial hemorrhage caused by neonatal asphyxia , and he had been kept on antiepileptic medication due to infantile spasm . the patient was transferred to the intensive care unit with very poor hemodynamics and subsequent supraventricular tachycardia ( heart rate > 200/min ) , which did not respond to intravenous amiodarone or adenosine . despite conventional therapy including inotropes , his condition continued to deteriorate with tachycardia . echocardiography revealed a marked decrease in the ejection fraction to 20% , and we therefore decided to support him with va ecmo . arterial cannulation was performed via the right carotid artery with an 8 fr percutaneous arterial cannula ( rmi ; edwards lifesciences , irvine , ca , usa ) , and a 14 fr percutaneous venous cannula ( rmi , edwards lifesciences ) was placed in the right internal jugular vein . ecmo flow was initially 1,300 ml / min and was maintained between 700 and 800 ml / min . after ecmo insertion , the patient s vital signs stabilized and his chest x - ray improved ( fig . ) . however , he began to produce a large amount of frothy , bloody endotracheal secretions and his pulse pressure disappeared twelve hours later . follow - up echocardiography revealed marked left heart distention and a left ventricular ejection fraction lower than 10% . therefore , fourteen hours after ecmo insertion , the patient was taken to the cardiac catheter laboratory for an atrial balloon septostomy and la venting cannula insertion . the procedure was accomplished with a percutaneous approach through the right femoral vein without complications and an 8.5 fr la venting cannula ( mullins sheath ; cook inc . ) ( fig . 2 ) . after placement of the la venting cannula , the pulse pressure appeared again , with a 15 mmhg gap between the systolic and diastolic pressure . his follow - up chest x - ray improved ( fig . ) . four days later , we observed recovery of left ventricular function and improvement of the chest x - ray ( fig . ) . follow - up echocardiography showed improved lv systolic function and an ejection fraction of 65% . since its introduction as a major support for severe respiratory distress , ecmo has become a well - established therapy in pediatric patients , particularly those with severe but reversible neonatal respiratory failure , cardiac failure , or cardiorespiratory failure . in particular , va ecmo has been used in pediatric patients with cardiac dysfunction , such as myocarditis , cardiac arrest , and cardiomyopathy .
left heart distension during va ecmo may develop , leading to progressive left heart deterioration , pulmonary edema , and impairment of myocardial oxygenation . despite its clinical importance , the management of left heart decompression in a patient on ecmo is rarely discussed . only sporadic case reports and a few articles have shown the efficacy of la decompression on functional recovery of the left heart [ 57 ] . conventionally , la decompression is achieved by insertion of an la venting cannula via sternotomy or thoracotomy . however , sternotomy and thoracotomy are themselves risky and have several complications , such as bleeding and significant scarring . la decompression with a percutaneous cardiac catheterization - based technique , including septostomy using a blade or radiofrequency ablation , balloon dilatation , and la venting cannula insertion , has been effective [ 2,46,8 ] . several institutions with la vents have identified technical issues related to management of the indwelling catheters , such as kinking , poor flow , movement of the catheter during required patient care , and ongoing concern for thrombosis . therefore , a recent study showed a shift in preference from la vent insertion to balloon dilation alone . however , balloon atrial septostomy alone is not always successful because of varying degrees of atrial septal thickness . we believe that using an la venting cannula after balloon dilation offers several advantages over balloon dilatation alone . first , the placement of an la venting cannula potentially allows for controlled decompression of the left heart by adjusting flow rates on the ecmo circuit or clamping the cannula . in addition , the size of the cannula can be tailored to each patient , so this procedure can be adjusted for use in smaller patients . for patients with left heart dysfunction causing pulmonary edema during va ecmo , percutaneous balloon atrial septostomy with la venting cannula insertion is a treatment option that carries a low risk of other complications such as bleeding .
patients with venoarterial extracorporeal membrane oxygenation ( ecmo ) frequently suffer from pulmonary edema due to left ventricular dysfunction that accompanies left heart dilatation , which is caused by left atrial hypertension . the problem can be resolved by left atrium ( la ) decompression . we performed a successful percutaneous la decompression with an atrial septostomy and placement of an la venting cannula in a 38-month - old child treated with venoarterial ecmo for acute myocarditis .
CASE REPORT DISCUSSION
PMC3299299
the classical renin - angiotensin - system ( ras ) is a proteolytic cascade which is constituted by multiple enzymes and effector peptides . the cascade starts when angiotensin i ( ang 110 ) is released from the propeptide angiotensinogen by kidney - secreted renin . the peptide metabolites produced from ang 110 by a variety of proteases act as ligands for angiotensin receptors in different tissues leading to a diversified panel of physiological functions mediated by angiotensin peptides . angiotensin ii ( ang 18 ) is one of the most extensively studied angiotensin peptides . it is mainly produced by the proteolytic action of angiotensin - converting enzyme ( ace ) by removal of the two c - terminal amino acids from ang 110 . ang 18 is able to bind to several cellular receptors leading to a variety of physiologic effects among different tissues and cell types . importantly , increased levels of ang 18 are reported to be associated with life - threatening pathologic conditions including hypertension , congestive heart failure , chronic kidney disease , and also tumor progression . ang 18 was described to directly increase blood pressure and vessel permeability , to induce na+ reabsorption and ros production , and to exert proinflammatory and proliferative effects on various cell types [ 4 , 5 ] . the disease - promoting functions of ang 18 make it a favorable therapeutic target in the treatment of many diseases , which is mainly addressed by preventing its formation with low - molecular - weight compounds that inhibit the appropriate enzymes of the ras cascade . an alternative way of decreasing ang 18 levels has become available in recent years and uses recombinant angiotensin - converting enzyme 2 ( ace2 ) . ace2 inactivates ang 18 by clipping off one c - terminal phenylalanine , thereby generating ang 17 . ang 17 is known to counteract the functions of ang 18 by activating the mas receptor [ 79 ] and therefore is thought to be the key effector peptide of the so - called alternative ras . therefore , the monocarboxypeptidase ace2 is a key activator of the alternative ras and is critically involved in the regulation of the classical ras , which is known to be functionally important in the vascular system and in a variety of organs [ 6 , 10 , 11 ] . the biological function of the ras has been investigated in cardiovascular [ 12 , 13 ] , pulmonary , fibrotic , nephrologic , and atherosclerotic models . throughout all these studies , the loss of ace2 activity in knock - out variants induced pathologies which could be rescued by systemic administration of the recombinant enzyme . ace2 can therefore be regarded as one of the key players of the renin - angiotensin - system ( ras ) , being involved in fluid homeostasis , blood pressure regulation , inflammatory processes , and cell proliferation . ace2 is a membrane anchored glycoprotein which is expressed in most organs and blood vessels and recognizes multiple peptide substrates within the ras and other peptide hormone systems . besides ang 18 , its substrates include ang 110 , des - arg - bradykinin , apelins , and dynorphins , all of which have been reported to be cleaved by ace2 in vitro , with ang 18 being the preferred substrate regarding conversion rates . we recombinantly expressed both human ace2 ( rhace2 ) and murine ace2 ( rmace2 ) and compared their substrate conversion rates in vitro and in blood plasma which represents the natural compartment of enzyme action .
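because several enzymes and peptides are introduced in quick succession ( note that the labels ang 110 , ang 18 , ang 17 , and ang 19 in this text denote angiotensin 1 - 10 , 1 - 8 , 1 - 7 , and 1 - 9 ) , the cleavage steps described above can be summarized as a small lookup table ; the python sketch below is only an illustrative restatement of the pathway as described in the text , not an exhaustive model of the ras .

```python
# minimal sketch of the ras cleavage steps described above:
# each entry maps (substrate, enzyme) -> product.
RAS_CLEAVAGES = {
    ("angiotensinogen", "renin"): "ang 1-10",  # classical cascade start (kidney-secreted renin)
    ("ang 1-10", "ace"):          "ang 1-8",   # ace removes the two c-terminal residues
    ("ang 1-8",  "ace2"):         "ang 1-7",   # ace2 clips one c-terminal phenylalanine (alternative axis)
    ("ang 1-10", "ace2"):         "ang 1-9",   # second ace2 route into the alternative axis
}

RECEPTOR_NOTES = {
    "ang 1-8": "binds several cellular receptors; pressor and proinflammatory effects",
    "ang 1-7": "activates the mas receptor; counteracts ang 1-8",
}

def product_of(substrate, enzyme):
    """return the peptide formed when `enzyme` acts on `substrate`, if listed."""
    return RAS_CLEAVAGES.get((substrate, enzyme), "not covered in this sketch")

print(product_of("ang 1-8", "ace2"))   # -> ang 1-7
print(RECEPTOR_NOTES["ang 1-7"])
```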
in previously mentioned murine knock - out models , rhace2 was frequently used to restore ace2 activity . despite the fact that the sequence homology between murine and human ace2 is only 83% , it has been assumed that the murine and human enzymes have the same catalytic activity and function . in this work we will highlight species - specific differences between human and murine ace2 regarding their function of keeping the balance between the classical and the alternative ras . the extracellular domains of human or murine ace2 were recombinantly expressed in cho cells under serum - free conditions . the sequence identity between rhace2 and rmace2 amounts to 84% , which leads to minor alterations in physicochemical properties and altered patterns in posttranslational modifications , especially n - glycosylation . both expression products were purified by sequentially performing a capture step on a deae - sepharose , ammonium sulfate precipitation , followed by a purification step on a hic - phenyl sepharose column and a final polishing step on a superdex 200 gel filtration column . the purity of rhace2 and rmace2 was determined by high - performance liquid chromatography ( hplc ) and was found to exceed 98% . the concentrations of the final ace2 preparations were determined by size - exclusion chromatography ( sec ) with in - line photometric measurement at 280 nm and peak integration ( od280 : rhace2 : ε = 1621 l mol^-1 cm^-1 , rmace2 : ε = 1750 l mol^-1 cm^-1 ) . 2 μg of rmace2 and rhace2 were applied on a precast native 3 - 12% gradient gel ( invitrogen ) . anode buffer ( 50 mm bis / tris , 50 mm tricine ) and cathode buffer ( invitrogen nativepage cathode buffer additive , 50 mm bis / tris , 50 mm tricine ) were used to run the gel . 40% glycerol , 200 mm bis / tris , and 200 mm tricine were used as a loading buffer . nativemark unstained protein standard ( thermo scientific ) was used for estimation of molecular weights in coomassie blue - stained gels . proteins were stained in gel using the novex colloidal blue staining kit according to the manufacturer 's recommendations . samples were analyzed by sds - page using a 4 - 12% precast gradient gel ( nupage ) following reductive denaturation for 5 min at 95 °c . the gel was run in nupage mes sds running buffer ( invitrogen ) at 150 v for 80 min . in - gel protein staining was performed as described above . substrate - specific turnover rates for rhace2 and rmace2 were determined by in vitro kinetic analysis of ang 18 and ang 110 cleavage followed by hplc - based quantification of substrate and product concentrations . enzyme reactions were started by adding a defined amount of enzyme to substrate dilutions in mes buffer ( 50 mm mes , 300 mm nacl , 10 μm zncl2 , 0.01% brij-35 , ph 6.5 ) which were previously equilibrated at 37 °c . aliquots of the reaction mixes were taken every 10 minutes and stopped by addition of 0.5 m edta to a final concentration of 100 mm before hplc - based quantification of peptides . the concentration of peptides in enzymatic reactions was quantified by detection of peaks eluted from the hplc column using an in - line diode array detector . chromatography was performed by running a gradient on a reversed - phase matrix ( source 5rpc , 4.6 x 150 mm , 5 μm ) with 0.08% h3po4 in water as mobile phase a and 40% acetonitrile in water and 0.08% h3po4 as mobile phase b. the optical density at 280 nm was recorded inline for all eluting peaks , and peptide concentrations were calculated via calibration curves for each individual peptide .
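a minimal sketch of how a turnover number could be derived from such a time course , assuming the substrate is in excess so that product formation is approximately linear over the sampled interval ; the time points , concentrations , and enzyme amount below are invented placeholders , not the authors ' data or analysis scripts .

```python
import numpy as np

# illustrative estimate of a turnover number (kcat) from an hplc time course.
# assumption: substrate is in excess, so early product formation is ~linear.
time_min = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # sampling every 10 min, as in the protocol
product_uM = np.array([0.0, 4.3, 8.8, 13.1, 17.6])   # hypothetical ang 1-7 formed (uM)
enzyme_uM = 0.010   # assumed enzyme concentration in the assay (10 nM), e.g. from A280 / extinction coefficient

rate_uM_per_min = np.polyfit(time_min, product_uM, 1)[0]   # initial-rate slope (uM/min)
kcat_per_s = rate_uM_per_min / enzyme_uM / 60.0            # turnover number in s^-1
print(f"estimated kcat ~= {kcat_per_s:.2f} s^-1")          # ~0.73 s^-1 with these placeholder numbers
```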
anticoagulated blood was collected from healthy volunteers , and plasma was separated by 10 minutes of centrifugation at 3000 rcf . following addition of 100 pg / ml recombinant human renin ( sigma ) to isolated blood plasma , rmace2 or rhace2 was added to the samples . after 10 minutes of incubation at 37 °c , in the presence or absence of lisinopril ( sigma ) , samples were chilled on ice and immediately subjected to lc - ms / ms analysis . plasma samples were spiked with 100 pg / ml stable - isotope - labeled internal standards and subjected to solid - phase extraction using sep - pak cartridges ( waters ) according to the manufacturer 's protocol . following elution and solvent evaporation , samples were reconstituted in 50 μl of 50% acetonitrile/0.1% formic acid and subjected to lc - ms / ms analysis using a reversed - phase analytical column ( luna c18 , phenomenex ) and a gradient ranging from 10% acetonitrile/0.1% formic acid to 70% acetonitrile/0.1% formic acid in 9 minutes . the eluate was analyzed in line with a qtrap-4000 mass spectrometer ( ab sciex ) operated in the mrm mode using dwell times of 25 msec at a cone voltage of 4000 volts and a source temperature of 300 °c . angiotensin peptide concentrations were calculated by relating endogenous peptide signals to internal standard signals , provided that integrated signals achieved a signal - to - noise ratio above 10 . the quantification limits for individual peptides were found to range between 1 pg / ml and 5 pg / ml of undiluted plasma . the quality of the frozen enzyme batches used for later functional analysis was analyzed regarding enzyme purity and characteristics . the investigation of rmace2 and rhace2 by size - exclusion chromatography revealed that no detectable contaminations were present in the enzyme preparations . the protein concentration in the enzyme batches was determined by measuring the peak absorbance inline at 280 nm , peak integration , and subsequent calculation based on the corresponding extinction coefficients . rhace2 and rmace2 were found to slightly differ in retention times , pointing to a difference in their hydrodynamic molecular diameter , which was found to be lower for rmace2 ( figure 1(a ) ) . in order to further investigate this observation , we employed sds - page analysis , revealing a mass difference under denaturing conditions ( figure 1(b ) ) , indicating the presence of additional covalent mass - increasing modifications in rhace2 . the mass shift was found to be caused by two additional glycosylation sites in the human enzyme ( data not shown ) . according to our results , both recombinant ace2 versions apparently occur as noncovalent homodimers in physiological solution . these findings and their possible implications are discussed below . the calculated molecular weights of monomeric rmace2 and rhace2 are 85.2 kda and 85.3 kda , respectively . rmace2 and rhace2 were both found to give a single band at approximately 170 kda in native page , giving evidence for a homodimeric occurrence of both recombinant ace2 versions ( figure 1(c ) ) . these findings indicate that the recombinant enzymes , produced and purified according to our protocol , are free of contaminants and possess their natural folding and tertiary structure . based on the concentrations and purity of the enzyme batches previously determined , we investigated the biological activity of rhace2 and rmace2 in an in vitro system .
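the internal - standard calculation described above can be written schematically as follows ; the function , the per - peptide quantification limits , and the example peak areas are assumptions for illustration , not values taken from the study .

```python
# sketch of internal-standard based quantification as described above.
IS_SPIKE_PG_PER_ML = 100.0    # stable-isotope-labeled standard added to each plasma sample
MIN_SIGNAL_TO_NOISE = 10.0    # acceptance threshold stated in the methods
# assumed per-peptide lower limits, within the stated 1-5 pg/ml range:
LLOQ_PG_PER_ML = {"ang 1-10": 5.0, "ang 1-8": 2.0, "ang 1-7": 2.0, "ang 1-9": 5.0}

def quantify(peptide, endogenous_area, is_area, noise_area):
    """return the concentration in pg/ml, or None if the peak fails s/n or falls below the limit."""
    if noise_area <= 0 or endogenous_area / noise_area < MIN_SIGNAL_TO_NOISE:
        return None                                            # peak rejected: s/n < 10
    conc = endogenous_area / is_area * IS_SPIKE_PG_PER_ML      # ratio to the internal standard
    return conc if conc >= LLOQ_PG_PER_ML[peptide] else None

print(quantify("ang 1-10", endogenous_area=4.8e4, is_area=3.0e4, noise_area=2.0e3))  # -> 160.0 pg/ml
```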
therefore , we coincubated defined amounts of purified enzymes with an excess of ang 18 and ang 110 , respectively , which represent natural substrates for ace2 . we found that rhace2 as well as rmace2 converted ang 18 to ang 17 at comparable rates ( figure 2(a ) ) . the calculation of kcat via the graphically determined product formation rate and substrate degradation rate revealed that the turnover number of rhace2 for ang 18 was 1.2-fold higher than that for rmace2 ( table 1 ) . ang 110 was then tested as an alternative natural angiotensin substrate for ace2 . surprisingly , rhace2 turned out to be much more effective in performing the cleavage of ang 110 to ang 19 compared to rmace2 ( figure 2(b ) ) . the calculation of ang 110 - related turnover rates for rhace2 and rmace2 revealed that the ang 110 - related kcat for rhace2 was 15-fold higher than that for rmace2 ( 1.8 x 10^-2 versus 1.2 x 10^-3 s^-1 ) . furthermore , the comparison of turnover numbers for different substrates revealed that ang 18 is the preferred substrate for both enzymes in vitro , with a 42-fold higher turnover number for rhace2 ( 0.77 s^-1 ) and a 492-fold higher turnover number for rmace2 ( 0.62 s^-1 ) compared to ang 110 ( table 1 ) . these results demonstrate that human and murine ace2 possess substantially different turnover numbers for ang 110 , pointing to a species - specific functional diversity of the enzyme . in order to investigate the substrate specificity of rhace2 and rmace2 under physiologic conditions , we assessed the impact of the two enzymes on the human ras in blood plasma . therefore , we simulated a pathologically hyperactivated ras by addition of recombinant human renin to anticoagulated human blood plasma . in line with our previous in vitro findings , the addition of rhace2 or rmace2 to ex vivo incubated plasma samples revealed that both enzymes effectively degraded ang 18 to yield ang 17 and ang 15 when compared to the enzyme - free control sample ( figure 3(a ) ) . although the plasma concentration of ang 110 was only 161 pg / ml ( 124 pm ) , a concentration of 5 μg / ml ( 58.8 nm ) rhace2 was found to efficiently convert ang 110 to ang 19 , as indicated by the peptide levels depicted in the ras - fingerprints ( figure 3(a ) , right ) . in contrast to rhace2 , rmace2 was unable to decrease ang 110 concentrations in plasma and failed to induce detectable ang 19 levels ( figure 3(a ) , middle ) . of note , the increase of ang 17 and ang 15 in the presence of rhace2 was even more prominent , due to this second pathway of ang 17 production via ang 19 , which was selectively supported only by rhace2 . as in vitro experiments revealed that rmace2 was capable of converting ang 110 to ang 19 , although to a much lower extent compared to rhace2 , we further investigated the capability of rmace2 for ang 19 formation in human plasma at increased ang 110 concentrations . we added the ace inhibitor lisinopril to our ex vivo setting in order to prevent ace - mediated degradation of ace2-produced ang 19 and to increase ang 110 levels by preventing its degradation by endogenous ace . the presence of lisinopril led to significantly increased ang 110 peptide levels compared to untreated control samples ( 710 pg / ml versus 161 pg / ml ) ( figures 3(a ) and 3(b ) , left ) .
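the fold differences and unit conversions quoted above can be checked directly from the reported numbers ; the molecular weights used below ( ~1296.5 g / mol for ang 110 and the ~85 kda monomer mass stated earlier for the ace2 ectodomain ) are the only assumptions in this reader 's cross - check .

```python
# cross-check (reader's arithmetic, not the authors' script) of the reported
# turnover numbers and of the mass/molar concentration conversions.
kcat_per_s = {                       # values as reported for table 1
    ("rhace2", "ang 1-8"): 0.77,
    ("rmace2", "ang 1-8"): 0.62,
    ("rhace2", "ang 1-10"): 1.8e-2,
    ("rmace2", "ang 1-10"): 1.2e-3,
}
print(kcat_per_s[("rhace2", "ang 1-8")] / kcat_per_s[("rhace2", "ang 1-10")])   # ~43  (quoted: 42-fold)
print(kcat_per_s[("rmace2", "ang 1-8")] / kcat_per_s[("rmace2", "ang 1-10")])   # ~517 (quoted: 492-fold before rounding)
print(kcat_per_s[("rhace2", "ang 1-10")] / kcat_per_s[("rmace2", "ang 1-10")])  # 15   (quoted: 15-fold)

def pg_per_ml_to_molar(conc_pg_per_ml, mw_g_per_mol):
    """convert a mass concentration in pg/ml to mol/l (1 pg/ml = 1e-9 g/l)."""
    return conc_pg_per_ml * 1e-9 / mw_g_per_mol

print(pg_per_ml_to_molar(161, 1296.5) * 1e12, "pM ang 1-10")           # ~124 pM, as quoted
print(pg_per_ml_to_molar(5e6, 85_200) * 1e9, "nM rhace2 at 5 ug/ml")   # ~59 nM (quoted: 58.8 nM)
```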
comparison of rhace2 and rmace2 activities in lisinopril - treated complete human plasma revealed that rhace2 effectively converts large amounts of ang 110 to ang 19 in the physiological matrix , while rmace2 was found to be much less effective in catalysing this reaction ( figure 3(b ) ) . interestingly , lisinopril was not able to increase ang 17 concentrations in our experimental settings , which was in contrast to several published reports . no ang 17 was detectable in plasma samples incubated with lisinopril in the absence or presence of rhace2 or rmace2 ( figure 3(b ) ) , meaning that the concentration was below the quantification limit of 2 pg / ml plasma . for further investigation of these surprising results , whole blood incubations were performed ; these gave similar results as previously reported by other groups , showing an increase of ang 17 concentrations in control and rhace2 samples in response to lisinopril ( see supplementary figure 1 available online at doi:10.1155/2012/428950 ) . for further investigation of rmace2 and rhace2 substrate specificities , different states of ras activity were simulated by addition of lower amounts of recombinant human renin in the presence of lisinopril , confirming our findings about strongly diverging conversion rates for ang 110 between rhace2 and rmace2 in a substrate concentration - dependent manner ( figure 3(c ) ) . these results demonstrate that ang 110 serves as a natural substrate for rhace2 which is efficiently processed under physiological conditions . in contrast , rmace2 is much less effective regarding this catalytic conversion , strongly supporting a species - specific role of ace2 in the activation of the alternative ras pathway . we expressed and purified both rhace2 and rmace2 in cho cells under serum - free conditions . both cell lines stably secreted high levels of recombinant proteins for at least two months of roller bottle cultivation . the quality of the expression products did not change from the early to the latest passages . both rhace2 and rmace2 appeared as stable homodimers , and we did not identify monomeric or other multimeric forms . we evaluated the quality of the enzyme preparations by multiple methods including hplc , sec , sds - page , and native page , which all confirmed the purity of the final products and their homodimeric tertiary structure ( figure 1 ) . despite the similarity of the calculated molecular weights for the human and murine monomers , surprisingly high mass differences between rhace2 and rmace2 were observed in sec and could finally be attributed to species - specific sequence variations which lead to a different number of n - glycosylation sites in human and murine ace2 . ace2 is known to cleave a variety of peptide substrates in vitro , which are involved in a broad panel of physiological functions . based on our findings about the differences in tertiary structure between rhace2 and rmace2 , we hypothesized that the well - known sequence diversity between the two species might have an impact on the functional characteristics of the enzymes . therefore , we assessed the turnover rates of rhace2 and rmace2 for two natural and physiologically important substrates ( ang 110 and ang 18 ) in a well - defined in vitro model system ( figure 2 ) . we selected these substrates for ace2 characterization because of their functional importance in maintaining ras peptide levels . it has been described previously that the angiotensin peptides ang 18 and ang 110 are cleaved by ace2 in vitro .
as the conversion rates of ace2 for ang 110 were reported to be substantially slower than those for ang 18 , this enzyme reaction was assumed to play a minor role in the formation of ang 17 compared with its direct production by ang 18 cleavage . we could confirm previous findings regarding substrate preferences and found a 42-fold higher turnover number for ang 18 compared to ang 110 when cleaved by rhace2 ( table 1 ) . interestingly , our values for kcat were lower compared to previous publications , which might have been caused by differences in the employed experimental settings , in particular because of different buffer systems . while showing comparable turnover rates for ang 18 , the ang 110 - related turnover rate for rmace2 was found to be only 7% of the respective rate for rhace2 . this fundamental difference in the substrate conversion rates of the two enzymes might also have substantial impact on the regulation of the ras under physiologic conditions in the two different species . unfortunately , only limited conclusions about physiological consequences can be drawn from in vitro experiments . an important feature of the physiologic conditions in blood plasma is that all ras enzymes except renin are present in excess compared to their substrates , which is the exact opposite of the in vitro situation . although in vitro investigations are very useful for the comparison of enzyme characteristics in one and the same model system , they tell us very little about the in vivo situation . therefore , we developed an ex vivo experimental setup which allowed us to investigate the enzymatic function of the two recombinant enzymes in their physiological environment , with their natural substrates being present at picomolar concentrations . we would like to point out that these ex vivo conditions reflect the human in vivo plasma conditions regarding circulating enzyme concentrations after systemic administration of rhace2 ( data not shown ) . although ex vivo incubations are very reproducible and reflect an integrated picture of soluble enzyme activities throughout the ras in undiluted plasma , the angiotensin peptide concentrations are clearly higher in ex vivo incubated plasma samples , which might be caused by a lack of peptide flow towards organs or endothelial surfaces ex vivo . we investigated the ras in these samples by means of a newly developed lc - ms / ms method , which allows the quantification of multiple angiotensin metabolites simultaneously in one single sample of blood plasma . the obtained ras - fingerprints revealed that , in contrast to rmace2 , rhace2 is capable of generating ang 19 from ang 110 at physiologic peptide concentrations ( figure 3 ) . this activity gains even more importance in the presence of the ace inhibitor lisinopril , which blocks the formation of ang 18 . under the latter conditions , large amounts of ang 19 are generated in the plasma samples by rhace2 , while rmace2 is much less effective in its formation of ang 19 from ang 110 . although significant amounts of ang 110 and ang 19 were present in samples treated with lisinopril alone or in combination with rhace2 , no ang 17 could be detected in these samples . these findings were in contrast to previously published reports on ang 17 - accumulating effects of ace inhibitors in vivo [ 20 , 21 ] . in our experimental setting , we employed heparinized blood plasma as a sample matrix for ace2 characterization , which reflects in vivo conditions very well .
however , plasma lacks all blood cells , which might carry receptors and angiotensin peptide converting enzymes that are able to affect angiotensin peptide concentrations in vivo . comparison of plasma and whole blood samples revealed that lisinopril - induced ang 17 accumulation is strictly dependent on blood cell - associated angiotensin peptide converting enzymes , as it was exclusively observed in whole blood ex vivo incubations ( figure 3(b ) , supplementary figure 1 ) . as neutral endopeptidase ( nep , cd10 ) is known to be expressed on the cell surface of leukocytes [ 22 , 23 ] and is able to convert ang 110 and ang 19 to ang 17 in vitro , nep is very likely to be responsible for ang 17 accumulation also in vivo , especially in the presence of ace inhibitors , which block the formation of ang 18 , an important precursor of ang 17 . in human plasma , the ace2-mediated formation of ang 19 from ang 110 represents a significant route of establishment of the alternative ras . this may be , for example , of particular importance in vivo , when ace inhibitors are used for antihypertensive treatment . as ace2 is primarily expressed as a membrane - attached enzyme in several organs , the local production of ang 19 from ang 110 , which is increased when ace inhibitors are present , might become an important mechanism of ace inhibitor action in vivo in humans . in addition to ang 17 , ang 19 has also been reported to possess protective effects in cardiovascular disease models . these mechanistic considerations seem to be of particular importance in humans , while murine model systems for the investigation of ace inhibitor efficacy might be reconsidered with respect to the species - specific lack of ang 110 cleavage by murine ace2 . altogether , our findings describe important species - specific differences in the fine specificity of ace2 . thus , the murine ras is likely to function differently when compared to its human counterpart . furthermore , our data point to the importance of further investigations and improved understanding of the human ras , while data generated in murine model systems might be partially reconsidered with respect to different enzyme properties . deciphering the functional characteristics of the human ras using new analytical possibilities reveals previously invisible features of the system . the future generation of human - derived data describing ras function in health and disease will pave the way for new concepts of therapeutic manipulation of the system which are more specifically designed for application in humans .
angiotensin - converting enzyme 2 ( ace2 ) is a monocarboxypeptidase of the renin - angiotensin - system ( ras ) which is known to cleave several substrates among vasoactive peptides . its preferred substrate is angiotensin ii , which is tightly involved in the regulation of important physiological functions including fluid homeostasis and blood pressure . ang 17 , the main enzymatic product of ace2 , has become increasingly important in the literature in recent years , as it was reported to counteract hypertensive and fibrotic actions of angiotensin ii via the mas receptor . the functional connection of ace2 , ang 17 , and the mas receptor is also referred to as the alternative axis of the ras . in the present paper , we describe the recombinant expression and purification of human and murine ace2 ( rhace2 and rmace2 ) . furthermore , we determined the conversion rates of rhace2 and rmace2 for different natural peptide substrates in plasma samples and discovered species - specific differences in substrate specificities , probably leading to functional differences in the alternative axis of the ras . in particular , conversion rates of ang 110 to ang 19 were found to be substantially different when applying rhace2 or rmace2 in vitro . in contrast to rhace2 , rmace2 is substantially less potent in the transformation of ang 110 to ang 19 .
1. Introduction 2. Material and Methods 3. Results 4. Discussion
PMC3401650
arachnoid cysts in the brain usually have an indolent course unless complicated by headache , seizures , increasing head circumference , behavioral disturbances , ocular , motor , or speech disorders , or sudden cyst changes such as acute cyst expansion , sudden hemorrhage into the cyst , subdural hematoma , or subdural hygroma . rupture of an arachnoid cyst causing subdural hygroma is very rare , with few case reports . we herein present the clinical case , radiology , and discussion of an asymptomatic middle cranial fossa arachnoid cyst in a 15-year - old male child who presented with raised intracranial pressure features following a trivial trauma . a 15-year - old male child presented with complaints of headache , visual blurring , and projectile vomiting of 20 days duration . the child had a history of a trivial fall about 10 days prior to the onset of headache , with no loss of consciousness . on examination , the child had bilateral florid papilledema and right lateral rectus palsy . there were no other focal deficits or signs of meningeal irritation . computed tomography ( ct ) scan of the brain showed a left middle fossa , galassi type 3 arachnoid cyst , with bilateral subdural hygroma / hematoma ( chronic ) , bilateral diffuse cerebral edema , and mass effect causing compression of both frontal horns [ figure 1 ] . magnetic resonance imaging ( mri ) of the brain showed bilateral collections in the subdural space , hypointense on t1w [ figure 2 ] and hyperintense on t2w [ figure 3 ] images , matching the intensities of cerebrospinal fluid ( csf ) , with a widened sylvian fissure and a compressed temporal lobe on the left side , suggestive of an arachnoid cyst with subdural hygroma and mass effect . computed tomography scan , plain axial section , showing a hypodense region compressing the temporal horn with bilateral subdural hygroma . magnetic resonance imaging of the brain , t1w image , showing subdural hypointensity in the temporal region and bilateral convexities compressing the temporal and frontal lobes on the left side , suggestive of an arachnoid cyst with subdural hygroma and mass effect . magnetic resonance imaging of the brain , t2w image , showing subdural hyperintensity in the temporal region and bilateral convexities compressing the temporal and frontal lobes on the left side , suggestive of an arachnoid cyst with subdural hygroma and mass effect . left pterional craniotomy , evacuation of the hygroma , fenestration of the cyst into the suprasellar cistern , and marsupialisation of the cyst were performed . the patient developed a pseudomeningocele , which was managed with lumbar csf drainage for 5 days , and he was discharged without any deficits . the postoperative imaging showed resolution of the subdural hygroma with a small extradural and subgaleal collection of csf [ figure 4 ] . the postoperative imaging showed resolution of the subdural hygroma with a small extradural and subgaleal collection of cerebrospinal fluid . arachnoid cysts are considered intra - arachnoidal in location and account for 1% of intracranial mass lesions . they can develop anywhere in the cerebrospinal axis but have a predilection for the middle cranial base . they are usually asymptomatic , but may present with raised intracranial pressure , focal neurologic deficits , or seizures . being indolent and slowly growing , most arachnoid cysts can be managed conservatively , reserving surgical intervention for symptomatic lesions . intra - cystic hemorrhage and subdural rupture of the veins running over the surface of the cyst are well described .
subdural rupture of the arachnoid cyst per se [ 46 ] , either traumatic or spontaneous , is sparingly reported , with about 21 cases documented in the literature . even a minor trauma can cause rupture of the cyst , as seen in the present case , where the patient fell down while playing , without any loss of consciousness . the gradual seepage of csf from the cyst into the subdural space , probably through a flap - valve effect , caused a gradual rise in the intracranial pressure . ruptures of cysts located outside the middle cranial fossa are usually asymptomatic . however , in the present case , immediate operative intervention was warranted in view of the raised intracranial pressure and progressive neurologic deterioration .
intracranial arachnoid cysts developing in relation to the cerebral hemispheres and middle cranial fossa are usually incidental or asymptomatic . however , most of the clinically active cysts present with seizures because of chronic compression . presentation as raised intracranial pressure due to cyst rupture into the subdural space is a rare clinical entity . we herein present a case of an asymptomatic arachnoid cyst with rupture into the subdural space bilaterally and presenting as raised intracranial pressure .
Introduction Case Report Discussion
PMC4837825
patellofemoral osteoarthritis ( pfoa ) commonly occurs among middle - aged and elderly people , especially among asian women . pathogenically , lower limb alignment abnormalities result in wear and tear on the cartilage of the patellar / groove joint surface , subchondral sclerosis , and osteophytosis . patellofemoral osteoarthritis is a common cause of anterior knee pain and severely reduces the quality of life . in our hospital , we treated 156 pfoa patients using arthroscopic knee joint debridement , patelloplasty , circumpatellar denervation , and release of the lateral patellar retinaculum . a total of 156 pfoa patients ( 62 males , 94 females ; ages 45 - 81 years , mean 66 years ) were involved in this study . pfoa occurred on the left side in 73 patients and on the right side in 83 patients . the clinical manifestations included recurrent swelling and pain in the knee joint ; aggravated pain upon ascending / descending stairs , squatting down , or standing up ; positive patellar grinding tests ; pain mainly located at the patellar edges ; varying degrees of quadriceps femoris atrophy ; and a sense of joint friction during activities . knee - joint x - rays showed that the patellofemoral joint space was narrowed and osteophytosis was present . patellar axial views showed that the patellofemoral joints had degenerated , the patellofemoral joint space was narrow , and the patella was inclined outwards . t2-weighted and three - dimensional fat - suppressed spoiled gradient recalled echo sequence ( 3d - fs - spgr ) cartilage - sequence magnetic resonance imaging ( mri ) showed that the cartilage on the patella and femoral groove joint surface had degenerated or was lost and was mainly accompanied by slight degeneration in the menisci and tibial joints [ figure 1 ] . three - dimensional fat - suppressed spoiled gradient recalled echo sequence cartilage sequence magnetic resonance imaging shows that the cartilage on the patella and groove surface degenerated ( red arrow ) , and this degeneration was frequently complicated by slight degeneration in the menisci and tibial joints . the inclusion criteria were as follows : patients with pfoa , mainly from patella wear and osteophytosis only ; intact femorotibial joints , menisci , cruciate ligaments , and collateral ligaments ; and normal lower - limb alignment and normal bending and stretching abilities . all procedures in this group were performed by one highly qualified orthopedic surgeon . under local anesthesia , the surgical sites were disinfected and draped according to standard procedures , and the arthroscope was introduced through the medial and lateral knee eyes ( the soft - tissue depressions on either side of the patellar tendon ) . a comprehensive arthroscopic examination was first conducted to assess the patellar trajectory and patellofemoral articular cartilage degeneration and to classify the cartilage defect . next , the hyperplastic synovium , the medial and lateral compartments , and lesions in the intercondylar fossa were cleared . the lateral patellar retinaculum was released using a radiofrequency device , and circumpatellar denervation was performed [ figures 2 and 3 ] . under arthroscopy , the surgeon observed the patellar trajectory , the bone blockage and cartilage wound margins affecting patellofemoral joint activities , and the osteophytes present around the ground ( abraded ) patella . axial patellar osteophytes were cleared , and the joint space increased . after awakening from anesthesia , the patients began isometric quadriceps training .
patients began straight leg raising and knee flexion and extension training and started partial weight - bearing ambulation the day after surgery . the articular cartilage lesions seen at arthroscopy were classified , and knee function before and after surgery was evaluated using lysholm and kujala scores . lysholm knee scoring is globally used for the comprehensive evaluation of patellofemoral joint disorders and includes 8 items : limp , locking , pain , support , instability , swelling , difficulty in ascending stairs , and restriction in squatting . kujala scoring is used for the evaluation of patellofemoral joints and covers 13 items : limp when walking , supporting weight , walking distance , symptoms occurring when ascending or descending stairs , squatting ability and symptoms , symptoms occurring when running or jumping , symptoms occurring when kneeling down for long periods , degree of knee joint pain , anterior knee swelling , accompanying abnormal patellar activity , occurrence of thigh muscle atrophy , and restriction upon knee joint bending . the above items were tested preoperatively and again during the last follow - up to evaluate the therapeutic effects and the improvement in knee joint function after surgery . the quantitative data were expressed as mean ± standard deviation and tested with the paired t - test , with the significance level set at α = 0.05 .
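as a minimal sketch of the statistical comparison described above ( paired t - test at α = 0.05 ) , the snippet below shows how per - patient pre - and postoperative scores would be compared ; the six score pairs are invented for illustration and are not patient data from this study .

```python
from scipy import stats

# minimal sketch of the paired comparison described above (alpha = 0.05);
# the score pairs below are invented for illustration, not study data.
pre_lysholm  = [70, 74, 69, 76, 73, 71]
post_lysholm = [79, 83, 77, 84, 80, 78]

t_stat, p_value = stats.ttest_rel(post_lysholm, pre_lysholm)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference is statistically significant at the 0.05 level")
```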
seven of the 156 patients were lost to follow - up , resulting in a follow - up rate of 95.5% . during 10 - 24 ( 18.8 ± 3.5 ) months of follow - up , the incisions all healed well , and no recurrence , infection or nerve or vascular injury occurred . after surgery , the average lysholm knee score improved from 73.29 ± 4.48 to 80.93 ± 4.21 [ table 1 ] , and the kujala score improved from 68.34 ± 6.22 to 76.48 ± 6.54 [ table 2 ] . in paired t - tests based on the patellofemoral articular cartilage degeneration classification , both scores were improved significantly among patients with grade i - iii cartilage defects , but not among patients with grade iv defects . the scores for each specific item ( e.g. limp , locking , instability , pain , swelling , ascending and descending stairs , squatting ) were improved significantly ( paired t - test ) [ tables 3 and 4 ] . table captions : preoperative and postoperative lysholm scores ; preoperative and postoperative kujala scores ; comparison of pre - and postoperative lysholm scores by cartilage damage grade ; comparison of pre - and postoperative kujala scores by cartilage damage grade . the patellofemoral joint , composed of the patella and the femoral groove , is a major component of the extension apparatus of the knee . under normal gait , the patellofemoral joint bears 0.5- to 1-fold of the body weight .
however , the weight - bearing is increased to 3- to 4-fold of body weight upon walking up and down stairs and is maximized to 8-fold of body weight upon deep knee flexion . this high stress causes wear and tear in the patellofemoral articular cartilage and thus accelerates its degeneration . anterior knee pain is mainly caused by an abnormal patellar trajectory ( due to congenital deformity , injury or degeneration ) or patellofemoral joint malalignment . patellar subluxation alters the contact surface of the patellofemoral joint and increases the pressure on the joint surface , which leads to abnormal stress imposed on the patella , structural destruction of the cartilage collagen fibers , and finally , cartilage wear , tear and degeneration . because the patellae are rich in nerve endings , exposure of the nerve endings below the cartilage will also induce anterior knee pain . tension on the lateral patellar retinaculum will lead to pressure on the lateral surface of the patellofemoral joint and finally to degeneration of the articular cartilage . according to the study by lin et al . , the incidence rate of pfoa among people older than 50 years is 6.3% and is higher among females . the wear and tear on the patellofemoral joint is intensified by long - term rapid and forced bending and stretching of the knee joint , which are major causes of pfoa . non - surgical treatments include limitation of movement , pharmacologic symptom relief , and injection of hyaluronic acid into the articular cavity . as the condition progresses , however , the above treatments are unable to adequately relieve pain and/or symptoms . traditional surgical therapies include advancement of the tibial tuberosity , patellofemoral arthroplasty , and total knee replacement . however , invasiveness , frequent bleeding and surgical risks limit the application of these approaches . moreover , arthroplasty is unsuitable for young patients . local blocking of the patellofemoral joint was tested among patients whose femorotibial joint , menisci , cruciate ligaments and collateral ligaments were all intact . these patients then underwent release of the lateral patellar retinaculum to relieve the high lateral patellofemoral pressure , together with circumpatellar denervation . the results proved that the pain - relieving effects were significant , with few complications , despite the presence of varying degrees of cartilage defects . the circumpatellar nerves include the cutaneous nerves , the superior branch of the saphenous nerve , and the joint branches of the knee extensor muscles [ figure 4 ] . the superior branch of the saphenous nerve ( also called the patellar branch ) passes through the internal superior border of the patella into the subcutaneous prepatellar area and is located in the prepatellar skin . the joint branches of the knee extensor muscles include the interior , medial and lateral femoral muscular branches ; the knee joint muscular branch ; and the anterior branch of the obturator nerve . the interior femoral muscular branch originates from the obturator nerve or saphenous nerve and is divided into two branches after entering the joint capsule . each nerve passes through the joint capsule , is distributed in the articular synovial area , and forms the somatic and autonomic nerve network in the joint . in one study , no significant difference between the experimental and control groups was observed ( via microscopy ) in articular cartilage thickness after denervation of rabbit knees .
moreover , sectioning the knee joint nerve branches had no effect on the articular cartilage tissue structure , and knee joint partial denervation therapy was determined to be feasible . therefore , in tka and in arthroscopic debridement , reducing patellofemoral joint pain through synovectomy and denervation of the articular branches around the patella has a reasonable anatomical basis . the treatment of patellofemoral arthritis includes many surgical procedures , such as arthroscopic debridement , drilling or microfracture surgery , lateral retinaculum release , tibial tuberosity advancement , patella thin cut surgery , patellar resection , patellofemoral arthroplasty , and tka , many of which can be combined . arthroscopy and drilling or microfracture surgery may have an effect on mild to moderate cartilage damage , but the long - term efficacy remains unclear . simple lateral retinaculum release is only appropriate for patellar displacement or tilt ; in previous studies , the changes in the force line were effective , but the results were controversial . the extensor force line was relatively normal after appropriate transposition of the anterior tibial tuberosity . there are reports of a significant pain relief effect of patella thin cut surgery for pure pfoa . however , this approach is not recommended for severe abrasions or when the patella thickness is < 20 mm . patellar resection may lead to quadriceps weakness , reducing the range of motion and leaving the femoral condyle vulnerable to trauma and without enough protection . total knee arthroplasty is a good choice for older patients , cases of severe functional limitations or bilateral knee involvement , and for patients who perform only mild physical labor . however , in relatively young patients or in very physically active pfoa patients , tka is not the best choice . the group of patients who received outpatient treatment to limit their activity and who received oral nsaids , glucosamine and intra - articular injections of sodium hyaluronate for more than 3 months experienced less - than - ideal relief . the 3d - fs - spgr sequence is recognized as one of the best cartilage mri sequences described in the literature . this technique can show hyaline cartilage clearly and can accurately locate not only large cartilage defects but also small superficial cartilage wear . spgr increases the contrast between articular cartilage and subchondral bone to improve the signal - to - noise ratio of the cartilage image , allowing the cartilage to be displayed more clearly . normal articular cartilage in the 3d - fs - spgr sequence demonstrates a significantly high signal , and in most areas of the joint cartilage , a laminar structure can be observed .
patelloplasty is able to lower the patellar lateral pressure , correct the patellar motion trajectory , improve bad contact on patellofemoral joint surfaces , prevent wear and tear on the patellofemoral joint , and delay degeneration . radiofrequency burning with denervation is able to clear away a portion of the circumpatellar nerves , reduce the nerve conduction of pain , and relieve anterior knee pain . with the above treatments , we eliminated the biomechanical factors affecting knee joint activities and blocked the vicious circle of degeneration and injury . moreover , the articular cavity was washed with enough saline water to fully eliminate the worn areas and inflammatory factors and to alleviate inflammatory reactions , which were the objectives of treatment . this technique uses conventional arthroscopic electrical cauterization of peripatellar synovial tissue , partial removal of the peripheral nerve with patella osteophyte drill grinding , and grinding of the hyperplasia . this operation can remove diseased tissue while maximizing preservation of normal cartilage and synovial hyperplasia , clean up around the patella , and improve patellofemoral involution relations . the operation reduces inflammatory substances caused by the friction of the patellofemoral joint hyperplasia , thereby reducing the pain . the patella is composed of different bundle branches , but peripheral nerve distributions overlap . therefore , even if some denervation exists , it will not completely block the patellar plexus nerve , preventing sensory loss in the patellar skin . this procedure of cutting the patellar plexus nerve around the patellar cartilage does not damage the patella in front of the tissue . the main patellar vascular holes are on the front area of the patella and hence it will not result in patellar fracture , necrosis or other complications . , through large randomized controlled studies , found that there was no difference between arthroscopic surgery and control subjects after treatment for osteoarthritis of the knee . thus , they concluded that arthroscopic surgery is just a placebo treatment for the disease . while this conclusion about arthroscopic techniques has a certain reference value , it was published more than 10 years ago , so it does not represent the current state of emerging technologies and new perspectives . the technique described here flushes the inflammatory factors to relieve inflammation , removes the source of patellofemoral malalignment osteophytes , and denervates nerve endings ; although it can not reverse the process of osteoarthritis , this approach was able to relieve pain and delay joint replacement surgery . however , this procedure was not very effective for patients with a low degree of cartilage defect and thus should not be their first treatment choice . the follow - up results show that among patients with degree i - iii cartilage degeneration , both lysholm and kujala scores improved significantly . in conclusion , the therapeutic effects for the treatment of pfoa with arthroscopic patelloplasty and circumpatellar denervation are closely associated with the degree of patellofemoral articular cartilage degeneration . this procedure is a minimally invasive procedure that lies between conservative treatment and joint replacement surgery . moreover , this approach is accepted by patients who do not obtain satisfactory results after conservative treatment and can not afford the financial cost of joint replacement . 
however , this procedure was not very effective for patients with a high degree ( grade iv ) of cartilage defect and thus should not be their first treatment choice . the follow - up results show that among patients with grade i - iii cartilage degeneration , both lysholm and kujala scores improved significantly . in conclusion , the therapeutic effects of the treatment of pfoa with arthroscopic patelloplasty and circumpatellar denervation are closely associated with the degree of patellofemoral articular cartilage degeneration . this is a minimally invasive procedure that lies between conservative treatment and joint replacement surgery . moreover , this approach is accepted by patients who do not obtain satisfactory results after conservative treatment and cannot afford the financial cost of joint replacement . this technique , as a therapeutic method , is suitable for mild - to - moderate patellofemoral arthritis , relieves pain to a certain extent , improves quality of life , delays joint replacement , and delays the progression of pfoa . however , the main limitation of this procedure is that it is difficult to obtain a better curative effect for patients with more severe articular cartilage degeneration and poor knee function . the long - term effects of this procedure will require further clinical observation and research .
the joint branch of the knee extensor muscle includes the interior , medial and lateral femoral muscular branches ; the knee joint muscular branch ; and the anterior branch of the obturator nerve . the interior femoral muscular branch originates from the obturator nerve or saphenous nerve and is divided into two branches after entering the joint capsule . , each nerve passes through the joint capsule , is distributed in the articular synovial area , and forms the somatic nerve and autonomic nerve network in the joint . in one study , no significant difference between the experimental and control group was observed ( via microscopy ) in articular cartilage thickness after the denervation of rabbit knees . moreover , there was no effect on the articular cartilage tissue structure of the knee joint nerve branch , and the knee joint partial denervation therapy was determined to be feasible . therefore , in tka and arthroscopic debridement , through synovectomy , articular branch of the patella peripheral denervation , to reduce the patellofemoral joint pain has reasonable mechanism of anatomy . the treatment of patellofemoral arthritis includes many surgical procedures , such as arthroscopic debridement , drilling or microfracture surgery , lateral retinaculum release , tibial tuberosity advancement , patella thin cut surgery , patellar resection , patellofemoralarthroplasty , and tka , many of which can be combined . arthroscopy and drilling or microfracture surgery may have an effect on mild to moderate cartilage damage , but the long - term efficacy remains unclear . simple lateral retinaculum release is only appropriate for patella displacement or tilt ; in previous studies , the force line changes were effective , but the results were controversial . the extensor force line was relatively normal in appropriate transposition within the anterior tibial tuberosity . there are reports that there is a significant pain relief effect of patella thin cut surgery for pure pfoa . however , this approach is not recommended for severe abrasions or when the patella thickness is < 20 mm . patellar resection may lead to quadriceps weakness , reducing the range of motion and leaving the femoral condyle vulnerable to trauma and without enough protection . total knee arthroplasty is a good choice for older patients , cases of severe functional limitations or bilateral knee involvement and for patients who perform mild physical labor . however , in relatively young patients or in very physically active pfoa patients , tka is not the best choice . the group of patients who received outpatient treatment to limit their activity and who received oral nsaids , glucosamine and intra - articular injections of sodium hyaluronate for more than 3 months experienced less - than - ideal relief . the 3d - fs - spgr sequence is recognized as one of the best academic cartilage mri sequences . this technique can show transparent cartilage clearly and can accurately locate not only large cartilage defects but also small superficial cartilage wear . spgr increases the contrast of articular cartilage and subchondral bone to improve the signal to noise ratio of the cartilage image , allowing the cartilage to be displayed more clearly . normal articular cartilage in the 3d - fs - spgr sequence demonstrates a significantly high signal , and in the most areas of joint cartilage , delamination can be observed . 
thin line , volume scanning and 3d reconstruction , and quantitative measurement of the thickness and volume of cartilage can also be performed to identify small lesions in the joint . the 3d - fs - spgr sequence has high sensitivity and specificity for knee articular cartilage degeneration that is consistent with arthroscopy . based on these findings , we can roughly judge the degeneration of articular cartilage by preoperative mri . after arthroscopic surgery in mild to moderate osteoarthritis and grade i - iii cartilage degeneration , knee pain relief was clear and the majority of joint function improved significantly . the aims of treatment of pfoa are to recover the inosculation of the patellofemoral joint , balance the soft tissues , and eliminate the primary causes of patellofemoral joint pain . patelloplasty is able to lower the patellar lateral pressure , correct the patellar motion trajectory , improve bad contact on patellofemoral joint surfaces , prevent wear and tear on the patellofemoral joint , and delay degeneration . radiofrequency burning with denervation is able to clear away a portion of the circumpatellar nerves , reduce the nerve conduction of pain , and relieve anterior knee pain . with the above treatments , we eliminated the biomechanical factors affecting knee joint activities and blocked the vicious circle of degeneration and injury . moreover , the articular cavity was washed with enough saline water to fully wash out wear debris and inflammatory factors and to alleviate inflammatory reactions , which were the objectives of treatment . this technique uses conventional arthroscopic electrical cauterization of peripatellar synovial tissue , partial removal of the peripheral nerves , drill grinding of patellar osteophytes , and grinding of the hyperplasia . this operation can remove diseased tissue and synovial hyperplasia while maximizing preservation of normal cartilage , clean up around the patella , and improve the patellofemoral joint relationship . the operation reduces inflammatory substances caused by the friction of the patellofemoral joint hyperplasia , thereby reducing the pain . therefore , even though some denervation occurs , the patellar nerve plexus is not completely blocked , so sensory loss in the patellar skin is avoided . this procedure of cutting the patellar nerve plexus around the patellar cartilage does not damage the tissue in front of the patella . the main patellar vascular holes are on the front area of the patella , and hence the procedure will not result in patellar fracture , necrosis or other complications . previous authors , through large randomized controlled studies , found that there was no difference between arthroscopic surgery and control subjects after treatment for osteoarthritis of the knee . thus , they concluded that arthroscopic surgery is just a placebo treatment for the disease . while this conclusion about arthroscopic techniques has a certain reference value , it was published more than 10 years ago , so it does not represent the current state of emerging technologies and new perspectives . the technique described here flushes the inflammatory factors to relieve inflammation , removes the source of patellofemoral malalignment osteophytes , and denervates nerve endings ; although it cannot reverse the process of osteoarthritis , this approach was able to relieve pain and delay joint replacement surgery . 
however , this procedure was not very effective for patients with a low degree of cartilage defect and thus should not be their first treatment choice . the follow - up results show that among patients with degree i - iii cartilage degeneration , both lysholm and kujala scores improved significantly . in conclusion , the therapeutic effects for the treatment of pfoa with arthroscopic patelloplasty and circumpatellar denervation are closely associated with the degree of patellofemoral articular cartilage degeneration . this procedure is a minimally invasive procedure that lies between conservative treatment and joint replacement surgery . moreover , this approach is accepted by patients who do not obtain satisfactory results after conservative treatment and can not afford the financial cost of joint replacement . this technique , as a therapeutic method , is suitable for mild - to - moderate patellofemoral arthritis , relieves pain to a certain extent , improves quality of life , delays joint replacement , and delays the progression of pfoa . however , the main limitation of this procedure is that it is difficult to obtain a better curative effect for patients with more severe articular cartilage degeneration and poor knee function . the long - term effects of this procedure will require further clinical observation and research .
background : patellofemoral osteoarthritis commonly occurs in older people , often resulting in anterior knee pain and severely reduced quality of life . the aim was to examine the effectiveness of arthroscopic patelloplasty and circumpatellar denervation for the treatment of patellofemoral osteoarthritis ( pfoa ) . methods : a total of 156 pfoa patients ( 62 males , 94 females ; ages 45 - 81 years , mean 66 years ) treated in our department between september 2012 and march 2013 were involved in this study . clinical manifestations included recurrent swelling and pain in the knee joint and aggravated pain upon ascending / descending stairs , squatting down , or standing up . pfoa was treated with arthroscopic patelloplasty and circumpatellar denervation . the therapeutic effects before and after surgery were statistically evaluated using lysholm and kujala scores . the therapeutic effects were graded by classification of the degree of cartilage defect . results : a total of 149 cases were successfully followed up for 14.8 months , on average . the incisions healed well , and no complications occurred . after surgery , the average lysholm score improved from 73.29 to 80.93 , and the average kujala score improved from 68.34 to 76.48 . this procedure was highly effective for patients with cartilage defects i - iii but not for patients with cartilage defect iv . conclusions : for pfoa patients , this procedure is effective for significantly relieving anterior knee pain , improving knee joint function and quality of life , and deferring arthritic progression .
I M Basic information Surgical procedures Observation items and therapeutic evaluation Statistical methods R D Anatomical analysis of patellofemoral joint Distribution of circumpatellar nerves Significance
PMC3204919
the global population is estimated to increase from 6.8 billion today to 8 billion by 2025 , which will put pressure on water demand from many perspectives as water is used in the production of both food and energy ( 4 ) . more people demand more food and also , with a shift in diet to more so - called westernised food , there will be an increased pressure for water ; agriculture accounts for 70% of all water use today ( 5 ) . by 2030 , it is estimated that the world will need to produce 50% more food and energy , which means a continuous increase in demand for water . the pollution of the seas is an established fact , and ocean transport of contaminants is growing as a health concern for populations in the area ( 68 ) . decision makers will have to very clearly include life quality aspects of future generations in the work as the impact of ongoing changes will be noticeable , in many cases , in the future . recently , it has been estimated that a 30% increase in fresh water will be needed to mitigate the causes of , and adapt to , climate change ( 5 ) . thus , according to these estimations , the demand for water will without doubt be increased in the near future . this article will focus on effects of climate change on water security with an arctic perspective , giving some examples from different countries of how arising problems are being addressed . water stress occurs when the demand for water exceeds the available amount during a certain period or when poor quality restricts its use . in 2010 , a report from the world bank found that the effects of water shortages are felt strongly by 700 million people in 43 countries ( 9 ) . another report from 2010 states that 80% of the world 's population is exposed to high levels of threat to water security ( 3 ) . the stress is not limited to the human sphere as a majority of the flora is also threatened ; a majority of the biodiversity dependent on river discharge is at risk of extinction , as are the flora and fauna dependent upon arctic lakes ( 3 ) . the impact on human health is thus complex with many parts of nature being affected and interacting in an interwoven biological / physiological communication . although the situation is a cause for considerable concern , technologies and expertise are being developed that can help address these problems . but to implement effective adaptation measures , it is important to raise awareness among decision makers as well as the general public as changes in water consumption at an individual level will be crucial to tackling water scarcity . it is a challenge of a pedagogical nature to show the need for individual actions and for personalised willingness to take on responsibility for mitigating changes of climate . water has traditionally been regarded as a free resource , but this can be changed . the term water footprint is a measure of how much water has been used during the production process of any goods or food . recognition of the water footprint in all aspects of society is needed to change public awareness about water value , and ultimately water consumption behaviour . air temperature has increased in the arctic , warming 0.6c since the early 20th century , with seasonal as well as geographical variations . precipitation is a parameter that is difficult to measure in the arctic and complex to predict . arctic climate impact assessment ( acia ) suggests that a 1% increase in precipitation per decade has occurred over the last century ( 2 ) . 
seasonal distribution of precipitation is important to consider as winter precipitation has increased since the 1970s and because arctic winter precipitation is projected to increase with continuing climate change . despite increased annual precipitation , a net summer drying effect is occurring due to decreased seasonal precipitation , increased temperatures , thawing permafrost and increased evapotranspiration . there has also been an increase in wind since the 1960s and in cyclone activity . this is favourable for northward expansion of agriculture and for natural plant and animal distribution . this will , however , cause an increased loss of water due to evapotranspiration , contributing to drier summer conditions in the future . degradation of the permafrost can result in drainage of ponds . in siberia and alaska , lakes in permafrost regions have undergone rapid change , some increasing in size and number , whereas others have decreased and in some instances disappeared . siberian rivers , like rivers in alaska , have increased in winter discharge , even in non - dammed tributaries . forecasts say that the total area of permafrost may shrink by 10 - 12% in 20 - 25 years with permafrost borders moving 150 - 200 km northeast in russia ( 10 ) . in the arctic , permafrost extends to up to 500 m below the ground surface , and it is generally just the top metre that thaws in the summer ( 8 ) . lakes , rivers and wetlands on the arctic landscape are normally not connected with groundwater in the same way they are in temperate regions . so , when the surface is frozen in the winter , only lakes deeper than 2 m and rivers with significant flow retain liquid water . surface water is often abundant in summer , when it serves as a breeding ground for fish , birds and mammals . in winter , many mammals and birds are forced to migrate out of the arctic . many humans in the arctic rely on surface water for community use , so when conditions change and access to water is diminished , the prerequisites for human survival are affected . only 40% of yakutia 's population is supplied with running water from centralised sources and 140 operational water pipes fail to meet sanitary standards ( 10 ) . the population in the arctic part of russia is also estimated to increase as huge investments in infrastructure and regional planning will occur during the coming decades . a study from alaska shows that when access to water is limited , it has consequences for health care . studies have shown 2 - 4 times higher hospitalisation rates among children <3 years of age for pneumonia , influenza and childhood respiratory syncytial virus infections and higher rates of skin infections in persons of all ages in villages where the majority of homes had lower water availability because of no in - house piped water source , compared to homes that had higher water availability because of in - home piped water service ( 11 ) . in alaska , climate change is resulting in damage and disruption of community water infrastructure in many arctic communities ( 12 ) . reduced availability of safe water results , according to the study performed in alaska , in increased rates of hospitalisation for respiratory and skin infections . this could increase the use of antibiotics , and an overuse of antibiotics might result in an increase in resistant bacteria . studies are in progress to investigate the situation . 
today in parts of northern russia as well as other areas of the arctic , surface water meets domestic needs such as drinking , cooking and cleaning , as well as subsistence and industrial demands . indigenous communities depend on sea ice and waterways for transportation across the landscape and access to traditional country foods . the industries also use large quantities of surface water during winter to build ice roads and maintain infrastructure . for all of these reasons , it is critical to understand the impacts of climate change on water security in the arctic with its specific demands . arctic warming means thawing of permafrost that is impacting both the community source water ( groundwater , rivers and lakes ) and water infrastructure , the piped water and water storage and purification systems often built on permafrost . floods have affected yakutia more than other regions in russia . in 2001 , a flood occurred in the city of lensk . a spring flood made the water level rise by 2.0 - 2.5 m , resulting in city infrastructure being destroyed and a 30-fold increase of hepatitis a . the total damages amounted to over 7 billion roubles ( 10 ) . the so - called geocryological hazard index , used to assess the risk of damage to structures built on permafrost , is especially high in chukotka , on the coast of the kara sea , in novaya zemlya and the north of the european part of russia . permafrost degradation along the coast of the kara sea may lead to intensified coastal erosion that moves the coastline back by 2 - 4 m per year , posing considerable risks for coastal population centres in yamal and taymyr . even in areas where there is good infrastructure , unexpected problems arise . during 2010 and 2011 , outbreaks of cryptosporidium parvum infections occurred in two municipalities in northern sweden causing disease in thousands of individuals and disrupting everyday life as water had to be boiled before being used . in östersund , the first municipality struck , > 12,000 persons got sick with gastrointestinal symptoms , and 61 were hospitalised . more than 50,000 persons were affected by the advice from the authorities that all water used for drinking or cooking should be boiled . the second outbreak affected the population in skellefteå , in northern västerbotten , where > 6,000 got sick . the water used for drinking had been boiled since the middle of april and the final cleaning of the water occurred in september 2011 . the advice about boiling caused a rapid response ; 2 days after this statement to the public , the number of new persons with symptoms declined . one cause that is under investigation is that the intake of surface water for drinking water is close to the sewage outlet and , as more precipitation has occurred during the last decades , a connection is established . as in other parts of the arctic , the infrastructure of yesterday , supposed to last until tomorrow , is not sufficient for the situation of today . improved surveillance systems are needed for community source water , including waterborne and water - washed diseases , to detect impacts of climate change in the arctic , and international networks need to be further developed . microbial surveillance of drinking water including water sources for indigenous peoples in the arctic should be prioritised . climate change health assessment methods have been developed in alaska ( 13 ) . in greenland , water quality is secured by legislation , day - to - day running of the water supply and supervision of the water resource . 
the government is implementing the eu drinking water directive , which is also an eu demand if greenland wants to continue to export foodstuffs to the eu ( 14 ) . the directive demands water quality information from public utilities at a level not used in greenland before . so , there is a need for information material , both in the form of data sheets with analysis results and as explanations and descriptions of the analysis results . a portal , owned by greenland resources , is the authorities ' medium for information to the public . in addition , there is a general request for gathering and structuring all the knowledge accumulated in the last 5 years about water quality , water resources , water handling and authority matters , including legislation , together with the last 20 years of water chemical analysis results . policy in one nation can have an impact on the water security of other nations . there is a need for governance at all scales : global , regional , national and local , as well as at the catchment level , and a need for linkages between these scales . the arctic council has , through the sustainable development working group , established the arctic human health expert group ( ahheg ) . this group of experts has the task of developing working plans for the improvement of health for the people living in the arctic . water security is important for national security as demonstrated by the international conflicts around access to water occurring in east africa . what is required to meet the increased demand is the implementation of effective governance , financing and regulation to allow technical solutions to be effective for global water security . thus , today it is of utmost importance to raise awareness of key issues and potential responses and have a broader public debate on sustainable resource use and management . existing values , cultural norms and organisational structures that empower the individual determine patterns of individual behaviour and organisational response . to influence this is a great pedagogic challenge , but the success of implementation by governments and public authorities relies on the response from the individual . maintaining and ensuring the security of water and the ability to supply demands from the water resources available are essential to humankind everywhere now and in the future and are equally important for vulnerable populations in the north . the authors have not received any funding or benefits from industry or elsewhere to conduct this study .
water is of fundamental importance for human life ; access to water of good quality is of vital concern for mankind . currently however , the situation is under severe pressure due to several stressors that have a clear impact on access to water . in the arctic , climate change is having an impact on water availability by melting glaciers , decreasing seasonal rates of precipitation , increasing evapotranspiration , and drying lakes and rivers existing in permafrost grounds . water quality is also being impacted as manmade pollutants stored in the environment are released , lowland areas are flooded with salty ocean water during storms , turbidity from permafrost - driven thaw and erosion is increased , and the growth or emergence of natural pollutants are increased . by 2030 it is estimated that the world will need to produce 50% more food and energy which means a continuous increase in demand for water . decisionmakers will have to very clearly include life quality aspects of future generations in the work as impact of ongoing changes will be noticeable , in many cases , in the future . this article will focus on effects of climate - change on water security with an arctic perspective giving some examples from different countries how arising problems are being addressed .
Global aspects on water security Water stress and water footprint Climate change in the Arctic Climate change effects on water security in the Arctic Surveillance Policymaking Conflict of interest and funding
PMC4691608
over the last decades , obesity has become a global epidemic and an important public health problem in many countries . this condition is largely due to excessive consumption of saturated fats and simple sugars [ 2 , 3 ] , which , associated with sedentarism , represent the modern lifestyle . obesity is recognized as a risk factor for many disorders including type-2 diabetes and nonalcoholic fatty liver disease ( nafld ) . nafld encompasses a spectrum of increasingly severe clinicopathological conditions ranging from fatty liver to steatohepatitis ( nash ) with or without hepatic fibrosis / cirrhosis . recent evidence suggests that nafld is also associated with cardiovascular and chronic kidney disease and increased risk of hepatocellular carcinoma [ 58 ] . it has been considered that insulin resistance and hyperinsulinemia play a key role in the pathogenesis of nalfd ( first causative step ) . excessive deposition of fat in adipocytes and muscles determines insulin resistance with subsequent accumulation of fat in the liver , which , in turn , increases the rate of mitochondrial beta - oxidation of fatty acids and ketogenesis that can promote lipid peroxidation and accumulation of reactive oxygen species ( ros ) in the hepatocytes [ 10 , 11 ] . these compounds generate a variety of cellular stimulations with subsequent inflammatory response , which has been recognized as the causal factor of nash / fibrosis ( second causative step ) [ 12 , 13 ] . in spite of growing knowledge , several aspects of nafld pathogenesis are still unknown . considering the difficulty in developing human studies to evaluate the influence of nutrition in the development of nafld and associated metabolic abnormalities , experimental models constitute a reliable alternative way . different animal models of nafld / nash have been developed , but few of them replicate the entire human phenotype [ 12 , 14 ] . these models may be classified into three basic categories : those caused by either spontaneous or induced genetic mutation ; those produced by either dietary or pharmacological manipulation ; and those involving genetic mutation and dietary or chemical challenges . the dietary manipulations used in these last two types of models usually do not resemble human dietary pattern . in the present study , we developed a model of obesity and obesity - related nafld in nongenetically modified wistar rats using a simple carbohydrate - rich diet , which resembles the current dietary pattern of humans , and followed the sequence of the pathophysiologic events and their clinical and metabolic consequences . in this context , it should be noted that , in the vast majority of studies on nafld in which animal models were employed , the description of the sequence of the pathophysiologic events and their consequences have not been addressed , as their key goal is usually the evaluation of a specific aspect such as a therapeutic intervention . furthermore , we evaluated the impact of physical training on the metabolic abnormalities associated with this disorder . sixty male wistar rats , approximately 28 days old ( after weaning ) , were housed individually and had free access to water and rat diet . 
the animals were randomly separated into the following groups : experimental group ( eg ) , fed with highly palatable diet ( see below ) during 5 ( eg5 , 6 rats ) , 10 ( eg10 , 6 rats ) , 20 ( eg20 , 6 rats ) , and 30 ( eg30 , 12 rats ) weeks , and control group ( cg ) , fed with standard rat chow during 5 ( cg5 , 6 rats ) , 10 ( cg10 , 6 rats ) , 20 ( cg20 , 6 rats ) , and 30 ( cg30 , 12 rats ) weeks . from week 25 to week 30 , 12 animals belonging to the eg30 ( 6 rats ) and cg30 ( 6 rats ) were submitted to physical training ( see below ) . at the end of each experimental period , after fasting for 10 hours , the animals were sacrificed . the livers were immediately removed and fragments of about 1 mm thickness were fixed in 4% formaldehyde , dehydrated , immersed in xylene , and then embedded in paraffin for histology . all experiments were approved by the ethics committee of the universidade federal de minas gerais for the care and use of laboratory animals ( cetea 53/2007 ) and were carried out in accordance with the regulations described in the committee 's guiding principles manual . the standard rat chow ( nuvilab - cr1 nuvital - colombo , brazil ) had the following nutrient composition : protein , 22% ; fat , 4% ; carbohydrate , 42% ; minerals , 10% ; phosphorus , 0.8% ; vitamins , 1% ; fiber , 8% ; water , 12.5% . the chemical analysis revealed that 100 g of this diet contained 309 kcal , 24.8 g of protein , 3.4 g of fat , 44.8 g of carbohydrates , 8.2 g of fixed mineral residue , and 18.8 g of dietary fiber . the diet known to be effective in inducing obesity in rats and described as highly palatable was composed as follows : 33% of standard rat chow compacted to powder , 33% of condensed milk ( moça , nestlé , brazil ) , 7% of sucrose ( refined sugar , união , brazil ) , and 27% of water . the condensed milk was nutritionally composed of carbohydrate , 56.7% ; fat , 8.3% ; protein , 6.7% ; water , 28.3% . according to the chemical analysis , 100 g of dried highly palatable diet contained 339 kcal , 16.1 g of protein , 3.4 g of fat , 61 g of carbohydrates ( 18% of simple carbohydrates ) , 5.1 g of fixed mineral residue , and 14.4 g of dietary fiber . the diet was prepared daily , weighed , fractionated in portions , and stored in the feeder for 8 - 10 hours . the remaining food in the feeder was weighed to calculate the final amount of ingested food . the water in the drinking bottles was renewed daily . on a weekly basis , the body weight , thoracic circumference ( tc ) ( measured between the foreleg and hind leg ) , and nasoanal length were measured . body mass index ( bmi ) , that is , the ratio between body weight ( g ) and the square of body length ( cm ) , was calculated . all animals were acclimatized to exercise on the motor - driven treadmill ( gaustec , brazil ) by running at a speed of 10 m / min at 5% inclination for 5 minutes / day , during 5 consecutive days . after exercise familiarization , trained rats were submitted to the physical training protocol , which consisted of running sessions with gradual increase in intensity across 5 weeks , 5 days / week . the speed and duration of the exercise bouts were increased until the rats were able to run at 25 m / min , 5% inclination , during 60 minutes / day . the achievement of this exercise intensity ensures that a significant endurance training effect is produced . 
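the dietary energy and anthropometric calculations above can be made concrete with a short , illustrative python sketch . it assumes the standard atwater conversion factors ( 4 kcal / g for protein and carbohydrate , 9 kcal / g for fat ) , which are not stated in the methods , uses the macronutrient values quoted in the chemical analyses , and takes a hypothetical body weight and nasoanal length for the bmi example .

```python
# illustrative sketch only : the atwater factors are a standard assumption not
# stated in the methods ; the macronutrient inputs come from the chemical
# analyses quoted in the text ( per 100 g of diet ) .

ATWATER = {"protein": 4.0, "carbohydrate": 4.0, "fat": 9.0}  # kcal per gram


def energy_per_100g(protein_g: float, fat_g: float, carbohydrate_g: float) -> float:
    """approximate metabolizable energy ( kcal per 100 g ) from macronutrients."""
    return (protein_g * ATWATER["protein"]
            + fat_g * ATWATER["fat"]
            + carbohydrate_g * ATWATER["carbohydrate"])


def rat_bmi(body_weight_g: float, nasoanal_length_cm: float) -> float:
    """bmi as defined in the text : body weight ( g ) / body length ( cm ) squared."""
    return body_weight_g / nasoanal_length_cm ** 2


if __name__ == "__main__":
    # standard chow : 24.8 g protein , 3.4 g fat , 44.8 g carbohydrate per 100 g
    print(round(energy_per_100g(24.8, 3.4, 44.8), 1))   # 309.0 kcal , matching the reported 309 kcal
    # highly palatable diet : 16.1 g protein , 3.4 g fat , 61.0 g carbohydrate per 100 g
    print(round(energy_per_100g(16.1, 3.4, 61.0), 1))   # 339.0 kcal , matching the reported 339 kcal
    # hypothetical rat : 450 g body weight , 24 cm nasoanal length
    print(round(rat_bmi(450.0, 24.0), 3))                # 0.781 g / cm2
```

running the sketch reproduces the reported energy densities ( 309 and 339 kcal / 100 g ) almost exactly , which suggests , though the paper does not say so , that a 4 - 9 - 4 conversion underlies the reported values .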
in order to ensure that all animals were subjected to the same handling stress , the untrained group was submitted to running exercise on the same days as the physical training , at the same speed , but for 2 minutes only . measurement of glucose , total cholesterol , very low - density lipoprotein- ( vldl- ) cholesterol , low - density lipoprotein ( ldl- ) cholesterol , high - density lipoprotein- ( hdl- ) cholesterol , and triglycerides was performed as recommended by the manufacturer ( bioclin , quimbasa , basic chemistry ltda , brazil ) using an autoanalyzer ( statplus 2300 , yellow spring inst , usa ) . serum concentrations of leptin and insulin were determined by radioimmunoassay ( rat leptin ria kit , rat insulin ria kit , linco research , usa ) using a gamma - ray counter ( mor - abbot , usa ) . the minimum detection value was 0.5 ng / ml . the determination of superoxide dismutase ( sod ) activity was adapted from dieterich et al . briefly , fresh liver samples were homogenized in 50 mm sodium phosphate buffer ( 1 ml , ph 7.8 , 37c ) and 1 mm of diethylenetriamine pentaacetic acid ( dtpa ) , immediately after their removal . the reaction was initiated by addition of pyrogallol ( 0.2 mm / l , 37c for 3 minutes ) and the absorbance measured at 420 nm . sod activity was calculated as u / mg protein , where 1 u of the enzyme was defined as the amount required to inhibit the oxidation of pyrogallol by 50% . catalase ( cat ) activity was measured in the supernatant of liver homogenate as described by nelson and kiesow . briefly , 0.04 ml of h2o2 , 0.06 ml of liver homogenate , and 1.9 ml of potassium phosphate buffer ( 50 mm , ph 7.0 ) were mixed to give a final concentration of 6 mm of h2o2 . the decomposition of h2o2 by cat was evaluated by the change in absorbance at 240 nm . cat activity was expressed as mmol of h2o2 decomposed per minute per milligram of protein . this procedure was adopted to avoid the possibility of interference from the activity of glutathione peroxidase , since the necessary cofactors were not present in the reaction medium . histological sections were prepared from the material embedded in paraffin and stained with hematoxylin - eosin . the criteria established by brunt et al . were used to describe the histological lesions . according to these criteria , macrovesicular steatosis is quantified based on the percentage of involved hepatocytes ( 0 = absent ; 1 < 33% ; 2 = 33 - 66% ; 3 > 66% ) , and its zonal distribution and the presence of microvesicular steatosis are noted ; hepatocellular ballooning is evaluated for zonal location , and the estimate of its severity ( mild , marked ) is based on the numbers of hepatocytes showing this abnormality . hepatic expression of malondialdehyde ( mda ) , leptin , and the leptin receptor ob - r was evaluated by immunohistochemistry in the animals sacrificed at weeks 20 and 30 . from paraffin embedded tissues , sections on silanized slides ( 4 mm ) were collected , deparaffinized , and hydrated . for immunohistochemistry , antigen retrieval with ethylenediaminetetraacetic acid ( edta ) at ph 8.0 , in a steamer for 30 minutes at 98c , was conducted , followed by tris hcl ph 7.6 washing . the whole procedure was performed using polymer detection system kit ( novolink polymer detection system , novocastra , usa ) . the primary antibodies used were anti - mda monoclonal antibody ( 1f83 ) ( cosmo bio co. , ltd . , japan ) diluted in 0.5 ml ; anti - ob ( a-20 ) sc-84 ; and anti - ob - r ( h-300 ) sc-8325 ( santa cruz biotechnology inc . 
, usa ) at a dilution of 1 : 250 and 1 : 100 , respectively . data are presented as frequencies and percentages , mean ± standard deviation ( sd ) , and median and interquartile range ( iqr ) . for each quantitative response variable , we developed linear regression models in which all variables with a p value ≤ 0.25 at univariate analysis would be included initially . however , due to the high level of correlation between the explanatory covariates , we opted to adjust the final model with the following covariates : group , physical training , variation in bmi ( δbmi ) , and variation in the amount of ingested calories ( δkcal ) . the adequacy of the models was assessed by analysis of the residuals . for the categorical variables , logistic regression models were developed , with inclusion of the variables that showed a p value ≤ 0.25 on the univariate analysis , and also clinical significance . the results of the anthropometric parameters , lipid and glucose profile , hormone levels , and antioxidant enzyme activity , as well as the results of their comparative analyses between eg and cg along the time of follow - up , are described in tables 1 , 2 , and 3 . insulin ( figure 1(a ) ) and leptin ( figure 1(b ) ) serum levels varied inversely over time in the eg . liver histology was normal ( figure 2(a ) ) in the cg at all times of the experiment . steatosis and hepatocellular ballooning ( figures 2(b ) and 2(c ) ) were observed only in the eg , from week 10 . steatosis was macro- and microvacuolar , located predominantly in zone 3 of the liver acinus . the intensity of the macrovacuolar steatosis varied from mild ( involvement of less than 33% of the hepatocytes ) to severe ( involvement of more than 66% of the hepatocytes ) regardless of the time of the experiment . ballooning was localized in zones 2 and 3 of the acinus , ranging from mild to marked , unrelated to the time of the experiment . the reaction for identifying mda ( figure 2(d ) ) was positive and intense , of cytoplasmic localization in zone 3 of the hepatic acinus , around the central vein , in eg20 and eg30 . leptin ( figure 2(e ) ) was identified in the cytoplasm , especially in zone 3 of the acinus , in eg20 and eg30 . ob - r was expressed as a weak cytoplasmic reaction , predominantly in zone 3 of the acinus , in the rats of both groups at weeks 20 and 30 . the comparison of the different variables between the physically trained and untrained groups showed higher serum levels of hdl - cholesterol in the first group : medians 75 mg / dl and 52.2 mg / dl , respectively ( p = 0.007 ) . no other clinical or metabolic variable was significantly different between the groups after the physical training . table 4 shows the results of the final linear and logistic regression models . in summary , blood glucose levels were 49% higher in eg rats than in cg rats , and the rats studied for 10 and 30 weeks had an increase of 49% and 65% , respectively , in serum glucose compared to those studied for 5 weeks . total cholesterol was 19.2 mg / dl higher in the eg in comparison with the cg . rats undergoing physical training showed an average increase of 27.1 mg / dl in hdl - cholesterol compared with those that did not exercise ; and each increase of 1 unit in kcal intake caused an average reduction of 0.03 mg / dl in hdl - cholesterol levels . regarding ldl - cholesterol , there was an average increase of 60.2 mg / dl for each increase of 1 unit in bmi . 
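the enzyme - activity unit definitions quoted in the methods above ( 1 u of sod = the amount inhibiting pyrogallol oxidation by 50% ; cat activity in mmol of h2o2 decomposed per minute per mg of protein , derived from the fall in absorbance at 240 nm ) can be written out as a small , hedged python sketch . the molar absorptivity of h2o2 at 240 nm , the 1 cm light path , and all input readings are assumptions for illustration , not values taken from the study .

```python
# illustrative sketch only : the molar absorptivity of h2o2 at 240 nm
# ( ~43.6 l mol-1 cm-1 ) , the 1 cm light path , and the example readings are
# assumptions , not values taken from the paper .

H2O2_MOLAR_ABSORPTIVITY = 43.6   # l mol-1 cm-1 at 240 nm ( assumed )
LIGHT_PATH_CM = 1.0              # standard cuvette ( assumed )


def sod_units_per_mg(percent_inhibition: float, mg_protein_in_assay: float) -> float:
    """1 u of sod = the amount inhibiting pyrogallol oxidation by 50% ,
    so units = ( % inhibition ) / 50 , normalised to protein in the assay ."""
    return (percent_inhibition / 50.0) / mg_protein_in_assay


def cat_mmol_h2o2_per_min_per_mg(delta_a240_per_min: float,
                                 reaction_volume_ml: float,
                                 mg_protein_in_assay: float) -> float:
    """catalase activity as mmol of h2o2 decomposed per minute per mg protein ,
    derived from the fall in absorbance at 240 nm via the beer - lambert law ."""
    molar_per_min = delta_a240_per_min / (H2O2_MOLAR_ABSORPTIVITY * LIGHT_PATH_CM)
    mmol_per_min = molar_per_min * reaction_volume_ml  # ( mol / l / min ) * ml equals mmol / min
    return mmol_per_min / mg_protein_in_assay


if __name__ == "__main__":
    print(round(sod_units_per_mg(35.0, 0.05), 1))                    # 35% inhibition , 0.05 mg protein -> 14.0 u / mg
    print(round(cat_mmol_h2o2_per_min_per_mg(0.30, 2.0, 0.06), 3))   # hypothetical reading -> ~0.229 mmol / min / mg
```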
the first , composed by the time ( categorical ) and groups of rats , showed that the eg20 and eg30 had , respectively , lower insulin values of 83% and 89% compared to eg5 . furthermore , the animals of eg had an average insulin levels increased by 123% compared to the cg . the second model , including time ( quantitative form ) , groups of rats , and kcal intake , showed that , for each increase of 1 unit in time , the average value of insulin decreased by 7% and , for each increase of 1 unit in kcal intake , the average value of insulin increased by 0.2% . the eg rats had an average insulin level increased by 100% compared to those of the cg . in eg20 and eg30 , the leptin values were 33% and 40% higher , respectively , compared to the rats followed for 5 weeks . the eg had a mean value of leptin increased by 267% compared to the cg , and for every increase of 1 unit in bmi the average value of leptin increased by 124.6% . the amount of sod was 24% lower in the animals followed for 30 weeks in relation to those studied for 5 weeks . in the eg , the mean values of sod were 11% lower compared to the cg ; and , for each increase of 1 unit in bmi , the mean sod values decreased by 54% . the rats studied for 20 weeks presented an average of 2.4 less cat units than those studied for 5 weeks , and in the eg an average of 1.8 less units of cat relative to the cg was observed . concerning the histological findings , it was found that , for each increase of 1 unit in the tc , the chance of expressing ballooning and steatosis increased by 50% . this study demonstrates that a diet with high amount of simple carbohydrates , which resembles the current human dietary pattern , was able to induce obesity - related nafld , here characterized histologically by hepatic steatosis and hepatocyte ballooning , clinically by increased tc and bmi associated with hyperleptinemia , and metabolically by hyperglycemia , hyperinsulinemia ( with subsequent insulin return to baseline levels ) , hypertriglyceridemia , increased serum levels of vldl - cholesterol , depletion of antioxidants liver enzymes , and increased levels of mda , an oxidative stress marker . furthermore , rats that underwent physical training showed a significant increase in hdl - cholesterol in comparison to those that did not exercise . high - fat and methionine choline - deficient diets are widely used to produce hepatic steatosis and nash in experimental animals [ 12 , 14 , 2126 ] . however , these diets do not reflect the usual dietary pattern of humans regarding their composition . diets high in both saturated fat and simple carbohydrate have also been commonly used in genetically modified or wild - type animals in experimental models of nafld [ 2735 ] . animal models in which nafld was induced by simple carbohydrate - rich diets ( usually fructose ) are less numerous , and in most of them only hepatic steatosis was observed [ 28 , 3644 ] . although the animal models that combine naturally occurring or induced genetic mutations associated with dietary or chemical challenges resemble the histopathology and pathophysiology of human nafld more closely , the dietary challenge is usually performed by high - fat or methionine choline - deficient diets [ 12 , 14 , 4547 ] . although each of these models is valuable , they fail to address key aspects of the process in humans . for example , few humans have diets that are deficient in methionine and choline . 
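a minimal sketch of the modelling strategy described in the statistical - methods paragraph above is given below , using the statsmodels formula interface . the dataframe layout and column names ( group , trained , delta_bmi , delta_kcal , glucose , ballooning ) are hypothetical stand - ins , and this is not the authors ' analysis code .

```python
# a minimal sketch under assumed column names ; not the authors ' analysis code .
import pandas as pd
import statsmodels.formula.api as smf


def fit_models(df: pd.DataFrame):
    # linear regression for a quantitative response ( serum glucose here ) ,
    # adjusted for group , physical training , and the variation covariates
    linear = smf.ols(
        "glucose ~ C(group) + C(trained) + delta_bmi + delta_kcal", data=df
    ).fit()

    # logistic regression for a categorical response ( hepatocellular
    # ballooning coded 0 / 1 here )
    logistic = smf.logit(
        "ballooning ~ C(group) + C(trained) + delta_bmi + delta_kcal", data=df
    ).fit(disp=False)

    return linear, logistic


# usage : linear , logistic = fit_models(df) ; print(linear.summary())
```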
moreover , rodents exposed to methionine- and choline - deficient diets are not obese ; rather , they lose weight and become more insulin - sensitive . on the other hand , the diet used in our investigation was balanced in terms of its content in proteins , lipids , carbohydrates , vitamins , and minerals , in addition to being highly palatable , normocaloric , and fiber containing . furthermore , it was administered in solid consistency , as pellets , during a relatively long period of time . what has usually been described in the other animal investigations is a rapid induction of obesity due to the administration , in a short period of time , of a high - caloric high - fructose and/or high - fat diet , as liquid in troughs or via a nasogastric tube . in synthesis , we sought to feed the animals with a diet as similar as possible to a normal diet regarding its content as well as its form of administration . in our study , free access to the sucrose - rich diet and high food consumption caused obesity / abdominal obesity in the eg rats from week 10 . obesity was associated with increased serum levels of glucose , triglycerides , vldl - cholesterol , and insulin , which are manifestations of insulin resistance [ 9 , 49 ] . the hyperinsulinemia led to increased hepatic synthesis of fatty acids , triglyceride accumulation in the hepatocytes , with subsequent steatosis . the de novo hepatic lipogenesis , which is aggravated by diets with higher carbohydrate content than fat , plays an important role in glucose homeostasis and development of hypertriglyceridemia and hyperinsulinemia [ 50 , 51 ] . for example , when the amount of ingested carbohydrate exceeds the total calorie needs , the rate of de novo hepatic lipogenesis increases by 10 times . likewise , this rate increases 27 times with the ingestion of a diet with high carbohydrate content compared to low - carbohydrate diets and fasting . a positive correlation between increase in serum levels of leptin and bmi was another finding of this study that corroborates human observations . the hyperleptinemia may be not only a consequence of hyperphagia and obesity , but also a result of the fructose component of the diet , which is in agreement with the study by vil et al . , which demonstrated induction of hyperleptinemia by fructose . in humans , increased levels of leptin are observed in obese individuals and in patients with nafld / nash . it is suggested that this increase may reflect a state of leptin resistance at central level as well in the muscles and liver [ 56 , 57 ] . in an attempt to understand the action of leptin in the liver and its possible role in the pathogenesis of nafld / nash , we evaluated the expression of leptin and ob - r in the hepatic parenchyma and found intense leptin reaction in eg30 , whereas ob - r was observed in both groups , without difference between them . a possible role of leptin as an inducer of hepatic mitochondrial beta - oxidation has been postulated . huang et al . demonstrated that leptin in vivo enhances the activity of the fatty acid oxidative pathway in the liver , thus contributing to the reduction of triglycerides and vldl - cholesterol in rats without leptin resistance . on the other hand , some authors observed increased mitochondrial beta - oxidation in the liver of leptin deficient mice ( ob / ob ) with severe steatosis . cao et al . showed that leptin , in the long term , can cause hepatic fibrosis due to the increase of the local levels of oxidative stress . 
therefore , it is possible to hypothesize that leptin may play a protective role in the early stages of nafld ; and , at later stages , it may contribute to the development of fibrosis . further studies are necessary to clarify the biological function of leptin in the normal liver and its possible role in diet - induced nafld . early stages of nafld were present in all liver samples of the eg from week 10 . at the final stage of the investigation , although more exuberant steatosis was expected , the pattern was similar to that observed at week 10 . the duration of the study may not have been long enough to allow the development of more severe steatosis and the histological changes that characterize nash . as the hepatic lesions that occur in nash are associated with the expression of proinflammatory cytokines in the liver , it is possible that their investigation could have demonstrated nash at an early stage . it is also possible to speculate that the high levels of leptin could be exerting a protective effect . mda , a marker of lipid peroxidation , presented exuberant expression in the eg , whereas this reaction was negative in the cg . oxidative stress induced by lipid peroxidation is a result of oxidant / antioxidant system imbalance . cellular stimulation by ros and the subsequent inflammatory response have been described as the second hit that culminate with the development of nash [ 62 , 63 ] . in this context , we found in eg30 a reduction in the levels of the antioxidants enzymes sod and cat . this observation suggests that during the initial phases of the experiment there was a balance between antioxidants / prooxidants constituents ; however , over time , an imbalance in favor of prooxidants was developed . the use of diets with high amounts of simple carbohydrates induces hypertriglyceridemia resulting in reduction of the antioxidants reserves [ 64 , 65 ] . although we observed hepatocellular ballooning denoting cell injury , one limitation of our study is the fact of not detecting nash histologically . this was also a finding in several of the previous models in which nafld was induced by a simple carbohydrate - rich diet [ 28 , 3644 ] . as stated above , it is possible that the time of the experiment was not long enough to enable the development of the histological characteristics of nash , which may require higher levels of ros and/or longer exposure to the offending agent , in addition to liver susceptibility probably related to genetically determined factors , such as preexisting defects in mitochondrial oxidative phosphorylation [ 66 , 67 ] . in the presence of intense and sustained production , ros can cause damage to cell membranes , proteins , and dna , leading to the release of proinflammatory cytokines , activation of hepatic stellate cells , fibrogenesis , and direct liver damage . the physical training used in this study was effective in increasing hdl - cholesterol , corroborating the findings from a study in zucker rats . on the other hand , other authors found no significant effect on hdl - cholesterol in rats or mice submitted to physical training [ 71 , 72 ] . no other metabolic parameter suffered alteration in response to physical exercise , which could have been due , at least partially , to the time not long enough of the physical training . in this context , 12 weeks of regular exercise reduced liver triglyceride content and serum levels of ldl - cholesterol in the kk / ta mice fed a high - sucrose diet . 
in humans , evidence suggests that regular exercise reduces the risk factors for nash [ 1 , 8 ] . our study demonstrated that a diet enriched with sucrose induced obesity , insulin resistance , diabetes , oxidative stress , and subsequent hepatic steatosis and hepatocellular ballooning . the lack of histologically evident inflammation and fibrosis in the liver parenchyma may have been due to the insufficient time of the experiment .
the pathogenesis of nonalcoholic fatty liver disease ( nafld ) is not fully understood , and experimental models are an alternative to study this issue . we investigated the effects of a simple carbohydrate - rich diet on the development of obesity - related nafld and the impact of physical training on the metabolic abnormalities associated with this disorder . sixty wistar rats were randomly separated into experimental and control groups , which were fed with sucrose - enriched ( 18% simple carbohydrates ) and standard diet , respectively . at the end of each experimental period ( 5 , 10 , 20 , and 30 weeks ) , 6 animals from each group were sacrificed for blood tests and liver histology and immunohistochemistry . from weeks 25 to 30 , 6 animals from each group underwent physical training . the experimental group animals developed obesity and nafld , characterized histopathologically by steatosis and hepatocellular ballooning , clinically by increased thoracic circumference and body mass index associated with hyperleptinemia , and metabolically by hyperglycemia , hyperinsulinemia , hypertriglyceridemia , increased levels of very low - density lipoprotein- ( vldl- ) cholesterol , depletion of the antioxidants liver enzymes superoxide dismutase and catalase , and increased hepatic levels of malondialdehyde , an oxidative stress marker . rats that underwent physical training showed increased high - density lipoprotein- ( hdl- ) cholesterol levels . in conclusion , a sucrose - rich diet induced obesity , insulin resistance , oxidative stress , and nafld in rats .
1. Introduction 2. Material and Methods 3. Results 4. Discussion 5. Conclusion
PMC3236469
patients with cleft palate often exhibit nasality , which is a distinctive feature and an important target in speech therapy and rehabilitation . to evaluate velopharyngeal function , the aerodynamic and acoustic aspects of nasalization have been studied . an aerodynamic exam can diagnose the degree of velopharyngeal closure [ 1 , 2 ] , and acoustic measurements can categorize velopharyngeal insufficiency [ 3 , 4 ] . the abnormal resonance generated by velopharyngeal insufficiency can be evaluated quantitatively using a nasometer . on the other hand , the voice and speech of patients with cleft palates have been studied using many techniques including spectral analysis , perturbation analysis , and formant analysis . zajac and linville and lewis et al . reported that cleft palate speakers have larger frequency perturbations ( jitter ) than normal controls . however , the methods used to calculate perturbations , jitter , and shimmer are only reliable for nearly periodic voice signals and can not reliably analyze strongly aperiodic signals . recently , nonlinear dynamic methods have enabled the quantification of aperiodic and chaotic phenomena [ 911 ] . in our previous paper , we reported that the lyapunov exponents ( les ) of the vowels /a/ , /e/ and /o/ for adult cleft palate patients are higher than those for normal resonance adults and that there were no correlation coefficients between les and nasalance scores ( nss ) . these results suggested that vocal fold vibration may be less stable in adult cleft palate patients than in normal resonance subjects and that the le may be a parameter independent of resonance . subsequently , we investigated the nonlinear dynamic characteristics of cleft palate speech and voice . in the present paper , the purpose was to clarify the difference between the les for cleft palate patients with hypernasality versus without hypernasality and to investigate the relationship between their les and nss . six repaired cleft palate patients with severe hypernasality ( mean age 9.2 years ; range 6 to 13 , 2 boys and 4 girls ) and six repaired cleft palate patients without hypernasality ( mean age 8.0 years ; range 6 to 13 , 4 boys and 2 girls ) were enrolled . the presence of hypernasality was perceptually judged by two speech therapists from okayama university hospital . the present study , which was approved by the okayama university institutional ethical board , was carried out after obtaining informed consents from the parents of all participants . the voices were recorded through a microphone ( shure bg1.1 , niles , ill ) on a portable solid - state recorder ( marantz pmd 640 , itasca , ill ) with a nasometer ii headset ( model 6400 , kay elemetrics corp . , lincoln park , nj ) in a quiet room designated for speech therapy in the okayama university dental hospital . the voice samples were recorded on the compact flush medium of the recorder at a sampling rate of 44.1 khz , at 16 bits , in a * .wav file format . the japanese vowels /a/ ; [ a ] , /i/ ; [ i ] , /u/ ; [ ] , /e/ ; [ e ] , and /o/ ; [ o ] were used as voice samples . each vowel was naturally phonated during approximately one - second three times . the voice data were processed on a personal computer ( nec mate ma30y , tokyo ) with a modified chaos analyzing program ( ver . 1.0.4 , cci corporation , fukuoka ) , which used the algorithm from sano and sawada . the first lyapunov exponent ( le1 ) was computed for each one second interval , while the interval was being shifted by 100 msec . 
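the processing pipeline described above ( one - second analysis windows shifted by 100 msec , delay embedding , and estimation of the first lyapunov exponent ) can be sketched in python . the sketch below substitutes a rosenstein - style divergence estimate for the sano - sawada algorithm used by the commercial program , so the numerical values would differ ; the file name , normalisation , and neighbourhood parameters are assumptions for illustration .

```python
# illustrative sketch only : a rosenstein - style divergence estimate is used
# here as a stand - in for the sano - sawada algorithm of the commercial
# program ; the file name and parameter values are assumptions .
import numpy as np
from scipy.io import wavfile


def delay_embed(x, dim, lag):
    """delay - embed a 1 - d signal into an ( n , dim ) trajectory matrix ."""
    n = len(x) - (dim - 1) * lag
    return np.column_stack([x[i * lag:i * lag + n] for i in range(dim)])


def largest_lyapunov(x, dim, lag, fs, min_tsep=1000, follow=300, ref_step=50):
    """rosenstein - style estimate of the largest lyapunov exponent ( nats / s )."""
    emb = delay_embed(np.asarray(x, dtype=float), dim, lag)
    n = len(emb)
    refs = np.arange(0, n - follow, ref_step)
    nn = np.empty(len(refs), dtype=int)
    for r, i in enumerate(refs):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[max(0, i - min_tsep):i + min_tsep + 1] = np.inf  # exclude temporal neighbours
        d[n - follow:] = np.inf                            # neighbour needs `follow` successors
        nn[r] = int(np.argmin(d))
    # mean log - distance between initially neighbouring trajectories over time
    divergence = []
    for k in range(1, follow):
        dist = np.linalg.norm(emb[refs + k] - emb[nn + k], axis=1)
        dist = dist[dist > 0]
        divergence.append(np.mean(np.log(dist)))
    slope = np.polyfit(np.arange(1, follow), divergence, 1)[0]  # nats per sample
    return slope * fs                                           # nats per second


if __name__ == "__main__":
    fs, signal = wavfile.read("vowel_a.wav")        # hypothetical mono recording at 44.1 khz
    signal = signal.astype(float)
    signal /= np.max(np.abs(signal))
    win, hop = fs, int(0.1 * fs)                    # 1 - second window , 100 - msec shift
    le1 = [largest_lyapunov(signal[s:s + win], dim=5, lag=15, fs=fs)
           for s in range(0, len(signal) - win + 1, hop)]
    print("mean le1 ( mle1 ) :", float(np.mean(le1)))
```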
the first zero - crossing points of autocorrelation were calculated for each vowel . as a result , the delay time was estimated at 15 , 32 , 27 , 22 , and 21 points ( 1 point = 1/44.1 msec . ) for the vowels /a/ , /i/ , /u/ , /e/ , and /o/ , respectively ( table 1 ) . the fractal dimensions were computed using the grassberger - procaccia algorithm , and convergence diagrams were then constructed for each vowel to determine the embedding dimensions . thus , the embedding dimensions were estimated at 5 for all vowels ( table 1 ) . the differences in the first zero - crossing points of autocorrelation , the estimated embedding dimensions , and the mean le1s ( mle1s ) between the two groups with versus without hypernasality were analyzed statistically using the mann - whitney u test . the statistical package spss ( ver.16.0 ) was utilized , and differences with p values of less than 0.05 were considered to be statistically significant . there were no significant differences in the first zero - crossing points or the estimated embedding dimensions between the patients with and without hypernasality for any vowel ( table 1 ) . the mle1 for /o/ in the patients with hypernasality was significantly higher than in patients without hypernasality ( p = 0.015 ) ( table 2 ) . the nss for /i/ , /u/ , /e/ , and /o/ in the patients with hypernasality were significantly higher than in patients without hypernasality ( table 3 ) . the correlation coefficients between the mle1 and ns for all vowels were not statistically different ( table 4 ) . although nasality can be evaluated using a spectral analysis of speech signals , voice acoustic measures of nasality are not universally used in clinical or empirical work because of ambiguity in the literature regarding the appropriate acoustic methodology , the amount of labor involved as compared with the nasometer , and so forth . however , vogel et al . demonstrated the potential for the wider application of acoustic investigation into nasality . several authors have described laryngeal disorders , including organic and functional disorders , in cleft palate speakers [ 16 , 17 ] . zajac and linville and lewis et al . reported that cleft palate speakers have higher frequency perturbations ( jitter ) than normal controls . nicollas et al . demonstrated that neither jitter nor shimmer significantly differed with age or gender . van lierde et al . reported a multiparameter approach to vocal quality but stated that the nature of the vocal quality and the voice range measurement differences cannot be explained from their study . therefore , we concluded that future studies on the voice of cleft palate subjects using nonlinear analysis may be beneficial in gaining further insight into the mechanics of phonation . to our knowledge , there have been no reports on the application of nonlinear dynamic analysis to cleft palate speech . our previous study demonstrated that the mle1 for /a/ , in both males and females with cp , is significantly higher than in normal resonance individuals and that the mle1 for /e/ in males with cp and for /o/ in females with cp are significantly higher than in normal resonance individuals . since the mle1 is a measure of the instability of the voice signal , these results suggest that the vocal fold vibration is less stable in cp speakers than in normal resonance subjects . in addition , the correlation coefficients between the mle1 and ns for all vowels were not statistically different in both normal and cp subjects . 
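two further steps described above , estimating the embedding delay from the first zero - crossing of the autocorrelation function and comparing the groups with the mann - whitney u test , are sketched below with scipy in place of spss . the paper does not state which correlation coefficient was used for the mle1 - ns analysis , so a rank correlation is shown only as one plausible choice ; all example values are hypothetical .

```python
# illustrative sketch only : scipy replaces spss , and all example values are
# hypothetical ; spearman 's rho for the mle1 - ns correlation is an assumption ,
# since the paper does not name the coefficient used .
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr


def first_zero_crossing(x):
    """lag ( in samples ) at which the autocorrelation first drops to <= 0 ."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    return int(np.argmax(acf <= 0))


if __name__ == "__main__":
    # delay estimate for a toy 220 hz tone sampled at 44.1 khz
    fs = 44100
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 220 * t)
    print("delay ( samples ) :", first_zero_crossing(tone))   # ~50 , a quarter period

    # hypothetical mle1 values for the two groups of six patients
    mle1_hyper = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5]
    mle1_no_hyper = [1.4, 1.7, 1.5, 1.9, 1.6, 1.8]
    u, p = mannwhitneyu(mle1_hyper, mle1_no_hyper, alternative="two-sided")
    print("mann - whitney u :", u, ", p =", round(p, 3))

    # hypothetical nasalance scores for the hypernasal group , correlated with mle1
    ns_hyper = [62, 71, 58, 75, 66, 70]
    rho, p_rho = spearmanr(mle1_hyper, ns_hyper)
    print("spearman rho :", round(rho, 2), ", p =", round(p_rho, 3))
```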
the mle1 for /o/ in the patients with hypernasality was significantly higher than in patients without hypernasality ; in other words , the voice signal of /o/ for the patients with hypernasality was more instable than in those without hypernasality . on the other hand , the correlation coefficients between the mle1 and ns for all vowels were not statistically different in patients both with versus without hypernasality . this supported the independence of chaotic phenomenon and nasal resonance in cleft palate speech and voice , which was demonstrated in our previous paper . nicollas et al . reported that the large le seems to decrease with age from their studies of children between 6 and 12 years of age . it was also suggested that the large le is lower in boys than in girls overall but varies for each age . in our present study , the boys and girls were not separated because of the small sample size . the voice signal of /o/ for the patients with hypernasality was more instable than in those without hypernasality .
objectives . to clarify the difference between lyapunov exponents ( les ) for cleft palate ( cp ) patients with hypernasality versus without hypernasality and to investigate the relationship between their les and nasalance scores ( nss ) . material and methods . six cp patients with severe hypernasality ( mean age 9.2 years ) and six cp patients without hypernasality ( mean age 8.0 years ) were enrolled . five japanese vowels were recorded at 44.1 khz , and the nss were measured simultaneously . the mean first le ( mle1 ) from all one - second intervals was computed . results . the mle1 for /o/ in patients with hypernasality was significantly higher than that in patients without hypernasality . the correlation coefficients between the mle1 and ns for all vowels were not statistically different . conclusion . the voice signal of /o/ for the patients with hypernasality was more instable than in those without hypernasality . the chaotic phenomenon was independent of nasal resonance in cp voice .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion 5. Conclusions
PMC2943184
the western denmark heart registry ( wdhr ) is a clinical database within a population - based health care system . to improve cardiac treatment quality , the danish national board of health decided in 1993 to increase the number of invasive cardiac interventions in denmark.1 in response to this initiative , the wdhr was founded on january 1 , 1999 as a collaborative effort by western denmark s three major cardiac centers ( aarhus university hospital - skejby , odense university hospital , and aarhus university hospital - aalborg ) in order to monitor the cardiovascular treatment quality in western denmark . the remaining cardiac centers in western denmark ( varde heart centre , region hospital viborg , region hospital herning , region hospital silkeborg , vejle hospital , haderslev hospital , aarhus hospital , svendborg hospital , and hospital of southwest denmark - esbjerg ) joined the registry later . the participating centers own the wdhr and finance its operation through annual membership fees set according to hospital size . the wdhr serves as a regional data source to the danish heart registry , which also contains data from eastern denmark and thus is responsible for the national monitoring of cardiac intervention quality.2,3 the wdhr , however , contains several data beyond what is delivered to the danish heart registry.2,3 thus , in addition to monitoring and improving the cardiac intervention quality in western denmark , the aim of collecting data to the wdhr is to allow for clinical and health - service research on the use of and outcomes from these procedures . in this study we examined the setting , organization , content , data quality , and research potential of the wdhr . western denmark has a population of 3.3 million ( 55% of the total danish population ; figure 1 ) . denmark provides an optimal environment for conducting medical database - based research because : ( i ) the danish national health service provides tax - supported universal health care , guaranteeing unfettered access to general practitioners and hospitals , and partial reimbursement for prescribed medications ; ( ii ) cardiac intervention in western denmark are performed only at participating cardiac centers ; ( iii ) all danish citizens can be tracked in the health care system and national registries using the unique ten digit central personal registry ( cpr ) number assigned to each danish citizen at birth and to residents upon immigration;4 and ( iv ) information on exposures , disease outcomes , and potential confounding factors can be ascertained through cpr linkage to other danish medical databases ( figure 2 ) , which store information on eg , citizen vital statistics since 1968 , including date of birth , change of address , date of emigration , and exact date of death ( the civil registration system),5 specific causes of death since 1943 ( the registry of causes of deaths),6 characteristics of all nonpsychiatric inpatient admissions since 1977 and all outpatient clinic visits since 1995 ( the national patient registry),7 prescribed medication since 1995 ( the nationwide prescription database),8 and all laboratory results from patient blood samples since 1997 ( the laboratory database).9 the organization behind the wdhr comprises a committee of representatives , a board , and a data management group . the committee of representatives consists of medical specialists from the cardiac centers and includes nine cardiologists , three cardiac surgeons , and three anesthesiologists . 
one member from each specialty group is selected for the representatives executive committee with voting rights on the board . the committee of representatives coordinates all database changes , participates in securing data quality , reports to the danish heart registry , and promotes future initiatives within the wdhr . in addition to the representatives executive committee , the board consists of one hospital management representative from each of the three major cardiac centers , among whom the chairman is chosen . the board provides oversight , maintains contracts with database suppliers , sets annual membership fees , defines the strategy and goals for the wdhr , and holds the responsibility for the budget and the data quality to the danish heart registry . the board appoints a data management group , which holds the responsibility for day - to - day management including implementing database changes , preparing annual reports , and daily communication between the committee of representatives , the board , and the database suppliers . the wdhr includes all adult ( 15 years ) patients in western denmark referred for cardiac intervention , ie , invasive procedures ( coronary angiography [ cag ] or percutaneous coronary intervention [ pci ] ) , cardiac surgery ( predominantly valve surgery and coronary artery bypass grafting [ cabg ] ) , and from 2008 also computed tomography ( ct ) cag . invasive cag is performed at all cardiac centers like ct cag , except at the region hospital silkeborg , aarhus hospital , and svendborg hospital . pci and cardiac surgery are performed only at the three major cardiac centers and the varde heart centre . during 2008 more than 23,000 procedures were performed.10 by january 2010 , the wdhr contained patient data on approximately 120,000 cags , 52,000 pcis , 26,000 cardiac operations including 17,000 cabgs , and 3,000 ct cags.10 the wdhr is derived from an internet - based online system , running on an encrypted public net . data are entered by the physicians into a computer - based data management system using the cpr numbers . one interface provides physicians with a visual overview of the variables to be filled in . thus , for each procedure , physicians report administrative data , including dates of referral , admission , operation , and discharge ; and clinical data , including medical history , procedure data , lesion data , complications , and research study enrollments ( tables 13 ) . 
depending on the procedure type , quantifiable variables have been selected as performance indicators for the quality of the health care efforts compared with prespecified standards set by the danish heart registry ( table 3).11,12 the purpose of the performance indicators is to : assess the actual care given and its quality in order to detect care and service processes needing improvement ( process indicators , [ pi ] ) ; assess whether treatment outcomes meet a desired level ( outcome indicators , [ oi ] ) ; maintain and improve quality of care ; and inform policy making or strategy at a regional and national level.12 the wdhr performance indicators are selected independently for the following interventions : cag : adverse reaction to contrast fluid ( oi , standard < 1% ) , arrhythmia during procedure ( oi , standard < 1% ) , and bleeding complications from arterial puncture ( oi , standard < 3% ) ; pci , in addition : acute cabg during procedure ( oi , standard < 0.5% ) , 30-day mortality ( oi , standard < 5% ) , and postintervention secondary prophylaxis with clopidogrel and statins ( pi , standard 95% ) ; and cardiac surgery : 30-day mortality ( oi , standard < 5% ) , central nervous lesion or acute myocardial infarction during hospitalization ( ois , standard < 5% ) , sternum infection ( oi , standard < 3% ) , reintervention due to bleeding or within 6 months ( ois , standard < 10% ) , transfusion ( pi , standard yet to be defined ) , and postintervention secondary prophylaxis with statins ( pi , standard 95% ) . furthermore , improvements in the quality of care are also ascertained through ways other than performance indicators . as an example , the scope of surgery among patients aged 8090 years has been expanded to include complex surgery with both valve replacement and cabg . this expansion has been justified through surveillance of outcome data by means of the wdhr . upgrades to the database platform have been performed in 2003 and 2006 . the next upgrade is scheduled for 2010 . to improve data quality , it is mandatory to fill in more than two - thirds of the variables . the data quality is confirmed by automatic validation rules at data entry ( eg , blood pressure levels are restricted within prespecified limits ) combined with systematic validation procedures ( through research projects and otherwise defined by the individual departments ) and random spot checks after entry ( through research projects and by the data management group ) . data are entered by the physicians at the time of procedure and late procedure complications may , therefore , be incompletely recorded in the wdhr . for example , stent thrombosis may be incompletely registered unless the patient lives to receive revascularization treatment in connection with angiography . however , data linkage to national registries using the cpr numbers provides complete patient follow - up and ascertainment of late complications such as reinfarction , stroke , or cause of death . 
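the comparison of observed indicator values with the prespecified standards listed above can be written down compactly ; the following python sketch uses invented observed percentages , and only the standards and their directions ( an upper ceiling for the outcome indicators , a lower floor for the prophylaxis indicator ) are taken from the text .

```python
# Sketch: comparing observed indicator values with prespecified standards.
# The observed percentages below are made-up illustration values.
indicators = [
    # (name, observed %, standard %, direction)
    ("PCI 30-day mortality (OI)",            3.1, 5.0,  "below"),
    ("CAG bleeding from arterial puncture",  2.2, 3.0,  "below"),
    ("Clopidogrel/statin prophylaxis (PI)", 96.4, 95.0, "at_or_above"),
]

for name, observed, standard, direction in indicators:
    if direction == "below":
        met = observed < standard
    else:
        met = observed >= standard
    status = "meets standard" if met else "needs improvement"
    print(f"{name}: {observed:.1f}% (standard {standard:.1f}%, {direction}) -> {status}")
```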
the proportion of registrations completed ( one minus the proportion of missing data ) is monitored at two levels : ( i ) procedure registration through independent ascertainment methods , in which the number of interventions registered in the wdhr is compared with that registered in the danish national patient registry.13 in 2008 it was 98% for cag , 98% for pci , 97% for valve surgery , and 98% for cabg;10 ( ii ) variable registration through historic data methods , in which the number of registered variables for each intervention is compared with the expected number calculated from the observed number of interventions.13 it is monitored and reported individually for the cardiac centers ( tables 13 ) . wdhr data are well suited for studying predictors for multiple outcomes following cardiac intervention , such as patient characteristics , comorbidity , medication use , and intra - interventional differences ( eg , different types of stents or anesthesia ) . furthermore , the wdhr is used as a platform for randomized controlled trials with clinical driven outcome detection . in addition to the inherent variables in the wdhr , a committee of cardiac specialists has , owing to research purposes,1418 added detailed information on stent thrombosis and cause of death . as defined by the academic research consortium , the specialist committee adjudicated the incidence of definite , probable , or possible stent thrombosis by retrieving medical records and reviewing catheterization angiograms . the committee also reviewed original paper death certificates ascertained from the national registry of causes of deaths6 to classify death according to the underlying cause as cardiac or noncardiac death . cardiac death was defined as an evident cardiac death , pci - related death , unwitnessed death , and death from unknown causes . thus , using these adjudicated outcomes from 12,395 patients undergoing pci with stent implantation , jensen et al14 concluded that the minor additional risk of stent thrombosis and myocardial infarction within 15 months after implantation of drug - eluting stents ( des ) compared with bare - metal stents ( bms ) was unlikely to outweigh the benefit of des in reducing clinically necessary target lesion revascularization.14 the reduction in target lesion revascularization was also confirmed for st - segment elevation myocardial infarction patients who were treated with primary pci19 and for diabetic patients.20 furthermore , comparing effectiveness of two types of des sirolimus - eluting stents ( ses ) and paclitaxel - eluting stents ( pes ) maeng et al17 showed that pes increased the risk of target lesion revascularization by 43% compared with ses.17 in addition , kaltoft et al16 concluded that within two years of follow - up , pes increased the risk of stent thrombosis , myocardial infarction , and 1-year mortality compared with bms and ses.16 cardiovascular outcomes have also been examined for other high risk patients with eg , spontaneous coronary artery dissection,21 or unprotected left main coronary artery stenosis treated with pci.15 obtaining information on all prescription medication through record linkage to the nationwide prescription database makes the wdhr a valuable source for pharmacoepidemiological cardiovascular research . 
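a minimal sketch of the procedure - registration completeness check described above , assuming purely hypothetical wdhr and national patient registry counts ( the actual 2008 figures are the 97 - 98% quoted in the text ) :

```python
# Sketch: procedure-registration completeness, computed as the number of
# procedures registered in the WDHR divided by the number found through an
# independent source (here the national patient registry). The counts are
# illustrative placeholders, not the actual 2008 figures.
wdhr_counts = {"CAG": 14500, "PCI": 5200, "valve surgery": 930, "CABG": 1650}
npr_counts  = {"CAG": 14800, "PCI": 5310, "valve surgery": 960, "CABG": 1680}

for procedure, registered in wdhr_counts.items():
    expected = npr_counts[procedure]
    completeness = registered / expected        # 1 - proportion of missing procedures
    print(f"{procedure}: {completeness:.1%} of expected procedures registered")
```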
use of nonselective nsaids and cox-2-selective enzyme inhibitors has been reported to increase cardiovascular risks in patients with coronary artery disease.22,23 schmidt et al18 examined whether this risk also related to patients undergoing coronary stent implantation , and found that overall there was no evidence to support such an association.18 in patients undergoing cardiac surgery , jakobsen et al24 investigated the cardioprotective effect of sevoflurane versus propofol anesthesia and found that sevoflurane seemed superior to propofol in patients with little or no ischemic heart disease , whereas propofol seemed superior in patients with severe ischemia , cardiovascular instability , or in acute or urgent surgery.24 in another study on drug effectiveness and safety during cardiac surgery , aprotinin treatment was found to increase the use of plasma and platelet transfusion and the risk for postoperative dialysis , but not other adverse outcomes , including short - term mortality.25 the wdhr is a valuable tool for clinical epidemiological research because it provides ongoing longitudinal registration of detailed patient and procedure data , which allows for research within invasive cardiology , cardiac surgery , anesthesia , and pharmacoepidemiology . the danish national health care system enables this research because it allows complete follow - up for medical events after cardiac intervention by linkage with multiple medical databases .
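as a rough illustration of the cpr - based record linkage that underlies this follow - up , the python sketch below joins a hypothetical procedure table to hypothetical prescription and vital - status extracts ; all table layouts , column names , and values are invented and do not reflect the actual wdhr or national - registry schemas .

```python
import pandas as pd

# Hypothetical extracts: WDHR procedures, redeemed prescriptions, and
# vital-status records, all keyed by a fictitious CPR number.
wdhr = pd.DataFrame({
    "cpr": ["0101701234", "0202651234"],
    "procedure": ["PCI", "PCI"],
    "procedure_date": pd.to_datetime(["2008-03-01", "2008-05-12"]),
})
prescriptions = pd.DataFrame({
    "cpr": ["0101701234", "0202651234"],
    "atc_code": ["C10AA05", "C10AA05"],          # statins
    "dispense_date": pd.to_datetime(["2008-03-05", "2009-01-02"]),
})
civil_registry = pd.DataFrame({
    "cpr": ["0101701234", "0202651234"],
    "death_date": pd.to_datetime([pd.NaT, "2008-05-20"]),
})

# Link every procedure to follow-up information through the CPR number.
linked = (wdhr.merge(prescriptions, on="cpr", how="left")
              .merge(civil_registry, on="cpr", how="left"))

# Derived follow-up variables: death and statin redemption within 30 days.
days_to_death = (linked["death_date"] - linked["procedure_date"]).dt.days
days_to_statin = (linked["dispense_date"] - linked["procedure_date"]).dt.days
linked["dead_30d"] = days_to_death.le(30)
linked["statin_30d"] = days_to_statin.between(0, 30)
print(linked[["cpr", "procedure", "dead_30d", "statin_30d"]])
```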
background : the western denmark heart registry ( wdhr ) has not previously been described as a research tool in clinical epidemiology.objectives:we examined the setting , organization , content , data quality , and research potential of the wdhr.method:we collected information from members of the wdhr organization , including the committee of representatives , the board , the data management group , and physicians reporting to the database . we retrieved 2008 data from the wdhr to illustrate database variables.results:the wdhr is a clinical database within a population - based health care system . it was launched on 1 january 1999 to monitor and improve the quality of cardiac intervention in western denmark ( population : 3.3 million ) and to allow for clinical and health - service research . more than 200,000 interventions , with 50150 variables each , have been registered . the data quality is ensured by automatic validation rules at data entry combined with systematic validation procedures and random spot - checks after entry.conclusions:the wdhr is a valuable research tool because it provides ongoing longitudinal registration of detailed patient and procedural data . the danish national health care system enables this research because it allows complete follow - up for medical events after cardiac intervention by linkage with multiple medical databases .
Introduction Setting Organization Study population Variables Treatment quality Data quality Research examples Conclusions
PMC4322330
various decisions are made on using technology at all levels of the health system in every country which usually include coordinating complicated medical issues with matters related to patients , organizational , economic and moral factors . also , providing appropriate inputs for health policy- makers which depend on interactions , work division and cooperation among the health experts , decision makers and practitioners is of prime importance . these decisions should be based on documented principles in which all the conditions and results of the decisions are systematically explained by scientific methods ( 1 , 2 ) . although the concept of health technology assessment ( hta ) is increasingly expanding in the industrialized world , particularly in europe , and it has also been institutionalized in northern america , it has not yet been fully institutionalized in developing and asian countries due to such factors as lack of awareness , lack of epidemiologic data and lack of a relationship between research efforts ( 3 ) . this study aimed to help the iranian health policy- makers to design and implement the hta program by investigating its current challenges in the country . this study was carried out in two phases : the study phase and the polling phase . in the first phase , the sources were investigated from the database of medline ( via pubmed ) from 2000 to november 2011 ; the scientific information database ( sid ) was also searched by the key term of health technology assessment up to 2011 in order to obtain the persian papers ; two persian papers were obtained . manual search was also done through contacting the informants as well as using the google search engine ; 3 papers were retrieved ; overall , 24 papers were collected . after studying the abstracts and eliminating irrelevant or repeated cases , 7 papers were finally selected . the second phase included the polling of informants , managers and experts of health technology assessment in iran . it should be noted that a minority of the participants were some members of the scientific committee of the health technology assessment ( 12 individuals ) who participated in this phase through a structured questionnaire designed by the authors for the purpose of data collection . all challenges extracted from the phase 1 were classified in a table , and the participants were asked to state their views based on the likert scale . in addition , they were asked to state the reason for their views and solutions for the challenges as well as other challenges not mentioned in the questionnaire ( in the form of open question based on the likert scale ) . data were analyzed by spss 16 software , and the scores given to views of the experts on the current challenges were prioritized ; and then their reasons and solutions were summarized . twenty- two hta challenges which were regarded as the most basic problems encountered by the health system s officials were specified from the 7 selected papers and were then used for designing the questionnaire and collecting data in the second phase . the findings of the second phase , which were collected through a semi - structured questionnaire and included the views of the experts on both hta and its specified challenges , suggest that the participants had relatively the same views on the mentioned challenges . 
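the second - phase analysis ( tabulating likert responses per challenge and ranking the challenges by the share of strongly agreeing informants ) can be sketched as follows ; the challenge names and response counts are invented placeholders rather than the study data .

```python
# Sketch of the second-phase tabulation: count Likert responses per challenge
# and rank challenges by the share of "strongly agree" answers.
from collections import Counter

responses = {
    "health system management":        ["strongly agree"] * 4 + ["agree"] * 2 + ["neutral"],
    "stewardship":                     ["strongly agree"] * 3 + ["agree"] * 3 + ["disagree"],
    "dependence on periodic meetings": ["strongly agree"] * 1 + ["agree"] * 2 + ["neutral"] * 4,
}

summary = []
for challenge, answers in responses.items():
    counts = Counter(answers)
    share_strong = counts["strongly agree"] / len(answers)
    summary.append((share_strong, challenge))

for share, challenge in sorted(summary, reverse=True):
    print(f"{challenge}: {share:.1%} strongly agree")
```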
[ figure : informants with strongly agreeing views on the mentioned challenges regarding health technology assessment ] [ figure : views of the experts about the challenges recognized by the hta office ] as demonstrated in the diagram , among the 22 mentioned challenges , the participants reached the highest consensus about the factors in the area of health system management ( 57.1% ) ; they also strongly agreed on important factors ( 42.85% ) such as stewardship , stakeholders , infrastructures , external pressures , lack of coordination at the policy making level and lack of a systematic structure for decision- making in the health system organization . however , only 14.28% of the participants strongly agreed on such challenges as the involvement of partner organizations with unrelated tasks , dependence of hta activities on periodic meetings , being passive to new technologies , lack of experts in the field , lack of feasibility studies in the process of hta application and dependence of hta on investors and importers . the department of health technology assessment , the main organization responsible for hta in iran , has introduced three main challenges in this field as follows : the lack of legal support for hta in the legal documentation of the national health system , insufficient resource allocation and the lack of academic courses for the involved experts ; the views of the experts are presented in the following diagram . the findings of this study imply that a high percentage of the expert participants had the same view on the three factors mentioned by the hta department . at the same time , they mentioned the following factors as the challenges affecting hta : lack of sufficient scientific capacity in the hta subject at universities , lack of interaction between public and private sectors regarding hta , conflict of interest , lack of a national organization for evidence quality control , limited access to databases , lack of a clear policy- making structure , lack of fundamental macro strategies for managing the health and education sector , weakness of intersectional organizations of the health system and lack of an integrated , perfect , evidence based structure in the health system of iran . considering the increasing importance of evidence based policy- making , the health system authorities and decision makers have focused on the production of scientific and comprehensive evidence . in recent decades , budget constraints and the need for the effective application of health technology have doubled the necessity for their assessment and prioritization . thus , by acknowledging the emphasis of the world health organization ( who ) on hta and its key role in the promotion of evidence based policy- making , many countries such as iran , which follow the health reform program , pay particular attention to hta , the selection and application of appropriate technology within the framework of managerial and political strategies , and attempt to develop hta through government support . in addition , the accurate identification of the current challenges and obstacles as well as their prioritization may lead to the advancement of the health system reforms . thus , studying the challenges of hta not only helps in planning the needed reforms , it can also bring about an efficient allocation of health technologies despite resource distribution .
for this purpose , such factors as health system management , stewardship , stakeholders , infrastructures , external pressures , lack of coordination at policy- making level , lack of a comprehensive system for decision- making in the health system , lack of legal support for hta in the legal documentation of the national health system , insufficient resource allocation and lack of planning for academic training of the experts involved in the program were identified as the main obstacles and have been considered in the priorities of the programs of the department of health technology assessment . although the mentioned challenges were classified into two groups , stewardship and management , we attempted to study the challenges separately regardless of their classification . in supporting the challenges investigated in this study , palesh ( 2010 ) introduced lack of need assessment in the process of technology application , lack of coordination at the policy making- level , lack of experts in the field and propaganda by importers and producers of the health technologies as the barriers to the application of the health technology assessment in iran ( 4 ) . however , sivalal stated that the lack of technical and expert forces is one of the main problems in the application of health technology assessment in asian countries including iran ( 3 ) . additionally , hosseini ( 2007 ) mentioned the followings as the main challenges of health technology assessment : lack of competence and expert labor force , constant specialized personnel shift from one field to another , lack of academic centers for training infrastructures of health policy making , health economics such as hta , lack of appropriate training of managers and experts in various sections , lack of experienced advisors in the field ( 5 ) . similarly , although south korea has initiated hta activities since 1990 , enabling human resources was one of the main challenges of the health system of this country in 2009(6 ) . although hta has not long entered into the health arena in iran , noticeable actions have been done by the department of health technology assessment , standardization and tariff department , main of which is the establishment of the hta major at the master level . tehran university of medical sciences in collaboration with the professors in this field trains many students annually in this major . moreover , the return of hta graduates and other related majors such as health policy- making from abroad universities has contributed to the strength and improvement of iran s health technology assessment program . however , considering the current need of the health system , the emphasis has continuously been on the fact that hta academic training can guarantee the application of this interdisciplinary knowledge and may improve the objectives of hta . hence , in order to increase the current knowledge of hta , the department of health technology assessment and standardization and tariff department have taken two essential steps including holding short - term hta courses and preparing training packages to accomplish consistent and structured training courses . the key role of the policy- makers in the establishment and institutionalization of hta is undeniable and should not be underestimated . 
as mentioned , lack of coordination at the policy- making level in the health system was put at the second rate of importance ( 42.85% ) , and inadequate understanding of the hta process by the policy- makers was placed at the third rate of importance . sivalal has noted that there is no longer a need for convincing the policy- makers and top level decision makers about the importance of hta in asian countries including iran ( 3 ) . however , hosseini ( 2007 ) stated that the followings are the main barriers and problems of hta : no serious attempt at higher levels of management , inadequate political support , lack of attitude and belief in the macro level and lack of understanding of the establishment needs in such a structure in senior executives ( 5 ) . palesh ( 2010 ) has also argued that although the policy- makers agree with hta , it seems that they do not have a perfect knowledge of it ( 4 ) . in a study by sampietro entitled as hta history in spain , it was revealed that the policy- makers poor awareness of hta prevents them from providing adequate and fixed resources for it ( 8) . in a study by pichon and colleagues entitled as facilitators and barriers for international collaboration in latin america , little government support and funding and limited resources were introduced as the main problems against hta ( 9 ) . ( 2009 ) found that the distribution and promotion of hta theories and methods in that country is under the influence of such challenges as coordination and cooperation of policy- makers , and they also maintained that although the policy- makers may have some knowledge of hta , they do not use it in practice ( 10 ) . investigating hta history in japan , hisachi noted that health policy- making in japan is based on the traditional consensus approach and that institutionalization in health policies is fully based on the individual s opinion and depends on the opinion of the health system s leaders . thus , it appears that the application and coordination of health technology assessment are considered two hta challenges among policy- makers in japan ( 11 ) . recently , it seems that health policy- makers in iran have gained a relatively positive attitude toward health technology assessment and have done the necessary efforts for its establishment ; and finally , it can be said that the formal structure of hta at the macro level has been established in the ministry of health and medical education . the second most important challenge investigated in this study was the lack of systematic and comprehensive evidence based system in the national health system . marzban ( 2007 ) found that health technology related policies at the health ministry are not the integrated parts of the overall health policies of the country ( iran ) . he defined a national health technology assessment program with the initiation of pre - evaluation activities at the university level and then considered the complete activities of the health technology assessment in a national agency . in addition , he acknowledged the necessity of observing national strategic programs and national health system priorities in allocating technologies to develop an effective pattern . he considered designing a model for health- centered organizations and their role in supplying and demanding the evaluation necessary for health technologies ( 12 ) . 
hamzekhanloo ( 2010 ) supported these findings and noted that such factors as the lack of systematic structure for health decision- making , dependency on investors and importers , dependency on periodic meetings and being passive to new technologies are the main barriers of application of hta ( 13 ) . therefore , necessary and important efforts have been done or are in the evolution stage in the department of health technology assessment and standardization and tariff department . projects in cooperation with medical universities of the country aiming at developing a local model for the structure and process of health technology assessment is among such efforts . in addition , designing a systematic structure for the cooperation of research centers in universities of medical sciences in the country was developed by the help of one of the university professors ( dr . yazdani ) in order to provide structured , consistent and purposeful relationship between universities and high rank authorities of the health system . in this structure , the method of health technology assessment codification and medical guidelines ( from the horizon scanning stage for health technologies to pre - assessment , quick review of health technology assessment and a complete technology review ) have been explained in detail which is now in its initial stages of implementation . therefore , relying on scientific and research centers and universities of the country has made it possible to produce a list of new technologies to utilize the country 's health technology assessment for knowledge production . it seems that management practices can eliminate and solve such problems as lack of integration with national macro policies , involvement of partner organizations with unrelated tasks , lack of hta activity in academic units , lack of a local model for the arrangement of related organizations , dependency of hta activities on periodic meetings , lack of a comprehensive and systematic structure of an effective evidence , dependency on the investors and importers and being passive to new technologies and the insufficient application of a comprehensive framework in the hta system . by considering the running programs and challenges of this field , the department of health technology assessment is going to promote and advance plans for health technology assessment in the national health system . it seems that by the appropriate use of hta horizon scanning , the department of health technology assessment has been able to collect relevant data and desired priorities to localize hta in an appropriate structure and manage it properly . in conclusion , the transformation of the results reported from the hta projects into policies will help produce practical knowledge product and will also improve evidence based policy- making . the support of the relevant legislators from hta and the macro statutes of the health system including the high council of health decisions and insurance supreme council legislations will warrant the production of scientific evidence on health technology assessment . also , empowering the hta experts and technical officers will improve the hta process in the health system of iran .
background : various decisions have been made on technology application at all levels of the health system in different countries around the world . health technology assessment is considered as one of the best scientific tools at the service of policy- makers . this study attempts to investigate the current challenges of iran s health technology assessment and provide appropriate strategies to establish and institutionalize this program . methods : this study was carried out in two independent phases . in the first , electronic databases such as medline ( via pub med ) and scientific information database ( sid ) were searched to provide a list of challenges of iran s health technology assessment . the views and opinions of the experts and practitioners on hta challenges were studied through a questionnaire in the second phase which was then analyzed by spss software version 16 . this has been an observational and analytical study with a thematic analysis . results : in the first phase , seven papers were retrieved ; from which , twenty- two hta challenges in iran were extracted by the researchers ; and they were used as the base for designing a structured questionnaire of the second phase . the views of the experts on the challenges of health technology assessment were categorized as follows : organizational culture , stewardship , stakeholders , health system management , infrastructures and external pressures which were mentioned in more than 60% of the cases and were also common in the views . conclusion : the identification and prioritization of hta challenges which were approved by those experts involved in the strategic planning of the department of health technology assessment will be a step forward in the promotion of an evidence- based policy- making and in the production of comprehensive scientific evidence .
Introduction Methods Results Discussion Conclusion
PMC4015848
the pas - gaf - phy and pas - gaf fragments from deinococcus radiodurans were expressed in the escherichia coli strain bl21 ( de3 ) and purified by affinity and size - exclusion chromatography . crystallographic data was collected at beamline id23 - 1 of the esrf ( see extended data table 1 ) . time - resolved x - ray scattering with millisecond time resolution were recorded at beamline csaxs of the swiss light source . saxs measurements were performed at beamline bm29 of the european synchrotron radiation facility ( esrf ) and analysed as summarised in extended data table 2a . time - resolved x - ray scattering data in the micro- and millisecond ranges were collected at beamline id-14-b , biocars , of the advanced photon source at argonne national laboratory . molecular dynamics simulations ( gromacs 4.5.5 ) were used to generate trial solution structures and theoretical scattering curves were evaluated using zernike expansion as implemented in sastbx . a , statistics of the static saxs ( bm29 ) data , including radii of gyration ( rg ) , maximum particle dimension ( dmax ) , porod volume ( vporod ) , forward scattering ( i0 ) . the molecular weights ( mmexp ) were estimated from the i0 of bsa using the formula m(sample ) m(bsa)*[i0(sample)/i0(bsa ) ] , where m(bsa ) = 66 kda . a - c , singular value decomposition ( s(q , t ) = u s v ) of time - resolved solution scattering data from pas - gaf - phy . two components suffice to describe the data , the final product ( n=1 ) and a transient low - q depression ( n=2 ) . a , the first three basis spectra ( 1 , 2 and 3 columns of u.s ) , and original x - ray scattering data ( black ) with reconstruction based on the first two singular values ( all columns of u.s.v red ) are shown . b , relative amplitudes of the two first basis spectra ( 1st and 2nd columns of v ) . d , the rise of the pfr product state as measured by direct integration of difference scattering curves ( < s(q ) > 1.2<=q<=2.5 ) and by absorption spectroscopy ( abs . at 754 nm ) . these data establish that the structural change occurs just after the pfr state is formed in the chromophore . note the positive signal in the absorption curve with very small amplitude at > 3 ms , which appears to decay while the structural signal rises . this could be because the absorption properties depend weakly on the large - scale rearrangement . e , direct static difference data from figure 1 , amplified by q to reveal wide - angle oscillations . extended data figure 2 | light - induced changes in the secondary structure of the evolutionally conserved phy tongue . a , secondary structure and topology of the deinococcus radiodurans pas - gaf - phy construct . the structural elements in our crystal structures are very similar to other published phytochrome structures . the phy tongue region ( box ) , however , was found to refold upon illumination . the five - stranded -sheet core of the gaf domain is extended by a small sixth -strand ( called 2 ) that interacts with the phy tongue ( see fig . the mini - sheet structure at the knot region is not included in the graph . b , omit map of the phy tongue in the dark crystal form ( upper panel ) and the illuminated crystal form ( lower panel ) . in the dark crystal form , the omit map density ( blue ) supports the built -turn secondary structure ( orange sticks ) , even though most of the side chains are poorly resolved . 
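the singular value decomposition of the time - resolved difference scattering matrix , s(q , t ) = u s v , can be reproduced in outline with numpy ; the matrix below is random placeholder data with the same layout ( rows indexed by q , columns by time delay ) , so only the decomposition and the rank - 2 reconstruction step are meaningful .

```python
# Sketch of the SVD analysis of a time-resolved scattering matrix S(q, t).
# The data matrix is a random placeholder, not the measured curves.
import numpy as np

rng = np.random.default_rng(0)
n_q, n_t = 400, 30
S_qt = rng.normal(size=(n_q, n_t))          # placeholder for S(q, t)

U, s, Vt = np.linalg.svd(S_qt, full_matrices=False)

# Basis spectra (columns of U scaled by singular values) and their time
# amplitudes (rows of Vt); keeping two components gives the rank-2
# reconstruction discussed above.
basis_spectra = U[:, :2] * s[:2]
amplitudes = Vt[:2, :]
rank2 = basis_spectra @ amplitudes
print("relative singular values:", s[:4] / s[0])
print("rank-2 reconstruction error:",
      np.linalg.norm(S_qt - rank2) / np.linalg.norm(S_qt))
```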
in the illuminated crystal form , the omit map ( blue ) clearly reveals the density of a helix with its bulky side chains ( orange ) . the omit maps were calculated by repeating molecular replacement and a refinement step ( see supplementary information ) with a structure where the phy tongue was removed . c , sequence alignment of part of the gaf domain and of phy loop region . the conserved dip motif in the gaf domain and prxsf motif in the phy tongue are marked by asterisks ( * ) . five representatives from eubacterial ( bphp ) , cyanobacterial ( cph ) , higher plant , fungi ( fph ) and pas - less phytochromes are shown . tomato t1 bphp , rhodopseudomonas palustris tie-1 bphp3 , pseudomonas aeruginosa pao1 bphp , agrobacterium fabrum str . pcc6803 syn - cph1 , microcystis aeruginosa nies-843 , nodularia spumigena ccy9414 , cyanothece sp . pcc 7822 , anabaena variabilis atcc 29413 , physcomitrella patens phy1 , zea mays phyb1 , populus trichocarpa phya , selaginella martensii phy1 , arabidopsis thaliana phya , synechococcus osb syb - cph1 , synechococcus osa sya - cph1 , nostoc punctiforme pcc73102 , lyngbya sp . extended data figure 3 | biliverdin structure and spectra in crystals . a , photographs of the crystals under cryogenic conditions at the beamline id23 - 1 . b , biliverdin omit maps of the dark ( upper left ) and illuminated ( upper right ) form support the existence of the modelled biliverdin conformations ( yellow and orange ) . comparison of the electron density around the biliverdin with published structures ( lower panels ) . in the dark form , the electron density indicates a conformation similar to the published pr structures , including a d. radiodurans structure ( 2o9c , cyan ) . however , the electron density supports neither the biliverdin as determined in the pfr structure of pabphp ( 3nhq , red ) , nor as determined in the pr structure of d. radiodurans ( 2o9c , cyan ) . therefore the rotation of the biliverdin d - ring can not be reliably determined and is modelled with both possibilities ( 15za , and 15ea , orange and yellow in upper right panel ) . omit maps were calculated as in extended data figure 2 and contoured at a sigma level of 3.0 . c , representative absorption spectra of the dark ( black ) and illuminated ( grey ) crystals , recorded at 123k . note that the terms illuminated and dark refer here to the crystallization conditions ( see supplementary information for details ) . illumination with red light in the crystallization drops at ambient temperature led to a slight increase of far - red absorption and disintegration of the crystals ( data not shown ) . the spectrum of the illuminated crystals shows that a substantial proportion ( > 50% ) of the proteins reside in pfr state . the illuminated crystals could be switched to pr - like absorption with far - red illumination . reversely , also the pfr - like features could be increased with red light ( data not shown ) with illumination at ambient temperature . exposure with light increased the scattering background in the absorption measurements . the crystals seemed unaffected by the illumination when illuminated with red light in the crystallization drops . although the spectral analyses of the illuminated crystals do not indicate a pure pfr spectrum , and the biliverdin conformation can not be fit unambiguously to the electron density , the remainder of the electron density is homogeneous ( extended data figure 2b ) . 
most importantly , the tongue region of the phy domain adopts the conformation resembling the pfr state of pabphp ( extended data figure 4b ) . the conformations of the four monomers in an asymmetric unit are practically identical and hence we conclude that biliverdin can co - exist in both pr and pfr states inside this crystal form and still the protein part represents the structural aspects of the pfr state only . a , comparison of the dark crystal form ( green / dark grey ) to cyanobacterial cph1 in pr state ( pdb code 2vea , orange / light grey ) . b , comparison of the illuminated crystal form ( green / dark grey ) to pabphp in pfr state ( pdb code 3nhq , orange / light grey ) . in both the pr and pfr forms , key interactions are conserved between the phytochromes ( black dashes ) , as well as the positions of three conserved tongue motives ( see extended data figure 2c ) . the residues of these three motives / a)g , prxsf , and ( w / f , y)xe , with numbering from the d. radiodurans sequence . trp451 was not modelled in our illuminated crystal structure , and part of the phy tongue has been removed for clarity . small changes in relative orientations between the difference crystal structures are observed , e.g. a slight tilt of helix of the pfr tongue . a , experimental saxs data of dark ( pr ) and pre - illuminated ( pfr ) samples . the data is merged from the concentration series ( extended data table 2b ) and normalized on 0.4 nm < q < 0.6 nm . b , guinier plot of the low - q region , shown for all concentrations . inset shows the radii of gyration ( rg ) calculated from the curves in ( a ) according to the guinier approximation . c , average difference scattering signals calculated from the solution - structural models using three methods : crysol ( default settings ) , sastbx with spherical harmonic expansion ( she , default settings ) , and sastbx with zernike polynomial expansion as described in supplementary information . the choice of calculation method does not significantly change the predicted x - ray difference scattering . d , determination of the relative pr / pfr populations represented by the bm29 data as described in supplementary information . we find that our pr sample contained only pr ( top ) whereas in the pfr sample 64% of the protein molecules adopted the pfr conformation ( bottom ) . 1 ) represent , up to a scaling factor , the relation between pure pr and pfr populations . this is in contrast to traditional saxs which report on population mixtures , because the pfr state can not be easily produced with 100% population in solution . crystal packing interactions of the ( a ) dark and ( b ) illuminated crystal forms . the dimer of an asymmetric unit is shown in red and the symmetry mates in grey . , crystal contacts are seen in the top regions of the phy domains and therefore may cause artefacts in the long scaffolding helix and in the opening of the phy domains . in the illuminated form contacts are such that the phy domains may be pushed closer together , which is consistent with the larger separation of the phy domains as refined from the solution x - ray scattering data . it is noteworthy that the relative orientation of the monomers in the dimer is different between all three known structures for pas - gaf - phy phytochromes . 
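a guinier analysis of the kind summarised above can be sketched as a linear fit of ln i(q ) against q^2 in the low - q region , with rg obtained from the slope ( slope = - rg^2 / 3 ) ; the curve below is synthetic , generated from an assumed rg , and is not the measured bm29 data .

```python
# Sketch of a Guinier fit: ln I(q) vs q^2 in the low-q region gives Rg and I0.
import numpy as np

rg_true, i0_true = 3.5, 1.0                       # nm, arbitrary units (assumed)
q = np.linspace(0.05, 0.6, 60)                    # nm^-1
i_q = i0_true * np.exp(-(q * rg_true) ** 2 / 3.0)

# Restrict the fit to the Guinier region (commonly q * Rg below ~1.3).
mask = q * rg_true < 1.3
slope, intercept = np.polyfit(q[mask] ** 2, np.log(i_q[mask]), 1)

rg_fit = np.sqrt(-3.0 * slope)
i0_fit = np.exp(intercept)
print(f"fitted Rg = {rg_fit:.2f} nm, forward scattering I0 = {i0_fit:.2f}")
```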
for pseudomonas aeruginosa the dimer is parallel with variations in between different copies of the dimer in the crystallographic unit cell , in two cyanobacterial phytochrome an antiparallel dimer is observed , and in our pr structure , the monomer have an angle of approximately 45. extended data figure 7 | solution - structural refinement . a , the distribution of phy domain separations ( rpp ) obtained from unbiased md simulations ( production runs 1 - 3 ) . 3 are indicated by crosses ( with n = 100 , m 610 ) . while the pr structures cluster in a region of high sampling , the pfr structures lie at the edges of the phy - phy distribution , suggesting inadequate sampling . to remedy this , we artificially scanned the phy domain separation in separate simulations ( production runs 4 and 5 ) to improve sampling . b , the new distribution of pulled phy domain separations in the pfr state . the final analysis and all solution - structural conclusions drawn in this study are based on the trajectories described in b. c - e , consistency test of the structural refinement procedure . c , a cutoff parameter rcut was introduced to reject all md frames rpp < rcut . the resulting average over rpp of the best n = 100 pairs is plotted as a function of rcut . it is found that rpp rcut , which indicates that the best fit to the difference x - ray scattering data is always at the highest separations available in sampling range . d and e show the dependence of the total and average error as a function of rcut , respectively . it is observed that the error decreases steeply for rcut 5 nm , and only marginally forrcut 5 nm . we therefore consider optimization in the latter range overfitting , and applied rcut = 5.0 nm in the refinement for the solution structures .
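the cutoff test used in the solution - structural refinement can be illustrated schematically : frames with a phy - phy separation below r_cut are rejected , the remaining calculated difference curves are scored against the experimental difference curve , and the best n frames are kept . all arrays in the sketch below are synthetic stand - ins , not the actual md trajectories or scattering data .

```python
# Schematic sketch of the r_cut consistency test described above.
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_q = 2000, 120
r_pp = rng.uniform(3.0, 8.0, size=n_frames)        # nm, per-frame PHY-PHY separation
calc_diff = rng.normal(size=(n_frames, n_q))       # calculated difference curves
target_diff = rng.normal(size=n_q)                 # experimental difference curve

def best_frames(r_cut, n_best=100):
    """Return indices and errors of the n_best frames with r_pp >= r_cut."""
    allowed = np.where(r_pp >= r_cut)[0]
    errors = np.sum((calc_diff[allowed] - target_diff) ** 2, axis=1)
    order = np.argsort(errors)[:n_best]
    return allowed[order], errors[order]

for r_cut in (3.0, 4.0, 5.0, 6.0):
    idx, err = best_frames(r_cut)
    print(f"r_cut = {r_cut:.1f} nm: mean r_pp of best frames = {r_pp[idx].mean():.2f} nm, "
          f"mean error = {err.mean():.3f}")
```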
sensory proteins must relay structural signals from the sensory site over large distances to regulatory output domains . phytochromes are a major family of red - light sensing kinases that control diverse cellular functions in plants , bacteria , and fungi.1 - 9 bacterial phytochromes consist of a photosensory core and a c - terminal regulatory domain.10,11 structures of photosensory cores are reported in the resting state12 - 18 and conformational responses to light activation have been proposed in the vicinity of the chromophore.19 - 23 however , the structure of the signalling state and the mechanism of downstream signal relay through the photosensory core remain elusive . here , we report crystal and solution structures of the resting and active states of the photosensory core of the bacteriophytochrome from deinococcus radiodurans . the structures reveal an open and closed form of the dimeric protein for the signalling and resting state , respectively . this nanometre scale rearrangement is controlled by refolding of an evolutionarily conserved tongue , which is in contact with the chromophore . the findings reveal an unusual mechanism where atomic scale conformational changes around the chromophore are first amplified into an ngstrm scale distance change in the tongue , and further grow into a nanometre scale conformational signal . the structural mechanism is a blueprint for understanding how the sensor proteins connect to the cellular signalling network .
METHODS SUMMARY Supplementary Material
PMC2896865
dna microarray technology , a powerful tool in functional genome studies , has yet to be widely accepted for extracting disease - relevant genes , diagnosis , and classification of human tumor [ 13 ] . generally , genes are ranked according to their differential expression by analysis of combination of normal and tumor samples , and genes above a predefined threshold are considered as candidate genes for the cancer being studied . however , in addition to the false - positive problem , the imbalance between the number of samples and genes may potentially degrade the classification accuracy , and it can lead to possible overfitting and the curse of dimensionality , or even to a complete failure in the analysis of microarray data . an efficient way to solve these problems is gene selection . in fact , a good gene - selection method that can identify key tumor - related genes is of vital importance for tumor classification and identification of diagnostic and prognostic signatures for predicting therapeutic responses [ 5 , 6 ] . identifying minimum gene subsets means discarding most noise and redundancy in the dataset to the utmost extent , resulting not only in improved classification accuracy but also in lower tumor diagnosis cost in clinical application , which is still a key challenge in gene expression profile- ( gep- ) based tumor classification . rough set theory has been successfully used in feature selection [ 7 , 8 ] . however , it is difficult to directly and effectively deal with the real - valued attributes of microarray datasets . dataset discretization is usually adopted to tackle the problem , but the pretreatment may lose some useful information . to combat this problem , hu et al . first presented the basic concepts of the neighborhood rough set ( nrs ) model and designed a novel feature selection method called forward attribute reduction based on neighborhood model ( farnem ) to select a minimal reduct , which avoided the preprocessing step of data discretization and hence decreased the information lost in pretreatment . but the reduct which satisfies the criteria of higher classification performance and fewer genes is not unique and is largely a matter of chance . obviously , it is not appropriate to use only one gene subset ( a reduct ) to train a classifier , which makes it necessary to select numerous minimal gene subsets with the highest or near - highest dependence on the training set to avoid the selection bias problem . breadth - first search ( bfs ) , a basic graph search algorithm that begins at the root node and explores all the neighboring nodes , was adopted to implement our goal of selecting any number of optimal and minimum gene subsets . however , for n genes there are 2^n possible gene subsets in total , so the computational complexity is too high . to circumvent these problems , we proposed a breadth - first heuristic search algorithm based on the neighborhood rough set ( hbfsnrs ) to select numerous gene subsets . the dependence function of nrs was selected as the heuristic information . to prioritize the numerous selected genes , we note that previous studies showed that significant class predictor genes , whose expression profile vectors show remarkable discrimination capability among samples of different classes of a specific cancer , may play a crucial role in the development of cancer . we hypothesized that the occurrence probability of genes in the final selected gene subsets may reflect their power for tumor classification and their significance to some extent .
to probe our hypothesis , the hbfsnrs method was also compared with four related methods : pam , clanc , kruskal - wallis rank sum test ( kwrst ) , and relief - f , to demonstrate its good performance , efficiency , and effectiveness in gene selection , prioritization and cancer classification . our proposed method is different from the traditional gene selection strategies : filters and wrappers . the filter methods are based mostly on selecting genes using a between - class separability criterion , and they do not use feedback information from predictor performance in the process of gene selection ; examples are relative entropy , information gain , kwrst , and the t - test . the wrapper methods select genes by using predictor performance as the criterion of gene subset selection , such as ga / svm and ga / knn . all of the microarray datasets , for training and test data alike , were normalized per gene by subtracting the minimum expression measurement and dividing by the difference between the maximal and minimum values of that gene . the expression levels for each gene were thus scaled on [ 0 , 1 ] . gene preselection can improve classification performance since it may reduce noise , and it is also a common procedure in most classification applications . all of the genes on the arrays of the training data were sorted according to kwrst , which is suitable for multiclass problems . in this study , the p top - ranking genes ( the initial informative gene set g * ) were used for finding minimum gene subsets for constructing the ensemble tumor classifier with hbfsnrs . generally speaking , more than 1% of genes in the human genome are involved in oncogenesis , so we set the number of selected top - ranked genes to p = 300 . the basic concepts of the neighborhood rough set ( nrs ) have been introduced by hu et al . . in our proposed algorithm , the dependence function of nrs was introduced to evaluate the goodness of the selected gene subsets . assume there are c subclasses of cancers , and let $D = \{d_1 , d_2 , \ldots , d_m\}$ denote the class labels of m samples , where $d_i = k$ indicates that sample i belongs to cancer k , $k = 1 , 2 , \ldots , c$ . let $S = \{s_1 , s_2 , \ldots , s_m\}$ be a set of samples and $G^* = \{g_1 , g_2 , \ldots , g_n\}$ be a set of genes ; the corresponding gene expression matrix can be represented as $X = (x_{ij})_{n \times m}$ , where $x_{ij}$ is the expression level of gene $g_i$ in sample $s_j$ , $i = 1 , 2 , \ldots , n$ , $j = 1 , 2 , \ldots , m$ , and usually $n \gg m$ . given an information system for classification learning $NDT = \langle S , G^* \cup D , V , f \rangle$ , where s is a nonempty sample set called the sample space , $G^*$ is a nonempty set of genes , also called condition attributes , that characterize the samples , d is the output variable called the decision attribute ( the class labels of the tumor samples ) , $V_a$ is the value domain of attribute $a \in G^* \cup D$ , and f is an information function $f : S \times ( G^* \cup D ) \rightarrow V$ with $V = \bigcup_{a \in G^* \cup D} V_a$ , a reduction is a minimal set of attributes $B \subseteq G^*$ . for all $s_i \in S$ and $B \subseteq G^*$ , the neighborhood $\delta_B(s_i)$ of $s_i$ in the subspace b is defined as ( 1 ) $\delta_B(s_i) = \{ s_j \mid s_j \in S ,\ \Delta_B(s_i , s_j) \leq \delta \}$ , where $\delta$ is the threshold and $\Delta_B(s_i , s_j)$ is the metric function in subspace b. there are three common metric functions that are widely used . let $s_1$ and $s_2$ be two samples in the n - dimensional space $G^* = \{g_1 , g_2 , \ldots , g_n\}$ , and let $f(s , g_i)$ denote the value $x_{is}$ of $g_i$ in the sample s. then the minkowsky distance is defined as ( 2 ) $\Delta_p(s_1 , s_2) = \left( \sum_{i=1}^{n} | f(s_1 , g_i) - f(s_2 , g_i) |^p \right)^{1/p}$ , where ( 1 ) if p = 1 , it is called the manhattan distance $\Delta_1$ ; ( 2 ) if p = 2 , it is called the euclidean distance $\Delta_2$ ; ( 3 ) if $p = \infty$ , it is called the chebychev distance $\Delta_\infty$ .
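the preprocessing and preselection steps described above ( per - gene min - max scaling to [ 0 , 1 ] followed by kruskal - wallis ranking and retention of the p top - ranked genes ) can be sketched in python ; the expression matrix and class labels below are random placeholders .

```python
# Sketch: per-gene min-max scaling, then Kruskal-Wallis ranking of genes.
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
n_samples, n_genes, p_keep = 60, 300, 30
X = rng.normal(size=(n_samples, n_genes))         # rows = samples, cols = genes
y = rng.integers(0, 3, size=n_samples)            # three tumour subclasses

# Min-max scaling per gene (per column).
X_min, X_max = X.min(axis=0), X.max(axis=0)
X_scaled = (X - X_min) / (X_max - X_min)

# Kruskal-Wallis statistic per gene across the class groups.
stats = np.array([
    kruskal(*[X_scaled[y == c, j] for c in np.unique(y)]).statistic
    for j in range(n_genes)
])
top_genes = np.argsort(stats)[::-1][:p_keep]      # indices of the p top-ranked genes
print("top-ranked gene indices:", top_genes[:10])
```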
here , we use the manhattan distance . suppose $X_1 , X_2 , \ldots , X_c$ are the sample subsets with decisions 1 to c , and $\delta_B(x_i)$ is the neighborhood information granule that includes $x_i$ and is generated by the gene subset $B \subseteq G^*$ ; then the lower and upper approximations of the decision d with respect to the gene subset b are , respectively , defined as ( 3 ) $\mathrm{Lower}_B(D) = \bigcup_{i=1}^{c} \mathrm{Lower}_B(X_i)$ , $\mathrm{Upper}_B(D) = \bigcup_{i=1}^{c} \mathrm{Upper}_B(X_i)$ , where $\mathrm{Lower}_B(X) = \{ x_i \mid \delta_B(x_i) \subseteq X ,\ x_i \in S \}$ is the lower approximation of the sample subset x with respect to the gene subset b , and is also called the positive region , denoted by $\mathrm{POS}_B(D)$ , which is the set of samples that can be classified into one of the classes without uncertainty using the gene subset b. $\mathrm{Upper}_B(X) = \{ x_i \mid \delta_B(x_i) \cap X \neq \emptyset ,\ x_i \in S \}$ denotes the upper approximation ; obviously $\mathrm{Upper}_B(D) = S$ . the decision boundary region of d with respect to b is defined as ( 4 ) $BN_B(D) = \mathrm{Upper}_B(D) - \mathrm{Lower}_B(D)$ . the neighborhood model divides the samples into two groups : the positive region and the boundary region . based on the neighborhood information , the samples in the boundary region can not be classified into any single class with certainty . the samples in different gene subset subspaces will have different boundary regions and positive regions . the size of the boundary region reflects the discriminability of the classification problem in the corresponding subspace . the greater the positive region is , the smaller the boundary region will be , and the stronger the characterizing power of the condition attributes will be . so we use the dependency degree of d on b to characterize the power of the selected gene subsets , which is defined as the ratio of consistent objects ( 5 ) $\gamma_B(D) = \mathrm{card}(\mathrm{POS}_B(D)) / \mathrm{card}(S)$ , where $\mathrm{card}(S)$ and $\mathrm{card}(\mathrm{POS}_B(D))$ denote the cardinal numbers of the sample set s and of $\mathrm{POS}_B(D)$ , respectively . if $\gamma_B(D) = 1$ we say that d depends totally on b , and if $\gamma_B(D) < 1$ , we say that d depends partially on b. here we define $\gamma_{\emptyset}(D) = 0$ , and our goal is to find the gene subset b for which $\gamma_B(D)$ is equal to the set value . informative gene selection involves evaluating the quality of the selected gene subsets and searching for good gene subsets quickly . here , the dependence function of nrs is used to measure the goodness of the selected gene subsets , and the computational cost problem is addressed as below . initially , let $RED = \{ \{g_1\} , \{g_2\} , \ldots , \{g_p\} \}$ be a set of gene subsets where each subset contains only one informative gene . then , for each $red_i \in RED$ , $red_i = \{g_i\}$ is expanded to $( p - 1 )$ subsets by adding a different gene $g_l \in \{ g_l \mid g_l \in G^* , g_l \notin red_i \}$ to $red_i$ , where we set $tempory_i = \{ \{g_i , g_1\} , \ldots , \{g_i , g_{i-1}\} , \{g_i , g_{i+1}\} , \ldots , \{g_i , g_p\} \}$ ; in total we will get $p \times ( p - 1 )$ subsets . among these subsets , we select the w top - ranked gene subsets by the dependence function , which need to be expanded in the next iteration to reconstruct the set $RED$ , and now each element of $RED$ has 2 genes . similarly , in the next search layer , each $red_x \in RED$ , $red_x = \{g_i , g_j\}$ , is extended to $( p - 2 )$ subsets , excluding the genes already listed in $red_x$ , where we set $tempory_x = \{ \{g_i , g_j , g_1\} , \ldots , \{g_i , g_j , g_{i-1}\} , \{g_i , g_j , g_{i+1}\} , \ldots , \{g_i , g_j , g_{j-1}\} , \{g_i , g_j , g_{j+1}\} , \ldots , \{g_i , g_j , g_p\} \}$ , $i < j$ , and we will get $w \times ( p - 2 )$ subsets . among these subsets , the w top - ranked gene subsets are selected to be expanded in the next layer in the same way . the search process continues in this manner until the stop criteria are met .
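a minimal sketch of the neighborhood dependence function defined in ( 5 ) , assuming a min - max scaled expression matrix and the manhattan distance ; the data are random placeholders , and the function simply counts the samples whose delta - neighborhood is pure in the class label .

```python
# Sketch of gamma_B(D): the fraction of samples whose delta-neighbourhood
# (Manhattan distance <= delta, restricted to gene subset B) contains only
# samples of the same class. Data are random placeholders.
import numpy as np

def dependence(X, y, subset, delta):
    """gamma_B(D) = |POS_B(D)| / |S| for the gene subset B (column indices)."""
    Xb = X[:, list(subset)]
    positive = 0
    for i in range(len(Xb)):
        dist = np.abs(Xb - Xb[i]).sum(axis=1)          # Manhattan distance
        neighbours = y[dist <= delta]                  # delta-neighbourhood of sample i
        if np.all(neighbours == y[i]):                 # neighbourhood inside one class
            positive += 1
    return positive / len(Xb)

rng = np.random.default_rng(0)
X = rng.random((60, 300))                              # scaled expression matrix
y = rng.integers(0, 3, size=60)
print("gamma for genes {0, 5, 17}:", dependence(X, y, [0, 5, 17], delta=0.2))
```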
in each layer , we expand to $w \times ( p - \mathrm{card}(red) )$ subsets and only the w top - ranked gene subsets are selected from the total subsets to reconstruct the set $RED$ , so that the search time does not increase exponentially with the search depth . here , $\mathrm{card}(red)$ denotes the number of genes in a gene subset . by virtue of the minimum construction idea , one technique for the best feature selection could be based on choosing minimal gene subsets that fully describe the classes of the tumor classification in a given data set . therefore , when the maximal dependence of the elements of $RED$ reaches the set value r_max ( e.g. , r_max = 0.9999 ) , when the increment between the maximal dependences of two adjacent search levels is less than a preset threshold ( e.g. , 0.0001 ) , or when the number of iterative steps is equal to the set value depth ( e.g. , depth = 20 ) , the searching process ends at that level . otherwise , we continue to search genes in this way until the stopping criteria are met . the dependence function of nrs is chosen as the objective function for evaluating the goodness of the selected gene subsets mainly because it is computationally fast , in that it does not use the feedback information of test data in the training process . to optimize the parameter $\delta$ in nrs that controls the size of the neighborhood , different values of $\delta$ from 0 to 1 with step 0.01 were tested by running forward attribute reduction based on neighborhood model ( farnem ) . the $\delta$ values were sorted according to the classification accuracy of a 3-knn classifier using the corresponding gene subset selected by farnem . but for the all dataset ( a multiclass dataset ) , the number of genes in the selected minimal and optimal reduct set reached 20 or even more for some of the top five $\delta$ values . considering that a large gene subset with an excessive number of genes may contain much noise and redundancy , which may bias and negatively influence the tumor classification and gene prioritization , we discarded such top - ranked $\delta$ values and reselected five top - ranked $\delta$ values that produced reduct sets with fewer than 20 genes . we adopted a 3-knn classifier to evaluate the classification performance of the selected gene subsets . to improve prediction accuracy and stability , an ensemble classifier was constructed on the basis of the selected gene subsets . for each $\delta$ , a simple majority voting strategy was applied to integrate the w individual classifiers that were constructed from the selected gene subsets obtained by hbfsnrs on the training set only . then , another ensemble classifier was built based on the above classification results for each $\delta$ value in a similar way . here , we hypothesized that genes with a higher occurrence frequency are more likely to be important , cancer - related genes . therefore , we count the occurrence frequency of each gene in all the selected gene subsets to measure its significance . but for a specific cancer , different $\delta$ values may select minimum gene subsets of different sizes . in this case , only counting the occurrence frequency is not appropriate for measuring the significance of genes . to avoid this selection bias , the significance of a gene is measured by its occurrence probability , which is defined as ( 6 ) $sig_j = \frac{1}{t} \sum_{i=1}^{t} \frac{f_{ij}}{n_i \, \omega}$ , where $f_{ij}$ is the occurrence frequency of gene j in all the gene subsets selected by hbfsnrs with $\delta_i$ ; t is the total number of neighborhood $\delta$ values ( we set t = 5 ) ; $n_i$ is the number of genes in a selected gene subset with $\delta_i$ ; and $\omega$ is the number of the final selected gene subsets by hbfsnrs ( we set $\omega$ = 500 ) .
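the layer - by - layer search can be sketched as follows : starting from single - gene subsets , every surviving subset is expanded by one unseen gene , only the w best - scoring subsets ( by the dependence function ) are kept , and the search stops when the dependence reaches r_max , the improvement between layers stalls , or the maximum depth is reached . parameter values and data below are illustrative only .

```python
# Sketch of a breadth-first heuristic search with a beam of w subsets per layer.
import numpy as np

def dependence(X, y, subset, delta):
    """Neighbourhood dependence gamma_B(D) of the decision on gene subset B."""
    Xb = X[:, list(subset)]
    consistent = 0
    for i in range(len(Xb)):
        neighbours = y[np.abs(Xb - Xb[i]).sum(axis=1) <= delta]
        consistent += np.all(neighbours == y[i])
    return consistent / len(Xb)

def hbfs_search(X, y, delta, w=50, r_max=0.9999, eps=1e-4, max_depth=20):
    """Keep the w best subsets per layer; stop on r_max, stalled gain, or depth."""
    n_genes = X.shape[1]
    red = sorted(((dependence(X, y, (g,), delta), (g,)) for g in range(n_genes)),
                 reverse=True)[:w]
    best_prev = red[0][0]
    for _ in range(2, max_depth + 1):
        scored = {}
        for _, subset in red:
            for g in range(n_genes):
                if g not in subset:
                    candidate = tuple(sorted(subset + (g,)))
                    if candidate not in scored:
                        scored[candidate] = dependence(X, y, candidate, delta)
        red = sorted(((s, c) for c, s in scored.items()), reverse=True)[:w]
        best = red[0][0]
        if best >= r_max or best - best_prev < eps:
            break
        best_prev = best
    return red                                   # surviving subsets with their scores

rng = np.random.default_rng(1)
X, y = rng.random((40, 30)), rng.integers(0, 2, size=40)
print("best subsets:", hbfs_search(X, y, delta=0.15, w=10, max_depth=4)[:3])
```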
in order to further investigate the significance of the selected genes , two main methods were used : ( 1 ) the selected genes were regarded as a predictor set or classification model ; ( 2 ) literature search and protein - protein interaction ( ppi ) network analysis . to evaluate the performance of the proposed method , seven gene expression datasets were used in this study : acute lymphoblastic leukemia ( all ) , breast cancer 30 ( gse5764 ) , breast cancer 22 ( gse8977 ) , colon cancer , prostate cancer 102 , and prostate cancer 34 . the two pairs of cross - platform datasets were used to evaluate the generalization performance of our cross - platform classification model . the datasets of breast cancer , colon cancer , and prostate cancer are two - class classification systems that contain normal and tumor samples . the all dataset contains six subtypes : bcr - abl , e2a - pbx1 , hyperdip>50 , mll , t - all , and tel - aml1 . for the breast - cancer datasets , there are too many ( 54675 ) affymetrix probe identifiers , so the raw data were processed as follows : each affymetrix probe identifier was converted to an entrez identifier , and when multiple probes corresponded to the same entrez id , we averaged over these probe intensities . to avoid overfitting and improve classification accuracy and stability , an ensemble classifier was constructed on the basis of the selected gene subsets . we observed that the final integrated results ( table 2 ) were not satisfactory and that no higher classification accuracy was obtained compared with some individual classifiers . the main reason may be that our method used all the selected gene subsets as the classification model , which contain many redundant and tumor - unrelated genes . figure 2 shows the classification accuracy with different numbers of top - ranked genes sorted according to the significance of genes defined in ( 6 ) , from which we found that only a few top - ranked genes were enough to obtain higher classification accuracy . meanwhile , when more genes were used as the predictor set , there was only a small increase , or even a decrease , in the classification performance . therefore , we inferred that too many selected genes involve much more redundancy and irrelevancy , which degrades the classification accuracy . in order to demonstrate the effectiveness of hbfsnrs , we compared the accuracy of our approach with other common filter methods including the t - test , information gain , kwrst , and relief - f . the experimental results indicate that our method is significantly superior to the t - test and information gain , and slightly outperforms kwrst and relief - f in terms of tumor classification . for simplicity , we only present the kwrst and relief - f results here ( figure 2 ) . we found that only a few top - ranked genes could achieve higher accuracy in the classification of tumor samples of different classes by our proposed search algorithm . for the all dataset , the prediction accuracy of hbfsnrs is superior to the other methods despite the much smaller number of genes used in cancer classification . for the breast - cancer dataset , using one active gene gave a test accuracy of 22.73% with relief - f and 63.64% with kwrst , whereas 100% test accuracy was obtained using one gene with the proposed hbfsnrs method . for the colon - cancer dataset , using one and six active genes gave prediction accuracies of 80% and 85% with our method , 65% and 70% with relief - f , and 65% and 75% with kwrst , respectively .
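the majority - voting ensemble of 3 - knn classifiers described above can be sketched as follows . this is an illustrative reconstruction rather than the study 's code ; the random data , the train / test split and the example gene subsets are placeholders .

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def vote_ensemble(X_train, y_train, X_test, gene_subsets, k=3):
    """Majority vote over one k-NN classifier per selected gene subset."""
    votes = []
    for subset in gene_subsets:
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(X_train[:, subset], y_train)        # train only on this subset
        votes.append(clf.predict(X_test[:, subset]))
    votes = np.array(votes)                          # shape (n_subsets, n_test)
    preds = []
    for col in votes.T:                              # majority vote per test sample
        vals, counts = np.unique(col, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)

# hypothetical usage with random data: 40 training / 10 test samples, 50 genes
rng = np.random.default_rng(0)
X_train = rng.normal(size=(40, 50))
y_train = rng.integers(0, 2, 40)
X_test = rng.normal(size=(10, 50))
gene_subsets = [[0, 3], [7, 12, 20], [5, 9]]         # e.g. subsets returned by hbfsnrs
print(vote_ensemble(X_train, y_train, X_test, gene_subsets))
```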
for the prostate - cancer dataset , when using more than ten genes for tumor classification , kwrst significantly outperformed our method and relief - f , but our method performs as well as kwrst when using only a few top - ranked genes ( both our method and kwrst achieve 97.06% accuracy using one gene ) . moreover , we compared our method with the statistical methods pam and clanc . pam , a statistical technique for class prediction from gene expression data that uses nearest shrunken centroids , was used to identify class predictor genes . clanc ranks genes by standard t - statistics ; it does not shrink centroids and uses a class - specific gene selection procedure . in our context , clanc slightly outperformed pam , so we only present the comparison with clanc here ( table 3 ) . in comparison with clanc , our method could obtain higher classification accuracy when using a few top - ranked genes . the one - gene model of our method provides classification accuracies of 100% , 80% , and 97.06% for the breast - cancer , colon - cancer , and prostate - cancer datasets , respectively , whereas clanc requires more genes to reach the same accuracy . on the all dataset , the test accuracies on the independent test dataset are 87% with six genes , 94% with 12 genes , and 97% with 18 genes by our method . using the same six , 12 , and 18 active genes , clanc gave test accuracies of 86% , 95% , and 97% , respectively , which indicates that our method was comparable on the all dataset . furthermore , the minimum number of genes with the highest accuracy can be obtained in the classification process by hbfsnrs . in addition , the results show that our method is clearly better than clanc on the colon - cancer and breast - cancer cross - platform datasets . we propose that these few genes , whose expression profile vectors show remarkable discrimination capability , may be closely correlated with cancer and could be seen as possible disease signatures . mining genes that give rise to oncogenesis is one of the key challenges in the area of cancer research . biologically , the experimental results showed that the selected genes with high classification accuracy are functionally related to carcinogenesis or tumor histogenesis , so we can infer that the few top - ranked genes may be very important for tumor diagnosis . the 10 top - ranked genes according to the sig score for each tumor , which were regarded as the candidate cancer genes , are listed in table 4 . to demonstrate our method 's ability to uncover known cancer genes and predict novel cancer biomarkers , we downloaded a list of 25 breast cancer biomarkers that have been annotated in the omim database . unfortunately , the dataset we used ( the 300 top - ranked genes selected by kwrst ) does not include the 25 known breast cancer genes , and therefore our method can not be evaluated with it in terms of uncovering known cancer genes . from another point of view , this verifies that higher differential expression of a gene does not necessarily reflect a greater likelihood of the gene being related to cancer ; in other words , important genes are not necessarily differentially expressed . nevertheless , it is undeniable that genes with higher differential expression are important in cancer diagnosis and development . next , a literature search was used to check whether our method can predict novel cancer biomarkers . among the top 10 genes ranked by ( 6 ) for breast cancer , we found that these genes play an important role in the occurrence of breast cancer .
the collagen triple helix repeat containing 1 gene ( cthrc1 ) , ranked first , shows aberrant expression in many human solid cancers including breast cancer , and this seems to be associated with cancer tissue invasion and metastasis . the pdz and lim domain protein 4 ( pdlim4 ) , ranked second , was frequently methylated in breast cancers but not in normal breast tissues . the keratin , type i cytoskeletal 17 ( krt17 ) , ranked third , was specifically overexpressed in basal - like subtypes of breast cancer . the secreted frizzled - related protein 1 ( sfrp1 ) , ranked fourth , was recently found to be associated with progression and poor prognosis in early stage breast cancer . the collagen alpha-1 ( iii ) chain ( col3a1 ) , ranked fifth , was up - regulated in both invasive ductal and lobular carcinoma cells when compared with normal ductal and lobular cells . the peptidase inhibitor 15 ( pi15 ) , ranked sixth , was also differentially expressed , but it was down - regulated in lobular and ductal invasive breast carcinomas . the actin gamma - enteric smooth muscle ( actg2 ) , ranked seventh , is involved in the architecture and remodeling of the cytoskeleton in basal medullary breast cancer . the tissue factor pathway inhibitor 2 ( tfpi2 ) , ranked eighth , shows aberrant hypermethylation of its gene promoter that was associated with metastasis in breast cancer . the serpin b5 ( serpinb5 ) , ranked ninth , an epithelial - specific serine protease inhibitor , was a biomarker in disseminated breast - cancer cells . the fibronectin 1 ( fn1 ) , ranked tenth , was recently suggested to be associated with the prognosis of patients with breast cancers . finally , we examined the gene pathways involving the 10 top - ranked genes . the analysis was carried out using software that can help researchers to better understand the biological phenomenon under study by pointing out significant cellular functions of the selected genes from the webpage . the results indicate that the pathways in which the 10 top - ranked genes are involved are ecm - receptor interaction ( col3a1 , fn1 ) , focal adhesion ( col3a1 , fn1 ) , vibrio cholerae infection ( actg2 ) , the p53 signaling pathway ( serpinb5 ) , small cell lung cancer ( fn1 ) , the wnt signaling pathway ( sfrp1 ) , regulation of the actin cytoskeleton ( fn1 ) , and pathways in cancer ( fn1 ) , which agree well with current knowledge on breast cancer . thus it can be seen that the selected genes , which are closely related to adhesion , motility , and metastasis , may provide new insights into the underlying molecular mechanisms related to disease development , into designing therapy , and into prognostication for patients with breast carcinoma . thus , the analysis of existing biological experimental results for the breast - cancer dataset illustrates well that our method has great power for identifying tumor - related genes . furthermore , another case study , for the prostate - cancer dataset , is presented here . among the 10 top - ranked genes , six ( hpn , maf , gstp1 , wwc1 , junb , and rnd3 ) have been reported to be associated with prostate cancer . hepsin ( hpn ) , ranked first , is a cell surface serine protease that is markedly up - regulated in human prostate cancer ; its overexpression in prostate epithelium in vivo causes disorganization of the basement membrane and promotes primary prostate cancer progression and metastasis to liver , lung , and bone .
the transcription factor maf , ranked second , was down - regulated in the tumors relative to normal prostate tissue and may be regarded as a candidate tumor suppressor gene . glutathione s - transferase p ( gstp1 ) , ranked fourth , shows cpg island hypermethylation that is the most common somatic genome alteration described for human prostate cancer . the gene wwc1 , ranked sixth , was found to interact with histone h3 via its glutamic acid - rich region , and such interaction might play a mechanistic role in conferring an optimal er transactivation function as well as in the proliferation of ligand - stimulated breast - cancer cells . the transcription factor jun - b ( junb ) , ranked seventh , is an essential upstream regulator of p16 and contributes to maintaining the cell senescence that blocks malignant transformation of tac . junb thus apparently plays an important role in controlling prostate carcinogenesis and may be a new target for cancer prevention and therapy . the rho - related gtp - binding protein rhoe ( rnd3 ) , ranked ninth , a recently described novel member of the rho gtpase family , was regarded as a possible antagonist of the rhoa protein , which stimulates cell cycle progression and is overexpressed in prostate cancer . genes related to a specific or similar disease phenotype tend to be located in a specific neighborhood of the protein - protein interaction network , and a protein is likely to be coexpressed with its interaction partners and with proteins that have a similar function . here , we applied a protein - network - based method to analyze the effect of neighborhood partners on the selected genes using all interactions in the human protein reference database . figure 3 shows the protein - interaction network for each top - ranked gene of prostate cancer ( kiaa0430 has no interaction partners in hprd ) . the red - ellipse nodes represent the 10 top - ranked genes ranked by the sig score in ( 6 ) , among which those with an asterisk are known cancer genes . the diamond nodes indicate the direct interaction partners of the selected genes that are not cancer genes , and the blue - octagon nodes show those partners identified as known cancer genes , which were collected by querying the memorial sloan kettering computational biology website and lists of oncogenes and tumor suppressors [ 4 , 44 ] . among the 10 top - ranked genes for the prostate - cancer dataset ( figure 3 ) , 6 genes ( abl1 , junb , maf , p4hb , gstp1 , and rnd3 ) , listed with an asterisk , have been identified as known cancer genes . here , we mainly discuss the three genes p4hb , pex3 , and abl1 , for which we did not find reports of an association with prostate cancer . of these three genes , p4hb and abl1 are known cancer genes . pex3 is also a well - known disease gene , being a cause of peroxisome biogenesis disorder , complementation group 12 , and zellweger syndrome . it can be seen that mutations in these genes can lead to many diseases and may have a close relationship with prostate cancer . in this sense , our method is effective for cancer - related gene selection . previous work suggests that the cancer linker degree ( cld ) of a protein , defined as the number of cancer genes to which a gene is connected , is a good indicator of the probability of being a cancer gene . we analyzed the cancer linker degree ( cld ) of the 10 top - ranked genes on each of the four datasets .
for prostate cancer , as shown in figure 4 , most of the top - ranked genes , excluding the gene pex3 , have a direct interaction with known cancer genes , and the clds of abl1 , junb , wwc1 , maf , p4hb , gstp1 , hpn , and rnd3 are 46 , 13 , 2 , 6 , 7 , 1 , 1 , and 1 , respectively . among the 10 top - ranked genes of all ( tcfl5 and lrmp have no interaction partners in hprd ) , smarca4 , dntt , and nono are known cancer genes , and the clds of smarca4 , dntt , nono , cd72 , mpp1 , and cd99 are 19 , 3 , 6 , 1 , 2 , and 2 , respectively . for breast cancer , cthrc1 , pi15 , and serpinb5 have no interaction partners in hprd . among the remaining 7 of the 10 top - ranked genes , sfrp1 and tfpi2 are known cancer genes , and sfrp1 , tfpi2 , fn1 , col3a1 , and krt17 have direct interactions with known cancer genes , with clds of 2 , 1 , 17 , 2 , and 1 , respectively . for colon cancer , fuca1 has no interaction partners in hprd . among the remaining 9 genes , myh9 is a known cancer gene , and the clds of des , myh9 , c3 , and 2-sep are 4 , 3 , 1 , and 1 , respectively . these results show that , besides a few selected genes that typically correspond to known specific cancer mutations , a considerable portion of the top - ranked genes have many direct interactions with cancer genes , which suggests that these genes are very likely to be involved in cancer and may play a central role in the protein network by interconnecting many known cancer genes ; thus the top - ranked genes can be regarded as reliable disease biomarkers . an ongoing challenge is to identify new prognostic markers that are directly related to disease and that can more accurately predict the likelihood of cancer in unknown samples . the results indicate that our proposed method of gene selection by hbfsnrs has the following advantages in trying to tackle this challenge . ( 1 ) our method can obtain the highest or near - highest prediction accuracy of tumor classification with a minimum gene subset . ( 2 ) lists of ranked potential candidate cancer biomarkers for a specific cancer are produced by our approach . ( 3 ) our proposed method can obtain many optimal gene subsets in a short period of time , which is essential to the whole search process . ( 4 ) compared with the other gene - ranking methods kwrst and relief - f , our method is relatively stable and involves little randomness . the success of our method , gene selection by hbfsnrs , can be attributed to a combination of several aspects . first , we adopted the dependence function of nrs to evaluate the goodness of the selected gene subsets . there are two main advantages to this choice : it saves time , and it allows tumor classification without feedback or leaked information from the test dataset . second and more importantly , the gene search process designed in our method can select any number of optimal gene subsets in a comparatively short time , as an optimization of best - first search . finally , the selection of the neighborhood value in the evaluation of gene subsets has the problem that genes will have different ranked positions or relevance to cancer under different neighborhood values . to avoid this selection bias , we defined the sig score to describe the significance of genes by combining the five groups of results obtained with each neighborhood value . we presented two case studies on breast cancer and prostate cancer to illustrate the power of our method to identify tumor - related genes .
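the cancer linker degree used above is simple to compute once an interaction list and a set of known cancer genes are available . the sketch below is a generic illustration ; the mini - network and the cancer - gene set are invented for the example and do not reproduce the hprd data .

```python
def cancer_linker_degree(interactions, known_cancer_genes):
    """CLD of each gene: number of known cancer genes it directly interacts with.

    interactions : iterable of (gene_a, gene_b) undirected interaction pairs
                   (e.g. parsed from a PPI resource such as HPRD)
    known_cancer_genes : set of gene symbols annotated as cancer genes
    """
    neighbors = {}
    for a, b in interactions:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    return {g: len(nbrs & known_cancer_genes) for g, nbrs in neighbors.items()}

# hypothetical mini-network (not the real HPRD data)
ppi = [("hpn", "abl1"), ("hpn", "gene_x"), ("junb", "abl1"),
       ("junb", "tp53"), ("pex3", "gene_y")]
cancer_genes = {"abl1", "tp53"}
cld = cancer_linker_degree(ppi, cancer_genes)
print(cld["hpn"], cld["junb"], cld["pex3"])   # -> 1 2 0
```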
one limitation of our approach is data quality : current high - throughput technologies remain error prone and may be far from complete . in a recent paper , zhang et al . held that the integration of microarray data gives us more analytical power and reduces the false discovery rate . given a specific cancer , efficient ways to integrate multiple independent microarray datasets may therefore be a good way to solve the issue of data quality . the other limitation is the optimization of the threshold value of the neighborhood rough set . on one hand , we used the neighborhood rough set reduction method to evaluate the goodness of the selected gene subsets in order to save time in tumor classification without using the feedback information of the test dataset . on the other hand , the threshold selection itself is obtained through the feedback information of the test set . in addition , different neighborhood values may select different gene subsets ; hence genes will have different positions in the gene prioritization under different neighborhood values , so the selection of the neighborhood value becomes more critical for gene prioritization . fortunately , the choice of the neighborhood value is not so important for gene ranking , because the change of gene positions across different values is not significant . in our study , spearman 's rank correlation coefficient was used to determine whether there is consistency between the results of gene prioritization obtained with different neighborhood values . although our proposed hbfsnrs method has improved the performance of microarray - based tumor classification and has identified and prioritized lists of potential tumor - related genes from gep , our future work will benefit further from integrating other sources . recent high - throughput technologies have produced vast amounts of protein - protein interaction data , which represent valuable resources for candidate - gene prioritization and give us new insights into the mechanisms of disease . a great number of studies have shown that the integration of multiple sources of data is more reliable for predicting cancer genes than the use of a single criterion [ 4 , 46 - 48 ] . thus , integrating gep and protein interaction networks is an efficient approach for gene prioritization . although gene expression data and protein interaction data have been integrated for gene prioritization [ 49 , 50 ] , the results are not yet satisfactory .
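the consistency check with spearman 's rank correlation coefficient mentioned above can be sketched as follows ; the example sig scores are hypothetical and only illustrate the computation .

```python
from scipy.stats import spearmanr

def ranking_consistency(sig_by_value):
    """Pairwise Spearman correlation between gene prioritizations obtained
    with different neighborhood values.

    sig_by_value : list of dicts {gene: sig score}, one per neighborhood value
    """
    genes = sorted(set().union(*[d.keys() for d in sig_by_value]))
    results = []
    for i in range(len(sig_by_value)):
        for j in range(i + 1, len(sig_by_value)):
            a = [sig_by_value[i].get(g, 0.0) for g in genes]
            b = [sig_by_value[j].get(g, 0.0) for g in genes]
            rho, p = spearmanr(a, b)
            results.append((i, j, rho, p))
    return results

# hypothetical sig scores under two neighborhood values
scores = [{"g1": 0.9, "g2": 0.5, "g3": 0.1, "g4": 0.05},
          {"g1": 0.8, "g2": 0.6, "g3": 0.2, "g4": 0.02}]
print(ranking_consistency(scores))   # high rho indicates consistent rankings
```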
selection of reliable cancer biomarkers is crucial for gene expression profile - based precise diagnosis of cancer type and successful treatment . however , current studies are confronted with overfitting and the curse of dimensionality in tumor classification and with false positives in the identification of cancer biomarkers . here , we developed a novel gene - ranking method based on neighborhood rough set reduction for molecular cancer classification based on gene expression profiles . in comparison with other methods such as pam , clanc , the kruskal - wallis rank sum test , and relief - f , our method shows that only a few top - ranked genes can achieve higher tumor classification accuracy . moreover , although the selected genes are not typically known oncogenes , searching the scientific literature and analyzing protein interaction partners show that they play a crucial role in the occurrence of tumors , and they may be used as candidate cancer biomarkers .
1. Introduction 2. Materials and Methods 3. Results 4. Discussions and Conclusions
PMC4961830
chitin dissolves quickly in concentrated acids and in some fluoroalcohols , yet its reactivity is low . other relevant properties of this biopolymer are its high molecular weight and porous structure , which favors high water absorption [ 1 , 2 ] . on the other hand , in its natural sources chitin occurs in association with proteins , and these associations act as a matrix that interacts with other constituents such as phenolic tannins in insects and minerals in the carapaces of crustaceans [ 3 , 4 ] . deacetylation is the nonenzymatic process whereby chitosan is obtained from chitin by removing the acetyl groups of the r - nhcoch3 residues , treating the polymer with strong alkali at high temperatures . when the degree of deacetylation is greater than 50% , the biopolymer becomes soluble in acidic aqueous solutions and behaves as a cationic polyelectrolyte due to the protonation of amine groups in the presence of h+ ions [ 5 , 6 ] . enzymatic processes , however , are not used on an industrial scale owing to the high commercial cost of the enzymes ( deacetylases ) and their low productivity , while nonenzymatic chemical processes are widely used because their cost is low and the processes are efficient [ 7 , 8 ] . the development of new applications for chitosan is strongly based on the fact that this polymer can be obtained from renewable sources such as fisheries ; it is nontoxic , nonallergenic , and biodegradable , and it presents antimicrobial activity . studies with planktonic crustaceans such as daphnia longispina resting eggs indicate that these crustaceans can be exploited as a source of chitin due to their high chitin content ( 23~25% ) . leptinotarsa decemlineata , also known as the colorado potato beetle , is a major pest of potato crops . the adult and larva have 20% and 7% chitin content , respectively ; however , the chitin from adult colorado potato beetles had a more stable structure than that from the larvae . investigation has indicated that the adult potato beetle is more appropriate as a chitin source , both because of its chitin and chitosan content and because of its antimicrobial and antioxidant activities . studies on potential chitin sources in insects of the order orthoptera found chitin contents of 20.5 ± 0.7% for calliptamus barbarus and 16.5 ± 0.7% for oedaleus decorus , the yield of chitosan being 74 - 76% , with a deacetylation degree of 70 - 75% . these insects showed potential as alternative sources of chitin and chitosan for the food / animal feed industry on account of their antimicrobial and antioxidant properties . among the most common applications of chitin and chitosan are their use as complexing materials for metal ions , as edible coatings with antifungal and bactericidal action [ 13 - 15 ] , and as a basic element for making controlled drug delivery matrices . thus , the objective of this study was to investigate the efficiency of different methodologies to obtain chitosan from the waste of litopenaeus vannamei shrimps , since this raw material comes from renewable resources and it is economically viable to produce high - value added compounds from it . shrimp residues of the species litopenaeus vannamei were washed in running water and in a 2.5% hypochlorite solution . thereafter , they were dried at room temperature and then crushed and passed through a 16-mesh sieve . the bacteria stenotrophomonas maltophilia ( ucp-1600 ) , s. maltophilia ( ucp-1601 ) , bacillus subtilis ( ucp-1002 ) , and enterobacter cloacae ( ucp-1603 ) were kindly supplied from the culture collection ucp ( universidade católica de pernambuco ) , recife , pe , brazil .
these microorganisms were used in the evaluation of the minimum inhibitory concentration ( mic ) and the minimum bactericidal concentration ( mbc ) . the microorganisms were maintained at 25c in nutrient agar medium ( peptone 0.5% , beef extract 0.3% , nacl 0.5% , agar 1.5% , and distilled water , with the ph adjusted to 7.4 ) . the extraction of chitin and chitosan was performed according to the methods described by zamani et al . ( method 1 ) and arantes ( method 2 ) . in order to eliminate the proteins of the residue , naoh solutions of 0.5 m ( 30 : 1 v / m , 90c , 2 h ) and 0.3 m ( 10 : 1 v / m , 80c , 1 h , under agitation ) , respectively , were used . then , the alkali - insoluble fraction ( ifa ) was separated by centrifugation at 4000g for 15 minutes and/or by vacuum filtration . subsequently , to demineralize the precipitate obtained , 10% acetic acid ( 100 : 1 v / m , 60c , 6 h ) and 0.55 m hydrochloric acid ( 10 : 1 v / m , room temperature , 1.5 hours ) were used . to obtain purified chitosan , treatments with 1% sulfuric acid ( 121c/20 min ) and 50% naoh ( 100c , 10 h ) were applied . for the ft - ir analysis , two milligrams of the chitin and chitosan samples were dried overnight at 60c under reduced pressure and then homogenized with 100 mg of kbr . the discs prepared with the kbr were dried for 24 h at 110c under reduced pressure . the chitin and chitosan samples from shrimp shell ( litopenaeus vannamei ) waste were analyzed at 4000 - 625 cm⁻¹ using a fourier transform infrared spectrometer ( ft - ir , bruker ) . a kbr disc was used as reference . to determine the maximum absorption intensity of the bands , the baseline was used . the degrees of acetylation and deacetylation of chitosan were determined by infrared ( ir ) spectroscopy [ 22 ] , applying the band ratio a1655/a3450 , which was calculated as per ( 1 ) \( \mathrm{ad}\% = \frac{a_{1655}}{a_{3450}} \times \frac{100}{1.33} \) . to evaluate the minimum inhibitory concentration ( mic ) , the serial dilution technique was used with the tested microorganisms , in accordance with qi et al . an initial chitosan solution was prepared at 0.5% in 1% acetic acid , ph = 5.0 . then , serial dilutions were performed from 1 : 1 to 1 : 512 , giving decreasing concentrations ranging from 0.00005% to 0.25% . for each microorganism , 10 µl of a standard bacterial suspension was transferred to each tube in the series and incubated at 37c for 24 hours . for the evaluation of the minimum bactericidal concentration ( mbc ) , a qualitative technique was used according to the method of qi et al . the series of chitosan solutions used to determine the mic was also used to evaluate the mbc . from the reading of the mic , 10 µl from the tubes that showed no visible turbidity was plated on blood agar , ph 7.0 , and incubated for 24 h at 37c , and observations were made on whether or not colonies of microorganisms grew . according to elemental studies and analyses of different crustaceans ( shrimp , lobster , and squid ) , there is great variability in composition , with chitin contents of approximately 1.8% for squid , 22% for pink shrimp ( under study ) , and 36% for lobster . hence , there is a need to develop efficient demineralization and deproteinization processes to remove the mineral content ( 20 - 30% ) and the protein content of approximately 40% in order to obtain chitin that is free of inorganic and protein content .
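equation ( 1 ) and the complementary degree of deacetylation can be computed directly from the baseline - corrected band intensities . the sketch below is illustrative only ; the absorbance readings are invented , and the relation dd% = 100 - ad% is an assumption , not a value taken from the text .

```python
def acetylation_degree(a1655, a3450):
    """Degree of acetylation (%) from the FT-IR band ratio of equation (1)."""
    return (a1655 / a3450) * 100.0 / 1.33

def deacetylation_degree(a1655, a3450):
    """Degree of deacetylation (%), assumed here to be the complement of AD%."""
    return 100.0 - acetylation_degree(a1655, a3450)

# hypothetical absorbance readings taken against the spectrum baseline
print(round(acetylation_degree(a1655=0.12, a3450=0.50), 1))     # ~18.0 %
print(round(deacetylation_degree(a1655=0.12, a3450=0.50), 1))   # ~82.0 %
```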
this study showed that different concentrations of naoh and demineralization with hydrochloric acid and acetic acid influenced the yield of the extraction process used to obtain chitin and chitosan . similarly , it was proved that the methods used also had an effect on the degree of deacetylation ( table 1 ) . to confirm that the biopolymer was chitosan , the product obtained with the commercial chitosan sigma ( sigma aldrich corp . , st . louis , mo , usa ) was characterized and compared by infrared spectrometry . the residual mass from shrimp exoskeleton after demineralization and deproteinization processes showed well preserved chitin structure as described by stamford et al . this was higher than the values obtained by tenuta filho and zucas , with 14% of chitin pink shrimp waste ( penaeus brasiliensis ) and by beaney et al . with 10% yield of biopolymer from nephrops norvegicus . . found that the chitin content of bat guano species rhinolophus hipposideros collected from karacamal cave was 28% . it was noted the chitosan productivity corresponding to 79% from isolate chitin is superior to our results from l. vannamei using two different methodologies . the results showed the isolation of alpha chitin and were confirmed by infrared spectroscopy , thermogravimetric analysis ( tga ) , x - ray diffraction ( xrd ) , and scanning electron microscopy ( sem ) techniques . more recently , the production of a new morphology of chitin from the wings of periplaneta americana has been studied by kaya and baran . they showed the surface of the chitin has oval nanopores ( 230510 nm ) without nanofibers . the chitin surface had a pore in the center and six or seven other pores distributed around these , corresponding to structures similar to cell walls . alternatively , studies with chitin content of the structure of the exoskeleton of seven species from grasshopper of the four genera were carried out . the contents of chitin varied between 5.3% and 8.9% and had a low molecular weight ( between 5.2 and 6.8 kda ) . a large amount of waste is formed from invasive and harmful species that have been killed by the use of insecticides , and the authors suggest that these be collected and evaluated as an alternative chitin source . some parameters in the deacetylation reaction are cited as fundamental factors on the end characteristics of chitosan . tsaih and chen studied the influence of temperature and processing time on polymer chitosan characteristics and found that both have a significant effect on the deacetylation degree and molecular weight . the results obtained also showed a higher yield than that found for the chitin extracted from shrimp penaeus brasiliensis , which was 5.3% and 2.5% of chitosan . santos et al . showed a lower percentage with 5.9 and 5.06% of chitin and chitosan , respectively . thus , the maximum chitosan obtained from chitin deacetylation ( 57.5% ) was similar to the reported value for the extraction from the polymer using the shrimp macrobrachium rosenbergii ( ~65% ) . however , the results obtained by battisti and campana - filho showed that 80% of chitin was transformed into chitosan . 
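the yields discussed above combine two sequential steps , isolation of chitin from the dried waste and conversion of that chitin into chitosan . a minimal sketch of how these can be combined into an overall yield per mass of dried waste is given below ; it assumes that the chitosan percentages reported for the two methods are expressed relative to the isolated chitin and that the percentages behave as simple mass fractions , which should be checked against table 1 .

```python
def overall_chitosan_yield(chitin_pct_of_waste, chitosan_pct_of_chitin, waste_g=100.0):
    """Grams of chitosan expected per `waste_g` grams of dried shrimp waste,
    treating the two reported yields as sequential mass fractions."""
    chitin_g = waste_g * chitin_pct_of_waste / 100.0
    return chitin_g * chitosan_pct_of_chitin / 100.0

# yields reported in this study (method 1: 33% / 49%; method 2: 36% / 63%)
print(round(overall_chitosan_yield(33, 49), 1))   # ~16.2 g per 100 g of waste
print(round(overall_chitosan_yield(36, 63), 1))   # ~22.7 g per 100 g of waste
```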
the spectrophotometric analysis of commercial chitosan ( figure 1(a ) ) and of the chitosan obtained by the methods used ( figures 1(b ) and 1(c ) ) enabled the bands to be characterized as follows : peak 1 ( ~1650 cm⁻¹ ) corresponded to the acetylated residues ( nhcoch3 ) of chitosan ; peak 2 ( ~1590 cm⁻¹ ) identified the nh2 groups present in the deacetylated residues ; and peak 3 ( 3440 cm⁻¹ ) corresponded to the vibration of the oh group . analysis by ft - ir estimated the amount of free amine groups present in the chitosan molecules obtained from the two methodologies , namely , 76% and 81.7% , respectively ( table 2 ) . in general , a higher deacetylation degree of chitosan is obtained by processing the native polymer with alkali and increasing the time and temperature . these values are consistent with commercial chitosan obtained from crustaceans , since this reaches between 75 and 90% deacetylation in industrial processing . in a study proposing a simple and efficient method for the deacetylation of chitosan using 1-butyl-3-methylimidazolium acetate as the reaction catalyst , dd% = 86 was obtained , a value similar to that found in our study under the best condition for producing the biopolymer ( dd = 81.7% ) . santos et al . determined the degree of deacetylation of chitosan obtained from the saburica shrimp ( macrobrachium jelskii ) , which was approximately dd 76% , using linear potentiometric titration . the fourier transform infrared spectroscopy results obtained in this study are in agreement with the data in the literature , which may vary from 50 to 92.3% [ 36 , 37 ] . hennig studied the production of chitosan from penaeus brasiliensis and obtained a degree of deacetylation ( dd% ) of 87% . this value is similar to that reported in the literature by weska et al . and to those obtained under the best condition in the present study . furthermore , it was shown that the chitosan produced has characteristics comparable to commercial chitosan , the degree of deacetylation ranging between 70 and 95% [ 36 , 39 ] . recently , kaya et al . undertook studies on chitin obtained from insecta ( melolontha melolontha ) and crustacea ( oniscus asellus ) and compared their physical and chemical properties . the results showed chitin contents on a dry weight basis for m. melolontha and o. asellus corresponding to 13 - 14% and 6 - 7% , respectively . it was observed that the chitin nanofibers of o. asellus adhered to each other , whereas the nanofibers of m. melolontha were nonadherent and were considered the more attractive chitin source . studies carried out with fomitopsis pinicola , a medicinal fungus used in asia , found 30.11% chitin and a chitosan yield of 71.75% from the dry weight . the chitin showed an acetylation degree of 72.5% , the deacetylation degree of the chitosan was 73.1% , and the maximum degradation temperature of the chitin was 341c . these results clearly reveal a significant deacetylation degree for the chitosan from waste shrimp shell of litopenaeus vannamei obtained using the two methodologies , in comparison with the deacetylation degree of the chitosan determined for f. pinicola . fourier transform infrared spectroscopy ( ft - ir ) , elemental analysis ( ea ) , thermogravimetric analysis ( tga ) , x - ray diffractometry ( xrd ) , and scanning electron microscopy ( sem ) were used to investigate the structure of chitin isolated from both sexes of four grasshopper species , and it was observed that the amount of chitin was greater in males than in females .
the results showed that the properties of chitin from different parts of the body ( head , thorax , abdomen , legs , and wings ) of the honey bee are affected by the extraction method . physical and chemical properties are parameters relevant to taxonomy , and the chitin extracted from different parts of the body differs . the influence of the chitosan extracted by the proposed methods on inhibiting the growth of stenotrophomonas maltophilia ( ucp-1600/ucp-1601 ) , enterobacter cloacae ( ucp-1603 ) , and bacillus subtilis ( ucp-1602 ) is shown in table 2 . the exact mechanism of the antimicrobial activity of chitosan is not fully understood ; however , the most accepted hypothesis relates to a change in the permeability of the cell due to interactions between the biopolymer chitosan , which is positively charged at ph below 6.5 , and the negatively charged cell membrane of the microorganisms . in the present study , the results demonstrated that the mic and mbc effects were more pronounced in gram - negative bacteria when compared with gram - positive ones , but chitosan was effective in both cases . these results are in agreement with reports in the literature that have documented the antimicrobial activity of chitosan against a large number of microorganisms , with mic values ranging between 0.1% and 1% [ 45 , 46 ] . thus , the efficiency is also related to the physical - chemical characteristics of the chitosan , as well as to the species or strains of bacteria tested in the same study [ 47 , 48 ] . wang demonstrated that , for a bactericidal action of chitosan on e. coli , solutions with concentrations between 0.5 and 1% had to be used for 48 hours , and that to obtain the same effect at 24 h , the higher concentration of 1% chitosan had to be prepared . in addition , tsai and su demonstrated that solutions of chitin and chitosan of high molecular weight and a high degree of deacetylation had a lethal effect on e. coli and shigella dysenteriae when concentrations between 50 and 500 ppm were used . according to chung et al . , the hydrophilicity of the cell wall and the negative charge of the cell surface are greater in gram - negative bacteria than in gram - positive bacteria . in addition , the distribution of negative charges on their cell surfaces is very different when compared with gram - positive bacteria , thus supporting the results found in this study . in a study conducted to evaluate the bactericidal activity of a glucose - chitosan complex against e. coli , pseudomonas , staphylococcus aureus , and bacillus cereus , it was determined that the minimum inhibitory concentration of chitosan was around 0.05% , results similar to those found in this study . moreover , when the chitosan extracted from rhizopus arrhizus and cunninghamella elegans was used to evaluate the mic and mbc against listeria monocytogenes , staphylococcus aureus , pseudomonas aeruginosa , salmonella enterica , escherichia coli , and yersinia enterocolitica , it was observed that the mic values ranged from 200 µg / ml for e. coli to 500 µg / ml for l. monocytogenes . for the mbc , the results were between 400 and 1000 µg / ml , respectively . thus , in this study , the chitosan obtained by the proposed methods proved to be effective as an antimicrobial agent against the microorganisms tested . these results recommend method 2 for the chemical extraction , as it offers a clean , cheap , and convenient route for obtaining chitosan from the chitin extracted from shrimp wastes . based on the results of this work , the conclusion was reached that shrimp wastes are an excellent source of chitin .
the yields and the acetylation degree of the chitosan decreased with increasing concentration of the naoh solution , temperature , and length of treatment . different chitosans were tested and markedly inhibited the growth of most of the bacteria tested ; however , the inhibitory effects differed depending on the type of chitosan and the bacteria tested , there being greater antimicrobial activity against gram - positive bacteria than against gram - negative bacteria .
this research aims to study the production of chitosan from shrimp shell ( litopenaeus vannamei ) of waste origin using two chemical methodologies involving demineralization , deproteinization , and the degree of deacetylation . the evaluation of the quality of chitosan from waste shrimp shells includes parameters for the yield , physical chemistry characteristics by infrared spectroscopy ( ft - ir ) , the degree of deacetylation , and antibacterial activity . the results showed ( by method 1 ) extraction yields for chitin of 33% and for chitosan of 49% and a 76% degree of deacetylation . chitosan obtained by method 2 was more efficient : chitin ( 36% ) and chitosan ( 63% ) , with a high degree of deacetylation ( 81.7% ) . the antibacterial activity was tested against gram - negative bacteria ( stenotrophomonas maltophilia and enterobacter cloacae ) and gram - positive bacillus subtilis and the minimum inhibitory concentrations ( mic ) and the minimum bactericidal concentration ( mbc ) were determined . method 2 showed that extracted chitosan has good antimicrobial potential against gram - positive and gram - negative bacteria and that the process is viable .
1. Introduction 2. Materials and Methods 3. Results and Discussion 4. Conclusion
PMC3066716
transcutaneous electrical nerve stimulation ( tens ) is widely used in western and developed countries to relieve a wide range of painful conditions , including non - malignant acute and chronic pain and pain resulting from cancer and its treatment [ 13 ] . tens can be self administered by patients following simple training and because there is no potential for toxicity , patients can titrate the dosage on an as - needed basis . during tens pulsed electrical currents are generated by a small battery operated tens device that can be kept in the pocket or attached to the user 's belt . currents from the tens device are delivered through the skin by two self - adhering electrode pads ( figure 1 ) . a standard tens device . maximal pain relief is achieved when tens generates a strong non - painful electrical sensation beneath the electrodes . pain relief is usually rapid in onset and stops shortly after tens is turned off . for this reason patients are encouraged to deliver tens for as long as needed , which may be for hours at a time and throughout the day . , tens devices can be purchased without prescription , although this is not the case in some european countries . tens devices , including electrode leads , pads and battery , retail for approximately 30gbp although bulk buying can markedly reduce cost . interestingly , tens does not appear to be widely available for patient use in developing countries . in this review the basic science behind tens will be discussed , the evidence of its effectiveness in specific clinical conditions will be provided and a case for its use in pain management in developing countries will be made . the ancient egyptians are usually acknowledged as the first people who used electrogenic fish to apply electricity for pain relief . yet , the first documented use of this kind of pain relief is of a roman physician in 46 ad . in 1786 , luigi galvani , an italian doctor , demonstrated that the leg of a frog contained electricity . this observation and other advancements in generating electricity lead to a resurgence in the use of electricity to treat different illnesses and relieve pain . however , increased use of pharmacological agents to manage pain resulted in the decline of the electrotherapy at the end of the 19th century . in 1965 , ronald melzack from mcgill university in montreal canada and patrick wall from university college london uk , published their seminal paper which proposed a gating mechanism in the central nervous system to regulate the flow of nerve signals from peripheral nerves en - route to the brain . according to this gate - control theory of pain , activity in large diameter low threshold mechanoreceptive ( touch - related ) nerve fibers could inhibit the transmission of action potentials from small diameter higher threshold nociceptive ( pain - related ) fibers through pre and post synaptic inhibition in the dorsal horn of spinal cord . because nociceptive fibers ( a - delta and c - fibers ) have a higher threshold of activation than mechanoreceptive fibers ( a - beta fibers ) melzack and wall proposed that it would be possible to selectively stimulate mechanoreceptive fibers by titrating the amplitude of electrical currents delivered across the surface of the skin ( ie tens ) . this would prevent signals from nociceptive fibers from reaching higher centres of the brain , thus reducing pain ( figure 2 ) . diagrammatic representation of the principle of conventional tens . 
by selectively activating a - beta fibers , tens shuts the pain gate on a - delta and c fibers , preventing pain - related signals from reaching the brain . in addition to interrupting nociceptive signals at the spinal cord dorsal horn , we now know that tens analgesia also involves a descending inhibitory mechanism that can be partially prevented by spinalization . activity in nerve fibers descending from the brain can also block onward transmission of peripheral nerve signals within the spinal cord . humans utilise this mechanism whenever they mentally distract themselves so as not to feel pain despite the presence of tissue damage ( figure 2 ) . evidence gathered from animal studies suggests that low - frequency tens effects may be due to the release of endogenous opioids . this explains why analgesia may persist for hours after electrical stimulation has stopped , because endorphins have long - lasting effects in the central nervous system . the released opioids may generate their analgesic action at peripheral , spinal and supraspinal sites [ 7 , 8 ] . however , other neurochemicals have been implicated in tens analgesia , including gaba , acetylcholine , 5-ht , noradrenaline and adenosine . in health care the term tens refers to the delivery of currents using a standard tens device ( table 1 ) . however , there are a variety of devices that deliver electrical currents through the skin but have technical output characteristics that differ from a standard tens device . we have previously described these as tens - like devices ; they include transcutaneous spinal electroanalgesia , interferential therapy , microcurrent stimulation and pain gone pens ( see [ 3 , 14 , 15 ] for a review of these devices ) . in conventional tens , low - intensity pulsed currents are administered at high frequencies ( between 10 and 200 pulses per second , pps ) at the site of pain . the user experiences a strong , non - painful tens sensation often described as tingling or pleasant electrical paraesthesiae . physiologically , conventional tens activates large diameter non - noxious afferents , which has been shown to close the pain gate at spinal segments related to the pain . another technique , which is used less often , is acupuncture - like tens ( al - tens ) , using high - intensity , low - frequency currents ( less than 10 pps , usually 2 pps ) administered over muscles , acupuncture points and trigger points . the purpose of al - tens is to activate small diameter afferents , which has been shown to close the pain gate via extra - segmental mechanisms . tens can also be used as a counter - irritant , termed intense tens , using high - intensity and high - frequency currents ( table 2 , figure 3 ) . the user can control the amplitude ( intensity ) , duration ( width ) , frequency ( rate ) and pattern ( mode ) of the pulsed electrical currents . in western clinical practice tens has been shown to have a role in pain management . there are many systematic reviews on tens , although evidence is often inconclusive because of shortcomings in rct methodology . early systematic reviews suggested that tens was of limited benefit as a stand - alone therapy for acute pain . one review judged there to be no benefit of tens for postoperative pain because 15 of 17 rcts found no differences in pain relief between active and placebo tens . a later review re - assessed the evidence and concluded that tens reduced post - operative analgesic consumption if tens was applied using an adequate technique .
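the defining characteristics of the three tens techniques described above ( and summarised in table 2 ) can be recorded as a small configuration structure . the sketch below only restates the text ; it is not a treatment protocol , and any parameter not stated in the text is marked as such .

```python
from dataclasses import dataclass

@dataclass
class TensTechnique:
    name: str
    frequency: str            # pulse rate, pulses per second (pps)
    intensity: str
    electrode_placement: str
    proposed_mechanism: str

TECHNIQUES = [
    TensTechnique("conventional tens",
                  "high (10-200 pps)",
                  "low (strong but non-painful)",
                  "at the site of pain",
                  "segmental gating via large-diameter non-noxious afferents"),
    TensTechnique("acupuncture-like tens (al-tens)",
                  "low (<10 pps, usually 2 pps)",
                  "high",
                  "over muscles, acupuncture points and trigger points",
                  "extra-segmental gating via small-diameter afferents"),
    TensTechnique("intense tens",
                  "high",
                  "high",
                  "not stated in the text (see table 2)",
                  "counter-irritation"),
]

for t in TECHNIQUES:
    print(f"{t.name}: {t.frequency}, {t.intensity}, {t.electrode_placement}")
```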
systematic reviews have also concluded that there was no evidence of tens producing beneficial analgesic effects during childbirth [ 19 , 20 ] and insufficient evidence to determine the effectiveness of tens in reducing pain associated with primary dysmenorrhoea . rcts suggest that tens is effective for acute orofacial pain , painful dental procedures , fractured ribs and acute lower back pain . previously , systematic reviews suggested that tens may be of benefit for chronic pain , but definitive conclusions were hindered by shortcomings in rct methodology [ 23 , 24 ] . reviews on rheumatoid arthritis of the hand , whiplash and mechanical neck disorders , chronic low back pain , poststroke shoulder pain and chronic recurrent headache were inconclusive . in contrast , systematic reviews have demonstrated tens efficacy for knee osteoarthritis and chronic musculoskeletal pain . rcts have also demonstrated effects for a range of other chronic pain conditions , including localized muscle pain , post - herpetic neuralgia , trigeminal neuralgia , phantom limb and stump pain and diabetic neuropathies . a recent cochrane review by robb et al . concluded that there is insufficient available evidence to determine the effectiveness of tens in treating cancer - related pain [ 32 , 33 ] . the international association for the study of pain ( iasp ) speculates that the prevalence of most types of pain may be much higher in developing countries than in developed countries , although epidemiological evidence is lacking . it is known that there is a higher incidence of pain conditions associated with epidemics such as hiv / aids in the developing world . an iasp developing countries task force , which included africa and the middle east , reported that pain management in the general population was inadequate , although there were considerable variations between regions . limited resources , ignorance among health care professionals and a lack of pain specialists were contributing factors . this has impacted significantly on pharmacological therapy , with many drugs commonly used in the developed world being unavailable to the general public because of weak economies and the limited purchasing power of citizens . in addition , drugs , even when available , may be fake , adulterated , past their expiry date or perished due to inadequate storage . the cost of a tens device is approximately 30 gbp , although devices are available for less than 15 gbp if bought in bulk . once purchased , a tens device will not perish or deteriorate , and devices in the developed world are used for many decades without the need for further servicing or repair . often , clinics purchase tens devices in bulk and loan them to patients for short - and long - term use , on the proviso that the patient returns the device to the clinic when it is no longer needed . manufacturers recommend that individual pads have a longevity of one month of daily use , although patients often use them for far longer , especially if they take care to store them carefully . electrode costs can be reduced by using carbon rubber electrodes , which are smeared with electrode gel and attached to the skin with micropore tape , rather than self - adhering electrodes . in general , tens has no known drug interactions and so can be used in combination with pharmacotherapy to reduce medication use , medication - related side effects and medication costs .
tens has very few side effects with no incidents of serious or adverse events reported in the literature . tens has a rapid onset of action , unlike medication , and there is no potential for toxicity or overdose . clearly , there is a case to use tens for pain management in the developing world . however , research is needed to determine the feasibility of tens use in developing countries . perhaps health promotion authorities should alert the public and heath care practitioners to the role of tens as an important aid in the fight against pain .
transcutaneous electrical nerve stimulation ( tens ) refers to the delivery of electrical currents through the skin to activate peripheral nerves . the technique is widely used in developed countries to relieve a wide range of acute and chronic pain conditions , including pain resulting from cancer and its treatment . there are many systematic reviews on tens , although evidence is often inconclusive because of shortcomings in randomised controlled trial methodology . in this overview the basic science behind tens will be discussed , the evidence of its effectiveness in specific clinical conditions will be analysed , and a case for its use in pain management in developing countries will be made .
Introduction Physiological principle of TENS induced pain relief TENS and TENS-like devices TENS Techniques Clinical effectiveness of TENS Pain Management in developing countries: Could TENS help?
PMC4385722
current standards in root canal treatment are based on cleaning and shaping the root canal prior to filling.1 an important innovation that has had a major impact on these procedures has been the introduction of rotary nickel - titanium ( niti ) instruments.2,3 a considerable number of rotary niti instruments with particular design characteristics ( cross - section , cutting angle , helical angle , radial grooves / edge , flutes , etc . ) have been introduced in the market over the last years4 - 7 and previous studies have listed the main advantages of their use in the preparation of curved root canals , such as maintaining working length ( wl ) , allowing root canal preparation to be more centered and better tapered , and creating fewer procedural errors when compared to stainless steel instruments , in addition to being faster.2,3,5,6 several methods have been proposed to evaluate the performance and quality of root canal preparation with niti rotary instruments , such as histological , radiographic , sectional anatomical , scanning electron microscopy and computed tomography methods.3,7 - 12 however , the destruction of the specimens may impede the simultaneous investigation of different parameters of root canal preparation and place limitations on these methods.8,13,14 cone beam computed tomography ( cbct ) has been used for several clinical and investigational purposes in endodontics , such as the study of root canal configuration , evaluation of root canal preparation and filling , retreatment , three - dimensional ( 3d ) simulation of internal and external tooth structures , and diagnosis and treatment of bone lesions.15 - 21 its ability to reduce or eliminate the superimposition of surrounding structures makes cbct superior to conventional periapical films.15 compared with medical computed tomography , cbct has some advantages : lower radiation dose , higher scanning resolution and more accurate volume measurement in different directions , because its voxels are isotropic , which makes them different from the anisotropic voxels of conventional medical ct.16 possible procedural errors that may affect the prognosis of the root canal treatment should be considered and evaluated before choosing a new endodontic instrument to be used.22 thus , the purpose of the present study was to evaluate procedural errors that occurred during root canal preparation using rotary niti instruments , employing the cbct imaging method . this study was approved by the research ethics committee of the federal university of goiás , brazil ( protocol number 042 - 2011 ) , and written informed consent was obtained from all patients . a total of 100 extracted human mandibular molars were obtained from the dental urgency service of the school of dentistry of the federal university of goiás , brazil . the teeth were stored in 0.2% thymol solution and then immersed in 5% sodium hypochlorite ( naocl ) ( fitofarma ) . preoperative radiographs of each tooth were taken to confirm the absence of calcified root canals , previous root canal treatment , prosthetic pins and internal and external resorption , and the presence of a fully formed root apex . radiographic images were acquired using a spectro x70 electronic x - ray unit ( dabi atlante , ribeirão preto , sp , brazil ) , with a 0.8 mm × 0.8 mm tube focal spot , kodak insight film - e ( eastman kodak co , rochester , ny , usa ) and the paralleling technique . a radiographic platform was used to standardize all radiographs . all films were processed in an automatic processor , and the images were evaluated in a dark room using a light box under a magnifying glass .
only three - canalled teeth were used in the study ( mandibular molars with distal , mesiobuccal and mesiolingual root canals ) . all teeth were shorter than 22 mm , and the mesial roots had a moderate curvature ( r > 4 mm and ≤ 8 mm ) . the root curvature radius ( r ) was determined according to estrela et al.23 . after taking periapical radiographs , standard access cavities were made by an endodontist using round diamond burs ( # 1013 , # 1014 ; kg sorensen , barueri , sp , brazil ) and an endo z bur ( dentsply - maillefer , ballaigues , switzerland ) , with a high - speed handpiece and air - water spray cooling . the wl was determined using # 10 and # 15 k - flexofiles ( dentsply - maillefer , ballaigues , switzerland ) , which were introduced into the root canals until they became visible at the apical foramen . the root canals were randomly divided into five experimental groups of 20 teeth each and prepared using the following instruments : g1 - biorace ( fkg dentaire , la chaux - de - fonds , switzerland ) ; g2 - k3 ( sybronendo , orange , ca , usa ) ; g3 - protaper universal ( dentsply - maillefer , ballaigues , switzerland ) ; g4 - mtwo ( sweden - martina , padova , italy ) ; g5 - hero shaper ( micro mega , besancon , france ) . the root canals were shaped at a rotational speed of 300 rpm ( x - smart , dentsply - maillefer ) and 2.9 ncm torque . in g1 , br0 ( # 25/0.08 ) , br1 ( # 15/0.05 ) , br2 ( # 25/0.04 ) , br3 ( # 25/0.06 ) , br4 ( # 35/0.04 ) and br5 ( # 40/0.04 ) were used . in g2 , the sequence used was # 25/0.06 and # 25/0.04 ( to prepare the cervical and middle thirds ) , followed by # 25/0.02 , # 30/0.02 , # 35/0.02 and # 40/0.02 ( to prepare the apical third ) . in g3 , sx was used for the cervical root preparation , and s1 , s2 , f1 , f2 , and f3 were used until the wl . in g4 , the sequence used until the wl was # 10/0.04 , # 15/0.05 , # 20/0.06 , # 25/0.06 , # 30/0.05 , # 35/0.04 and # 40/0.04 , and in g5 , the sequence used was # 25/0.06 and # 25/0.04 ( to prepare the cervical and middle thirds ) , followed by # 25/0.02 , # 30/0.02 , # 35/0.02 and # 40/0.02 ( to prepare the apical third ) . two endodontists with more than 5 years of experience , registered with the brazilian dentistry association ( goiânia , go , brazil ) , prepared the root canals . the operators had an 8 h theoretical course on rotary instrumentation associated with clinical applications . during the preparations , the root canals were irrigated at each change of instrument with 3 ml of 1% naocl solution using a syringe with a 30-gauge needle ( injecta , diadema , sp , brazil ) . the root canals were then dried and filled with 17% ethylenediaminetetraacetic acid ( edta , ph 7.2 ) ( biodinâmica , ibiporã , pr , brazil ) for 3 min to remove the smear layer , and another 3 ml of 1% naocl solution was used for final irrigation . the specimens were then scanned with a cone beam computed tomography scanner ( prexion , san mateo , ca , usa ) , slice thickness 0.100 mm ( dimensions 1.170 mm × 1.570 mm × 1.925 mm , fov : 56.00 mm , voxel 0.100 mm , 33.5 s , 1.024 views ) . the exposure time was 33.5 s. the images were examined with the scanner 's proprietary software prexion 3d viewer ( terarecon inc . , foster city , ca , usa ) on a pc workstation running windows xp professional sp-2 ( microsoft corp . , redmond , wa , usa ) , with an intel core 2 duo-6300 1.86 ghz processor ( intel corp . , santa clara , ca , usa ) , an nvidia geforce 6200 turbocache video card ( nvidia corporation , santa clara , ca , usa ) , and an eizo flexscan s2000 monitor with a resolution of 1600 × 1200 pixels ( eizo nanao corp . ) .
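the moderate - curvature inclusion criterion above depends on estimating the curvature radius r on a calibrated radiograph . a generic way to do this is to fit a circle through three digitised points along the canal ; the sketch below illustrates that geometry and the 4 - 8 mm window used here , but it is not the exact procedure of estrela et al . , and the point coordinates are invented .

```python
import math

def curvature_radius(p1, p2, p3):
    """Radius (same units as the input) of the circle through three
    non-collinear points digitised along the canal on a calibrated image."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # twice the signed triangle area; collinear points are not handled here
    area = abs((x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)) / 2.0
    return a * b * c / (4.0 * area)

def is_moderate_curvature(r_mm):
    """Moderate curvature as used for inclusion here: r > 4 mm and r <= 8 mm."""
    return 4.0 < r_mm <= 8.0

# hypothetical points (mm) along a curved mesial canal
r = curvature_radius((0.0, 0.0), (2.0, 3.5), (5.5, 5.0))
print(round(r, 1), is_moderate_curvature(r))   # ~6.2 True
```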
a total of 2 examiners ( a radiologist and an endodontist ) were calibrated using 20% of the specimens , and all images were evaluated to detect the presence or absence of fractured instruments , root perforations ( coronal , middle or apical thirds ) and deviation from the original trajectory of the root canal ( apical transportation ) . when a consensus was not reached by the two examiners that interpreted the procedural errors using cbct , a third observer ( an endodontist ) made the final decision . figure 1 : procedural errors detected using cone beam computed tomography images ; canal transportation ( a ) , instrument fracture ( b ) and perforation ( c ) . data were analyzed using the ibm spss for windows 21.0 ( ibm corporation , somers , ny , usa ) , including frequency distribution and cross - tabulation . comparative statistical analysis was performed using chi - square test , and the level of statistical significance was set at 5% . 
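the calibration step above involved two examiners scoring 20% of the specimens before the full evaluation . the paper does not report an agreement statistic for that step , so the sketch below is only an assumed illustration of how inter - examiner agreement on presence / absence calls could be quantified with cohen 's kappa ; the 20 calibration scores are invented .

import numpy as np

# hypothetical presence (1) / absence (0) calls by the two examiners on 20 calibration specimens
examiner_a = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0])
examiner_b = np.array([0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0])

p_observed = np.mean(examiner_a == examiner_b)
# expected chance agreement from the marginal rates of each category
p_chance = (examiner_a.mean() * examiner_b.mean()
            + (1 - examiner_a.mean()) * (1 - examiner_b.mean()))
kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed agreement = {p_observed:.2f}, cohen's kappa = {kappa:.2f}")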
in a total of 300 root canals prepared , 43 ( 14.33% ) procedural errors were detected ( table 1 ) . the frequency of procedural errors detected using cbct according to the niti system is described in table 1 . the root canals prepared with biorace had significantly fewer procedural errors compared with those instrumented with the other instruments ( p < 0.05 ) . most of the procedural errors were observed in the mesiobuccal root canal ( n = 21 ; 48.84% ) , followed by the distal ( n = 14 ; 32.56% ) and mesiolingual ( n = 8 ; 18.60% ) root canals . 
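the group comparison reported above used a chi - square test at the 5% level on the error counts per niti system . the sketch below reruns that kind of test with scipy on a 2 x 5 table of error / no - error canal counts ( 60 canals per group ) ; the biorace ( 2 ) , protaper ( 14 ) and mtwo ( 15 ) counts come from the text , while the split of the remaining 12 errors between k3 and hero shaper is an assumption , since table 1 is not reproduced here .

from scipy.stats import chi2_contingency

groups = ["biorace", "k3", "protaper", "mtwo", "hero shaper"]
errors = [2, 5, 14, 15, 7]            # k3 and hero shaper counts are assumed
no_errors = [60 - e for e in errors]  # 20 teeth x 3 canals per group

chi2, p, dof, expected = chi2_contingency([errors, no_errors])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
print("difference between systems is significant at 5%" if p < 0.05 else "not significant at 5%")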
during root canal preparation , several challenges may be encountered , such as understanding root canal curvature , determination of the anatomical diameter and development of apical sanitization.24 thus , the selection of an appropriate instrument for root canal preparation is of importance for the outcome of root canal treatment.13 in the continuous search for an ideal instrument , instruments with different designs have been developed from niti alloy . unfortunately , there is no perfect niti rotary system,25 and a number of accidents and complications can be observed during root canal preparation.6,10,21,26 the present study intended to evaluate operative procedural errors during root canal preparation with five commercially available rotary niti instruments , using cbct scans . the assessment of procedural errors during root canal preparation by cbct represents a significant advance for clinical endodontic studies and contributes to planning , diagnosis , the therapeutic process and the prognosis of root canal treatment . different imaging resources have been routinely used before , during and after endodontic procedures.7,11 conventional radiographic images provide a two - dimensional ( 2d ) rendition of a 3d structure , which may result in interpretation errors . periapical lesions of endodontic origin may be present but not visible on conventional 2d imaging methods.16 - 18 diagnostic accuracy is critical for endodontic treatment success.21 the formation of artifacts , especially near bodies of high density such as metal pieces ( intraradicular cores , crowns and metal restorations ) and filling materials , may interfere with cbct images , leading to misdiagnosis . thereby , precautions must be taken to deal with the effect of solid materials in the interior space of root canals on cbct images.20 in the present study , the cbct images were obtained after root canal preparation and no root filling procedure was accomplished . in addition , the images were analyzed by a specialist in endodontics and a specialist in dental radiology , with expertise in 3d tracking , who used the map - reading strategy on cbct images to reduce the problems related to difficult evaluation conditions.20 the rotary niti instruments used in this study were biorace , protaper universal , mtwo , k3 and hero shaper , and the root canals were enlarged according to the manufacturers ' recommendations . comparison of instruments with different tapers was intentional , since one of the main concerns during the preparation of curved root canals is the determination of a transversal enlargement that does not cause perforations or excessive wear . the samples were carefully selected and comprised teeth with mesial roots with moderate curvatures ( r > 4 mm and r ≤ 8 mm ) . a total of 300 root canals were prepared in this study and 43 procedural errors were identified ( 14.33% ) . these results confirm the low frequency of procedural errors during root canal preparation using rotary niti instruments.12,22 the frequency of procedural errors according to the instrument used was statistically significant ( p < 0.05 ) . the root canals prepared with biorace had significantly fewer procedural errors ( n = 2 ; 0.67% ) ( table 1 ) . this result is similar to that observed by alves et al.,22 who found a low frequency of operative errors made by undergraduate students with the use of biorace . 
a higher number of procedural errors was observed in the groups instrumented with the mtwo ( n = 15 ; 5.00% ) and protaper ( n = 14 ; 4.67% ) rotary niti instruments ; root perforations were the main operative procedural error in both groups . bonaccorso et al.27 compared the shaping ability of protaper , mtwo , biorace and biorace + s - apex instruments in simulated canals and observed that protaper instruments caused more pronounced canal transportation in the apical curvature than the other instruments and that the use of biorace + s - apex resulted in significantly fewer canal aberrations . for those authors , the occurrence of operative accidents ( ledges , zips / elbows and instrument failures ) in teeth prepared with protaper and mtwo might be explained by the increase in taper from 0.04 ( s2 ) to 0.07 ( f1 ) in the protaper system and by the fewer spirals per unit length in the mtwo files . the overall frequency of fractured instruments in this study was found to be 6.57% ( n = 17 ) . a significant difference was found in the number of fractured instruments between the rotary niti systems ( table 1 ) . the same was observed by bonaccorso et al.27 de alencar et al.12 reported a rotary instrument breakage rate of 3.33% , while alves et al.22 reported 3.88% . instrument fracture may be associated with the operator 's knowledge , experience and technique and with the instrument 's design and surface treatment.28 panitvisai et al.29 determined , by a systematic review and meta - analysis , the impact of a retained instrument on root canal treatment outcome . two case - control studies were identified and included , covering 199 cases . the weighted mean healing rate for teeth with a retained instrument fragment was 91% . the two studies were homogeneous , with a risk difference of the combined data of 0.01 , indicating that a retained fragment did not significantly influence healing . for spili et al.,30 in the hands of experienced operators , endodontic instrument fracture had no adverse influence on the outcome of root canal treatment and retreatment , and the presence of preoperative periapical radiolucency is a more clinically significant prognostic indicator . interestingly , in the present study , just one canal transportation was identified ( k3 group ) . this result contrasts with the results observed in previous studies,10,31,32 which reported higher levels of canal transportation . özer32 compared the shaping ability ( apical transportation and straightening ) of 3 niti rotary instruments ( protaper universal , hero 642 apical and flexmaster ) in curved root canals using cbct and observed that apical transportation occurred with all the instruments despite their non - cutting tips . using a similar method , oliveira et al.31 identified 26 canal transportations , most of them observed after mechanical preparation with nitiflex and k - flexofiles activated by a reciprocating system . the small number of canal transportations identified in the present study may be explained by the fact that the root filling procedure was not performed ; canal transportations are best viewed when the root canals are filled.22 despite ever - present risk factors , the outcome of root canal preparation with rotary niti instruments is mostly predictable . further research should be conducted with the purpose of adding knowledge that will answer remaining questions , such as which is the best instrument .
background : this study investigated procedural errors made during root canal preparation with nickel - titanium ( niti ) instruments , using the cone beam computed tomography ( cbct ) imaging method . materials and methods : a total of 100 human mandibular molars were divided into five groups ( n = 20 ) according to the niti system used for root canal preparation : group 1 - biorace , group 2 - k3 , group 3 - protaper , group 4 - mtwo and group 5 - hero shaper . cbct images were obtained to detect procedural errors made during root canal preparation . two examiners evaluated the presence or absence of fractured instruments , perforations , and canal transportations . the chi - square test was used for statistical analyses . the significance level was set at α = 5% . results : in a total of 300 prepared root canals , 43 ( 14.33% ) procedural errors were detected . perforation was the procedural error most commonly observed ( 58.14% ) . most of the procedural errors were observed in the mesiobuccal root canal ( 48.84% ) . in the analysis of procedural errors , there was a significant difference ( p < 0.05 ) between the groups of niti instruments . the root canals instrumented with biorace had significantly fewer procedural errors . conclusions : cbct permitted the detection of procedural errors during root canal preparation . the frequency of procedural errors was low when root canal preparation was accomplished with the biorace system .
Introduction Materials and Methods Tooth selection Root canal preparation Image evaluation Statistical analysis Results Discussion Conclusions
PMC4595939
in recent times , colon - specific technologies have utilized a single one or a combination of the following primary approaches , with varying degrees of success : ( 1 ) ph - dependent systems , ( 2 ) time - dependent systems , ( 3 ) prodrugs , and ( 4 ) colonic microflora - activated systems [ 1 , 2 ] . among the different approaches to achieve colon - specific drug delivery , the use of polymers specifically degraded by colonic bacterial enzymes ( such as β - glucuronidase , β - xylosidase , β - galactosidase , and azoreductase ) holds promise . microbially activated delivery systems for colon targeting are being developed to exploit the potential of the specific nature of the diverse and luxuriant microbiota associated with the colon compared to other parts of the gastrointestinal ( gi ) tract . these colonic microbiota produce a large number of hydrolytic and reductive enzymes which can potentially be utilized for colonic delivery [ 1 , 2 ] . most of these systems are based on the fact that anaerobic bacteria in the colon are able to recognize the various substrates and degrade them with their enzymes . natural gums are often preferred to synthetic materials due to their low toxicity , low cost , and easy availability . a number of colon - targeted delivery systems based on combinations of ph , polysaccharides and biodegradable polymers have been designed and developed by various research groups for successful delivery of drugs to the colonic region [ 3 , 4 ] . sterculia gum is insoluble in water , hydrates quickly , and swells into a homogeneous hydrogel consistency or mass , which poses difficulty for its use as a polysaccharide coat . however , it seemed to be an interesting polymer for the preparation of hydrophilic matrix tablets [ 6 , 7 ] . nevertheless , sterculia gum in the form of a hydrophilic matrix can not protect the drug from being released in the stomach and small intestine . besides , sterculia gum is expected to retard drug release due to its higher swelling index , and at the same time its degradation by the colonic microflora would make it ideal for delivering drugs to the colon . the property of a higher swelling index would provide a greater surface area for more bacterial enzymatic attack . this property of the gum could be used to produce hydrostatic pressure in the design of a microflora - triggered colon - targeted drug delivery system ( mcdds ) . in this system , the hydrostatic force is produced by osmotic agents and polymer swelling , which concurrently drive the drug out of the system through the pores created by the pore - forming agent in the inner coating after exposure of the system to the colonic fluid [ 8 , 9 ] . in addition , the eudragit rlpo polymer has been reported to increase the permeability to colonic fluid due to the presence of a higher number of quaternary ammonium groups . hence , the objective of the present investigation was to design an mcdds based on the swelling property of sterculia gum and to study the influence of different independent variables on the dependent variables . the design of the mcdds comprises an osmotic tablet core containing the model drug azathioprine ( aza ) , sterculia gum as binder , and other excipients , and an inner semipermeable coating which is overcoated with an enteric layer to provide acid and intestinal resistance . 
the study includes the optimization of a chitosan / eudragit rlpo mixed film coating for colonic delivery of the polysaccharide core , an investigation of the effects of the polymer blend ratio , the concentration of pore former in the coat and the coating thickness on the resulting drug release , and a proposal for the drug release mechanism of the system . the inner layer of chitosan / eudragit rlpo provides the desired intestinal resistance while controlling drug release in the colon . eudragit l100 was deposited as the outermost layer in order to protect the delivery system from the gastric acidic conditions . a multilayered approach was selected , since such a dosage form is less likely to undergo dose dumping , and it may also facilitate the spreading of the drug over the inflamed regions of the colonic lumen . the feasibility of the novel mcdds was studied using aza as a model anti - inflammatory drug via in vitro evaluation of drug release characteristics and in vivo assessment of pharmacokinetics in rabbits [ 11 , 12 ] . citric acid monohydrate , anhydrous lactose , magnesium stearate , disodium hydrogen phosphate ( na2hpo4 ) , and potassium dihydrogen phosphate ( kh2po4 ) were purchased from loba chemie , mumbai , india . eudragit rlpo and eudragit l100 were obtained from rhom pharm ( darmstadt , germany ) . peg 400 , acetone , isopropyl alcohol , 95% ethanol , triethyl citrate , and talc were purchased from rankem , mumbai . microflora degradation studies of sterculia gum were conducted in phosphate buffer solution ( pbs ) , ph 7.4 , containing rat caecal content [ 13 , 14 ] . the caecal contents were dispersed in pbs under an anaerobic environment ( bubbled with co2 gas ) , and the concentration of the caecal contents was adjusted to 4.0 , 8.0 and 12.0% ( w / v ) in the pbs . finely ground sterculia gum powder ( 100 mg ) was added into 10 ml of caecal pbs and incubated at 37c under anaerobic conditions . the ph of the caecal pbs was measured at 2 h intervals up to 8 h using a ph meter . the core tablets of aza , having an average weight of 240 ± 5 mg , were prepared by direct compression using a single - stroke tablet punching machine fitted with 8 mm round standard concave punches . sterculia gum was used as binder cum hydrophilic matrix former , anhydrous lactose as diluent , citric acid as ph - regulating excipient , and magnesium stearate as lubricant [ 15 , 16 ] . in the initial trial , a coating solution of eudragit rlpo ( 10% w / v ) in propan-2-ol : acetone ( 60 : 40 ) containing a 15% w / w or 25% w / w concentration of chitosan was used to apply a semipermeable coat on the core tablet . peg 400 ( 25% of total coating materials ) was added to improve the physicomechanical properties of the eudragit rlpo film . the coating conditions were as follows : stainless steel pan , 200 mm diameter , four baffles ; rate of rotation of the coating pan , 40 rpm ; nozzle diameter of spray gun , 1 mm ; spray rate , 5 ml / min ; spray pressure , 2 bar ; drying temperature , 40c . after coating , the tablets were dried for 8 hours at 35 - 40c in order to remove the residual solvent . a full 3 factorial design ( two factors at three levels ) was used for optimization of the coating solutions [ 18 , 19 ] . the concentration of chitosan was selected by using a central composite design ( ccd ) under design expert software ( version 8.0 ) . the studied factors ( independent variables ) were the concentration of pore former , chitosan ( x1 ) , and the weight gain in coating thickness , eudragit rlpo ( x2 ) . 
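the coating optimization above uses two independent variables at three levels each . design expert is commercial software , so the sketch below only enumerates , in python , the nine coded runs of such a two - factor , three - level layout ; the mapping of coded levels to 15 / 20 / 25% w / w chitosan and 10 / 12 / 14% weight gain is inferred from the batch values quoted later in the text and should be read as an assumption .

from itertools import product

# assumed mapping of coded levels to the values quoted in the text
chitosan = {-1: 15, 0: 20, 1: 25}      # x1: pore former, % w/w
weight_gain = {-1: 10, 0: 12, 1: 14}   # x2: eudragit rlpo weight gain, % w/w

for run, (x1, x2) in enumerate(product(sorted(chitosan), sorted(weight_gain)), start=1):
    print(f"fc{run}: x1 = {x1:+d} ({chitosan[x1]}% chitosan), "
          f"x2 = {x2:+d} ({weight_gain[x2]}% weight gain)")

with this ordering the printout reproduces the batches cited in the text , e.g. fc1 at 15% chitosan / 10% gain and fc7 at 25% chitosan / 10% gain .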
the dependent variables selected for the study included the lag time for drug release up to 2% in scf ( y1 ) and the percent drug release in 12 hours ( y2 ) and 18 hours ( y3 ) . the thickness , hardness , drug content uniformity and weight uniformity were determined in a similar manner as stated for conventional oral tablets in the accredited pharmacopoeia . in order to optimize the coating formula containing different concentrations of pore former , in vitro dissolution studies of cores coated with different proportions of coating materials were carried out in a usp dissolution test apparatus , type i ( campbell electronics , mumbai , india ) in 900 ml of simulated colonic fluid ( scf is phosphate buffer medium , ph 7.4 , containing 4% rat caecal content and 2% w / v β - glucosidase ) for 18 hours under an anaerobic environment [ 20 , 21 ] . aliquots of dissolution fluid were analyzed at specified time intervals to determine the release of aza by uv - visible spectrophotometer at a wavelength of 281 nm . the response values ( lag time in hours , and % drug release in 12 hours and 18 hours , resp . ) of the coated tablets based on the 3 factorial design were subjected to analysis by a response surface reduced quadratic model with the help of design expert software ( version 8.0 ) . statistical validity of the polynomial was established on the basis of the anova provision in the design expert software , and significant terms ( p < 0.05 ) were chosen for the final equations [ 18 , 22 ] . the optimized chitosan - eudragit rlpo coated tablets were further overcoated with an enteric coating using 10% w / v of eudragit l100 in 95% ethanol . eudragit l100 was dissolved in 95% ethanol under high stirring conditions until a clear solution was obtained . triethyl citrate ( tec ) , 10% w / w of total dry polymer , was added as plasticizer and talc ( 1.5% w / w of dry polymer ) as a glidant . dissolution data of the optimized formulation were fitted to various mathematical models in order to describe the mechanism of drug release [ 24 , 25 ] . the correlation coefficient ( r ) was taken as the criterion for choosing the most appropriate model . the selected formulations were tested for a period of 8 weeks at different storage conditions of 25c and 40c with 60% rh and 75% rh , to evaluate their drug content , hardness , and in vitro dissolution rate . in the present method , the plasma 6-mercaptopurine ( 6-mp ) rather than aza concentration was measured because , after oral administration , aza is quickly converted into its active metabolite 6-mp . the 6-mp concentration in plasma was determined according to the hplc method reported by shao - jun et al . the hplc system consisted of a rheodyne isocratic pump ( model - lc-10 , shimadzu corp . , kyoto , japan ) , a model 2250 pump ( bischoff , germany ) , and a uv detector ( model - spd , shimadzu corp . , kyoto , japan ) set at a wavelength of 325 nm ( λmax ) . the samples were chromatographed on a reverse phase hypersil ods c18 column ( 5 μm , 25 cm × 4.6 mm i.d . , thermo electron company , bellefonte , north america ) protected with a guard column ( 40 × 4 mm ) packed with the same material . the mobile phase consisted of 80 parts of 0.01 m kh2po4 and 20 parts of acetonitrile ( 80 : 20 , v / v , ph 4.5 ) . it was pumped at a flow rate of 1 ml / min for a run time of 10 min under the experimental conditions , with an injection volume of 20 μl [ 28 , 29 ] . the column was thermostated at an ambient temperature of 30 ± 2c throughout the study . 
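the responses were modelled with a reduced quadratic response - surface equation and checked by anova inside design expert . as a rough stand - in for that workflow , the sketch below fits a full quadratic surface y = b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2 by ordinary least squares on made - up lag - time values for the nine coded runs ; the data and fitted coefficients are illustrative only , not the study 's .

import numpy as np
from itertools import product

# coded factor levels for the nine runs (x1 = chitosan level, x2 = coating weight gain)
x1, x2 = np.array(list(product((-1, 0, 1), repeat=2))).T

# hypothetical lag-time responses (h): shorter with more chitosan, longer with a thicker coat
y = np.array([0.60, 0.66, 0.72, 0.30, 0.35, 0.42, 0.15, 0.20, 0.28])

# design matrix for the quadratic surface and its least-squares solution
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

for name, b in zip(["b0", "x1", "x2", "x1*x2", "x1^2", "x2^2"], coef):
    print(f"{name:>6}: {b:+.4f}")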
the pharmacokinetics of the marketed tablet ( mkt ) , enteric coated tablet ( ec ) , and mcdds of aza were assessed and compared in rabbits in a randomized , two - period crossover study . six rabbits , each weighing from 1.5 to 2.0 kg , were used in this study . the rabbits were fed a standard laboratory chow diet with water and fasted overnight before the experiments . the animals used in the experiments received care in compliance with the principles of laboratory animal care and the guide for the care and use of laboratory animals . experiments followed a protocol approved by the institutional animal ethical committee of the department of pharmaceutical sciences , dibrugarh university . the mkt , ec , and mcdds ( containing 50 mg / kg of drug ) were orally administered to rabbits . at predetermined time intervals , blood samples ( 2.0 ml ) were collected from the marginal ear vein into heparinized tubes and centrifuged at 5000 rpm for 15 min at 4c to separate plasma . the plasma samples , 0.2 ml , were deproteinized with 2.0 ml of a methanol and acetonitrile mixture ( 1 : 1 , v / v ) , vortexed for 5 min , and centrifuged at 6000 rpm for 15 min , and the supernatants were collected . the residues were reconstituted in 200 μl of mobile phase , and then 20 μl of each solution was injected into the hplc column for analysis of the drug in vivo . blood sampling time points were 0 , 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 12 , 14 , 16 , 18 , 20 , 22 , and 24 hours after administration of the ec and mcdds . for the mkt tablet of aza , blood samples ( 2.0 ml ) were drawn at 0 , 0.5 , 1 , 2 , 4 , 5 , 6 , and 24 h after administration . the drug concentration of the plasma samples was determined using a validated hplc procedure as described by shao - jun et al . pharmacokinetic parameters were calculated by noncompartmental analysis based on statistical moment theory using microsoft excel software . the pharmacokinetic parameters , such as the maximum plasma concentration ( cmax ) and the time of maximum concentration ( tmax ) , were obtained directly from the plasma concentration - time plots . the area under the plasma concentration - time curve up to the last time point ( t ) ( auc0 - t ) , the area under the curve extrapolated to infinity ( auc0 - ∞ ) and the area under the first moment curve extrapolated to infinity ( aumc0 - ∞ ) were calculated using the linear trapezoidal rule . in all cases , a value of p < 0.05 was considered statistically significant . microflora degradation studies of sterculia gum revealed that the ph of the caecal pbs decreased markedly from ph 7.4 to 5.0 after incubation for 2 h with sterculia gum . the rate of decrease of ph depended on the concentration of caecal contents within the 8 h of incubation ( figure 1 ) . the decrease in ph was due to the appearance of degradation products of sterculia gum , such as organic acids , produced by the bacterial enzymes present in rat caecal contents . the weight of each tablet was determined to be within the range of 240 ± 5 mg in order to maintain a relatively constant volume and surface area . the core tablets ( 240 mg each ) were prepared at an average tensile strength of 4.0 kg / cm² , an average diameter of 8 mm and a thickness of 4 mm . the incorporation of citric acid in the core composition increased the hydration of a large amount of the gum and expanded its volume to a great extent . the weight variation was in the range of 275 ± 2.09 to 287 ± 1.98 mg and the friability was less than 0.5% . uniformity in drug content was found among different batches of the tablets , and the drug content was more than 95% . 
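the noncompartmental parameters above ( cmax , tmax , auc , aumc ) were computed in excel with the linear trapezoidal rule . the sketch below mirrors those calculations in python on a made - up plasma 6-mp profile at the mcdds sampling times ; the concentrations are invented ( only loosely shaped like the reported curve ) and the extrapolation of auc / aumc to infinity is omitted for brevity , so mrt here is computed from the truncated moments only .

import numpy as np

def trapezoid(y, x):
    # linear trapezoidal rule, as used for auc and aumc
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2.0))

# sampling times (h) stated for the ec / mcdds arms and hypothetical 6-mp concentrations (ng/ml)
t = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 14, 16, 18, 20, 22, 24], dtype=float)
c = np.array([0, 0, 0, 0, 0, 0, 5, 60, 180, 450, 400, 300, 210, 140, 90, 55, 30, 15], dtype=float)

cmax, tmax = c.max(), t[c.argmax()]
auc_0_t = trapezoid(c, t)
aumc_0_t = trapezoid(c * t, t)
mrt = aumc_0_t / auc_0_t

print(f"cmax = {cmax:.0f} ng/ml at tmax = {tmax:.0f} h")
print(f"auc(0-t) = {auc_0_t:.0f} ng*h/ml, aumc(0-t) = {aumc_0_t:.0f} ng*h^2/ml, mrt = {mrt:.1f} h")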
the core tablets were successfully coated by a conventional pan coating technique with varying proportions of chitosan - eudragit rlpo provided by the central composite design . the coating compositions of the various formulations under the 3 factorial design are presented in table 2 . the results of the in vitro dissolution studies of different batches of coated tablets indicated that , on increasing the concentration of chitosan from 15% to 25% w / w while keeping the weight gain in thickness of the polymers constant at 10% w / w , the lag time ( the time required for drug release up to 2% in scf ) was significantly decreased from 0.60 h to 0.25 h ( fc1 < fc4 < fc7 ) . the lag time was determined by separately running dissolution studies of the chitosan / eudragit coated tablets in scf for one hour at minimum time intervals . the amount of chitosan present in the eudragit coat was the key factor for such lag times : a lower amount of chitosan shows a longer lag time , and a higher amount shows a shorter lag time . to study the effect of the concentration of chitosan , its concentration in the coating solution was kept at 15% w / w for batch fc1 , 20% w / w for fc4 , and 25% w / w for fc7 . the result of the in vitro release profile from these formulations is shown in figure 2 . the formulation fc7 , containing the highest concentration ( 25% w / w ) of chitosan in the coating composition , released more than 90% of the aza after 18 h of the dissolution study . this might be due to the reason that , with an increase in the amount of chitosan ( fc7 > fc4 > fc1 ) , the coat became more susceptible to bacterial attack , creating pores immediately and resulting in a shorter lag time ( 0.15 h ) for drug release . it was observed that an increase in the level of weight gain from 10% to 12% and 14% in the batches fc1 , fc2 , and fc3 , keeping the concentration of chitosan constant at 15% w / w , made the chitosan particles less susceptible to bacterial attack , resulting in a longer lag time and a lesser percentage of drug released in 18 h owing to less accessibility of the chitosan particles across the eudragit coat by the colonic bacteria . figure 3 shows that as the coating thickness was increased , drug release was decreased , as evidenced by the difference factor f1 value , which was lower than 15 . for the calculation of the f1 and f2 ( similarity factor ) values , only one data point at which more than 85% of the drug had been released was taken into consideration . anova of the dependent variables indicated that the assumed regression models were significant ( p < 0.0001 ) and valid for each considered response ( table 3 ) . the response values of the coated tablets based on the factorial design generated a mathematical model , which indicated that both the level of pore former and the coating thickness had a significant influence on the percentage of drug release in the simulated colonic fluid at ph 7.4 . the equations of the responses were found to be as follows : ( 1 ) y1 = 0.30 − 0.12 x1 + 0.094 x2 + 0.025 x1x2 + 0.069 x1² + 0.019 x2² ; y2 = 47.91482 + 0.93240 x1 − 2.5969  the above second - order polynomial equations represent the quantitative effects of the independent variables ( x1 and x2 ) upon the responses ( y1 , y2 , and y3 ) . the validity of the above equations was justified by substituting the values of x1 and x2 in ( 1 ) to obtain the predicted values of y1 , y2 , and y3 . the observed and predicted values for the y2 response were found to be in good agreement ( table 4 ) . 
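the effect of coating level on release was summarised above with the difference factor f1 ( reported as below 15 ) and the similarity factor f2 . the sketch below implements the standard moore - flanner definitions of f1 and f2 on two made - up cumulative - release profiles ; the percentages are illustrative and are not the profiles in figure 3 .

import numpy as np

def f1_f2(reference, test):
    # standard difference (f1) and similarity (f2) factors for two dissolution profiles
    r = np.asarray(reference, dtype=float)
    t = np.asarray(test, dtype=float)
    f1 = 100.0 * np.sum(np.abs(r - t)) / np.sum(r)
    f2 = 50.0 * np.log10(100.0 / np.sqrt(1.0 + np.mean((r - t) ** 2)))
    return f1, f2

# hypothetical % released at common time points for a thin and a thick coat
thin_coat = [8, 20, 38, 55, 70, 82, 90]
thick_coat = [5, 15, 30, 46, 60, 73, 84]

f1, f2 = f1_f2(thin_coat, thick_coat)
print(f"f1 = {f1:.1f} (f1 < 15 is usually read as similar profiles)")
print(f"f2 = {f2:.1f} (f2 > 50 is usually read as similar profiles)")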
three - dimensional response surface plots were drawn to estimate the effects of the independent variables on each considered response ( figure 4 ) . the best colonic drug delivery system based on coating with microporous eudragit rlpo containing an optimum amount of chitosan would be a system that could prevent drug release in the higher parts of the small intestine and deliver the drug only in the colonic region . chitosan particles in the rlpo coat remained undigested in the intestinal fluid due to the absence of bacterial enzymes , but were degraded in the colonic fluid due to the presence of vast numbers of anaerobic bacteria , allowing drug release to occur . therefore , the concentration of chitosan in the eudragit coat could be the key factor for the lag time . the lag time was inversely related to the level of chitosan in the eudragit coat . the lag time in the colonic environment ( ph 7.4 ) was considered as response y1 , and the optimum duration for this response was considered to be 30 minutes . during this lag time , the chitosan in the eudragit coat comes in contact with the colonic bacteria and forms in situ delivery pores for release of the drug . thus , the percent of drug released in 12 h and 18 h was considered as responses y2 and y3 , with constraints of a minimum of 40% and 80% release , respectively . a suitable formulation which could meet these target responses would be able to release the maximum amount of drug in the colon despite its 2 h lag time in simulated gastric fluid ( sgf , 0.1 m hcl at ph 1.2 containing 3.2 mg / ml pepsin ) and 4 h lag time in simulated intestinal medium ( sif , phosphate buffer media at ph 6.8 containing 5 mg / ml pancreatin ) . the best formulation corresponded to 18.96% of chitosan ( pore former ) and 11.3% of coating thickness of the eudragit rlpo film , which provided the desired release as shown in figure 5 . the above quantities ( x1 and x2 ) of the formulation were substituted in ( 1 ) to obtain the predicted responses . the validity of the optimization procedure was confirmed by preparing a new batch of coating formulation with the concentrations provided by the software , and the observed responses were found to be inside the constraints and close to the predicted responses . results of the in vitro dissolution study showed that overcoating with 10% w / w of enteric coating material ( eudragit l100 , which dissolves above ph 6.0 ) provided the desired acid and intestinal resistance of the optimized chitosan - eudragit rlpo coated tablet . figure 6 shows the in vitro release profile of the optimized mcdds in sequential phosphate buffer media at different ph values , releasing more than 90% of the drug within a 24 h duration . release kinetic data revealed that the optimized mcdds fitted well into a first - order model with an apparent lag time of 6 hours , followed by higuchi spherical matrix release . it was evident that the r value ( 0.9888 ) was higher for the first - order kinetic model as compared to the other release models . the reason for the first - order kinetic release was the presence of enzyme - degradable chitosan in the eudragit rlpo film , which led to the formation of in situ orifices by bacterial enzymes and the leaching out of drug into the surrounding medium from the central polysaccharide core tablet containing sterculia gum . 
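the conclusion above that release follows first - order kinetics with a 6 h lag , with higuchi next by correlation coefficient , comes from fitting the dissolution data to several models . the sketch below shows one way such a comparison could be run with scipy on a made - up cumulative - release profile , keeping the 6 h lag fixed ; it is not the study 's data set or its exact regression procedure .

import numpy as np
from scipy.optimize import curve_fit

TLAG = 6.0  # apparent lag time (h) reported for the optimized mcdds

# hypothetical cumulative % released in the sequential-ph dissolution test
t = np.array([0, 2, 4, 6, 8, 10, 12, 16, 20, 24], dtype=float)
q = np.array([0, 0, 1, 2, 18, 35, 48, 70, 85, 93], dtype=float)

def first_order(t, qmax, k):
    # no release before the lag, then first-order approach to the releasable amount qmax
    return np.where(t < TLAG, 0.0, qmax * (1.0 - np.exp(-k * (t - TLAG))))

def higuchi(t, kh):
    return np.where(t < TLAG, 0.0, kh * np.sqrt(np.clip(t - TLAG, 0.0, None)))

def r_value(y, y_hat):
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return np.sqrt(max(0.0, 1.0 - ss_res / ss_tot))

p_fo, _ = curve_fit(first_order, t, q, p0=[100.0, 0.1])
p_hi, _ = curve_fit(higuchi, t, q, p0=[20.0])

print(f"first-order with lag: r = {r_value(q, first_order(t, *p_fo)):.4f}")
print(f"higuchi with lag:     r = {r_value(q, higuchi(t, *p_hi)):.4f}")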
when the majority of the chitosan particles in the eudragit coat were degraded by colonic bacterial enzymes , the coat ruptured due to the swelling pressure of the gum core , and a gradual increase in drug release was observed , as swelling increases the surface area of sterculia gum available for bacterial action . from the stability study , the developed mcdds was found to be stable , because there was no significant change in the percentage drug content and hardness after six months of stability study stored at 40 ± 2c / 75 ± 5% rh . a novel , simple , precise , selective , specific , reproducible , and low cost routine reverse phase hplc method was developed and validated as per ich guidelines . there were no interfering peaks observed at the retention times of 6-mp and the is ( internal standard ) . a good resolution was obtained between 6-mp and the is , with retention times of 7.88 minutes for 6-mp and 4.9 minutes for the is . the method was found to be linear ( r = 0.999 ) within the analytical range of 53.32 to 4975.00 ng / ml . maximum recovery of the drug was obtained by using a methanol : acetonitrile mixture ( 1 : 1 ) . the results of the method validation showed the method to be accurate and reproducible , and the drug was stable in rabbit plasma for up to a one month period at room temperature and over three freeze - thaw cycles . mean plasma 6-mp concentration versus time profiles after a single oral dose of mkt , ec , and mcdds are depicted in figure 7 . for mkt , the peak plasma concentration ( cmax ) of 6-mp was obtained within 1.5 h of administration , indicating the immediate absorption of aza from the gastrointestinal tract and quick conversion into its active metabolite , 6-mp , in blood . the cmax value of 6-mp following oral administration of the mkt tablet was found to be 1430.08 ng / ml at the time of maximum concentration ( tmax ) of 1.5 h. the cmax value of 6-mp for the ec tablet of aza without sterculia gum was found to be 847.5 ng / ml at a tmax of 5.0 h. from the results of the in vitro release study , it was observed that the drug was released after 2.0 h of the dissolution study , which was quite desirable , due to the fact that the drug would be released from the tablets after passing the stomach region , as the tablets were enteric coated . the results of the in vivo studies of the ec tablets showed that the drug was not released in the stomach up to 2.0 h , and therefore it gave a tmax of 5.0 h. thus , the in vivo finding has a good correlation with the in vitro results . a lag time of 6.0 h was observed for the mcdds , which revealed that the tablet had passed through the git and that the drug was released only after reaching the colon , appearing in plasma as 6-mp . therefore , the cmax value of 6-mp for the mcdds was found to be 453.56 ng / ml at a tmax of 9.0 h after oral administration . the results of anova revealed that there was a significant difference in auc0 - ∞ between the mcdds , ec and mkt formulations ( p < 0.05 ) . the results explained that the mkt formulation was more rapidly absorbed from the upper gastrointestinal tract of the rabbit , but the ec and mcdds were not absorbed from the upper git , due to which they showed greater values of auc0 - ∞ , as shown in table 5 . it is evident that the auc for the mcdds was higher as compared to the reference ec and mkt formulations ( mcdds < ec < mkt ) . the result suggests that the extent of absorption of aza from the developed mcdds was decreased from the large intestine , but increased from the upper part of the git as seen in the case of the ec and mkt formulations . 
from the in vivo studies , the cmax of the mcdds was found to be almost half that of the ec tablet , which did not contain sterculia gum . the longer tmax value ( 9.0 h ) and lower cmax value ( 453.56 ng / ml ) of the mcdds as compared to the reference formulations proved that the mcdds released the drug only in the colonic region of the rabbit intestine . this reveals localization of the drug in the colonic mucosa from the mcdds , thereby possibly reducing the risk of systemic toxicity . the microflora degradation study revealed that sterculia gum can be used to release drug in the colonic region by utilizing the action of enterobacteria . the developed mcdds exhibited gastric and small intestinal resistance but was susceptible to bacterial enzymatic attack , and the potential of the system as a carrier for drug delivery to the colon is confirmed . the swelling property of sterculia gum can be used to produce hydrostatic pressure inside the tablet if it is coated with a semipermeable membrane , and can be used to target the drug to the colon . the chitosan - eudragit rlpo mixed film coating provided the favourable characteristics to the sterculia gum core tablets to deliver the drug directly into the colon . chitosan in the mixed film coat was found to be degraded by the enzymatic action of the microflora in the colon . the degradation of chitosan was the rate - limiting factor for drug release in the colon . drug release from the mcdds was directly proportional to the concentration of chitosan , but inversely related to the weight gain in thickness of the eudragit rlpo coat . the enteric layer of eudragit l100 could protect the eudragit rlpo membrane containing chitosan from pore formation or rupture before the scf dissolution procedure . drug release from the optimized mcdds fitted well into a first - order kinetic model followed by the higuchi spherical matrix release model . the hplc method developed shows good resolution for evaluating the pharmacokinetic parameters of the drug . pharmacokinetic studies revealed that the mrt value ( 13.81 h ) was higher for the mcdds as compared to the other two reference formulations , which showed 3.60 h for the mkt and 6.62 h for the ec tablets , respectively . finally , the in vivo evaluation of the mcdds in rabbits showed a delayed tmax , prolonged absorption time , decreased cmax , and decreased absorption rate constant ( ka ) , indicating that the drug was slowly absorbed from the colon , making the drug available for local action in the colon and thereby reducing the risk of systemic toxicity of the drug as compared to other dosage forms .
the purpose of this study is to explore the possible applicability of sterculia urens gum as a novel carrier for a colonic delivery system of a sparingly soluble drug , azathioprine . the study involves designing a microflora - triggered colon - targeted drug delivery system ( mcdds ) which consists of a central polysaccharide core that is coated to different film thicknesses with blends of chitosan / eudragit rlpo and overcoated with eudragit l100 to provide acid and intestinal resistance . the microflora degradation property of the gum was investigated in rat caecal medium . the drug release study in simulated colonic fluid revealed that the swelling force of the gum could concurrently drive the drug out of the polysaccharide core due to the rupture of the chitosan / eudragit coating in the microflora - activated environment . chitosan in the mixed film coat was found to be degraded by the enzymatic action of the microflora in the colon . release kinetic data revealed that the optimized mcdds fitted well into a first - order model , with an apparent lag time of 6 hours , followed by higuchi release kinetics . the in vivo study in rabbits showed a delayed tmax , prolonged absorption time , and decreased cmax and absorption rate constant ( ka ) , indicating a reduced systemic toxicity of the drug as compared to other dosage forms .
1. Introduction 2. Materials and Methods 3. Results and Discussion 4. Conclusions
PMC3018347
mycobacterium tuberculosis is an extraordinarily successful human pathogen with the ability to replicate within the normally hostile environment of host macrophages . after being phagocytosed by the macrophage , m. tuberculosis resides in a membrane - bound vacuole , the phagosome , which normally undergoes maturation into the phagolysosome that is essential for eliminating invading microbes and for antigen presentation.(1 ) however , m. tuberculosis is able to arrest phagosomal maturation by interfering with ca²⁺ signaling and trafficking of the rab family of small gtpases , two important processes for organelle membrane fusion . when residing within a phagosome , the live bacilli secrete specific proteins such as tyrosine phosphatases to reduce the phagosomal level of phosphatidylinositol 3-phosphate or inhibit host proteins regulating vacuolar sorting , which all lead to impaired phagolysosomal fusion . arrest of phagosomal maturation ( or block of phago - lysosome fusion ) is critical for m. tuberculosis persistence in host macrophages.(1 ) apart from secreted proteins , the exotic cell wall components of pathogenic mycobacteria are thought to be key modulators of host immune processes , but in most cases , their molecular effects on host cells are not well understood.(7 ) the best characterized are unique mycobacterial lipoglycans termed lipoarabinomannans ( lam ) , which are noncovalently associated with the bacterial plasma membrane and extend to the exterior of the cell wall . notably , pathogenic m. tuberculosis produces mannose - capped lipoarabinomannan ( manlam ) structures,(10 ) whereas the fast - growing nonpathogenic species mycobacterium smegmatis synthesizes lam molecules capped with phosphatidyl - myo - inositol ( termed pilam)(11 ) ( figure 1 ) . for instance , macrophages show significantly reduced phagosome - lysosome fusion after engulfing manlam - coated microbeads.(12 ) manlam interacts with the mannose receptor and blocks the rise in cytosolic ca²⁺ that would otherwise accompany mycobacterial entry into macrophages . free manlam can insert into host cell membranes , leading to membrane reorganization and disruption of signaling pathways.(15 ) by contrast , pilam has none of these effects . instead , it is an agonist of toll - like receptor 2 , which results in secretion of various cytokines and activation of apoptotic pathways , though the notion of its proinflammatory activity has been challenged recently.(19 ) motivated by reports suggesting different effects of manlam and pilam on macrophages , we sought to probe how the two mycobacterial lipoarabinomannans affect the molecular properties of host phagosomes . previously , we developed a method to purify the membrane fraction from latex bead - containing ( lbc ) phagosomes in order to profile its proteome using mass spectrometry.(20 ) here we modified our method by combining tube - gel digestion(21 ) and itraq labeling(22 ) to study quantitative changes in the macrophage phagosomal proteome upon exposure of cells to manlam or pilam . escherichia coli - derived lipopolysaccharide ( lps ) was used as a third stimulus in order to identify lam - specific effects . from a total list of 823 proteins found in the phagosomal membranes , we identified 47 proteins that were significantly up- or down - regulated ( > 1.25-fold , p < 0.05 ) by exposure of macrophages to manlam but not to the other two lipoglycans . 
motivated by this observation , we investigated the effects of mycobacterial lams on lc3 recruitment to the phagosome as well as autophagy activation in cultured macrophages . manlam inhibited chemical - induced accumulation of autophagosomes , suggesting a previously unrecognized function of this virulence factor in undermining host defense responses . the rat anti - lamp1 mab and mouse anti - syntaxin 6 mab were obtained from bd biosciences . the goat anti - eea1 pab and goat anti - cathepsin d pab were obtained from santa cruz biotechnology . the rabbit anti - lc3b pab and mouse anti - tubulin mab were obtained from sigma . acrylamide / bisacrylamide solution ( 40% , 29:1 ) , ammonium persulfate ( aps ) , and tetramethylethylenediamine ( temed ) were obtained from bio - rad . latex beads ( 1.0 μm microspheres , polysciences ) were washed twice in 0.05 m carbonate - bicarbonate buffer ( ph 9.6 ) by centrifugation at 15 000 g for 5 min in 1.5-ml presiliconized low - retention microtubes ( fisher ) . the beads in each tube were then resuspended in 900 μl of carbonate - bicarbonate buffer before a certain amount of a specific lipoglycan was added . usually , 180 μg of manlam or pilam , or 1.8 μg of lps , was used to coat a total of 4.55 10 latex beads . the beads and lipoglycans were incubated for 1 h at 37 c on an eppendorf thermomixer ( shaking at 1400 rpm ) . the beads were washed once and then incubated in 1 ml of pbs buffer containing 5% bsa ( sigma , endotoxin tested ) for 0.5 h at 37 c to block nonspecific binding sites . the latex beads were washed with 0.5% bsa in pbs , resuspended in 1 ml of rpmi-1640 cell culture media , and stored at 4 c until use ( stable for 1 week ) . the presence of mannose residues on latex beads after manlam coating was confirmed by immunofluorescence microscopy using fitc - conjugated con a. the other lipids were coated onto beads using the same protocol , which was presumed to be effective for all lipoglycans . the murine macrophage cell line raw 264.7 was cultured as a monolayer in rpmi-1640 ( gibco , formulated with hepes and glutamine ) supplemented with 10% fetal bovine serum and 100 units / ml penicillin / streptomycin . cells were first chilled with 4 c pbs for 5 min in order to synchronize phagocytosis . four populations of cells were incubated ( 2 h , 37 c ) with latex beads coated with a specific bacterial lipoglycan or with control beads without lipid coating , at a multiplicity of infection ( moi ) of 50:1 , to generate phagosomes . after gentle cell lysis using a dounce homogenizer to reach 90 - 95% breakage , the bead - containing phagosomes were isolated by ultracentrifugation on a sucrose gradient as described by desjardins and co - workers.(23 ) the latex bead - containing ( lbc ) phagosome fraction was collected from the top of the sucrose gradient ( 10 - 20% interface ) and resuspended in pbs containing protease inhibitor cocktail ( calbiochem ) . the lbc - phagosome fraction was washed by ultracentrifugation at 40 000 g for 20 min . each lbc - phagosome pellet from a given treatment was resuspended in 1.5 ml of 0.2 m na2co3 ( ph 11.0 ) containing protease inhibitors . the lbc - phagosomes were disrupted by passing the suspension 5 - 7 times through the needle of a 1-ml syringe ( 25 g ) . the resulting sample was kept on ice for 30 min before the membrane fraction was pelleted by centrifugation for 45 min at 200 600 g at 4 c . the pellet was washed with a low - salt buffer ( 10 mm triethylammonium bicarbonate ( teab ) , sigma ) . 
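the phagocytosis step above is defined by a multiplicity of infection of 50 beads per macrophage , so the amount of bead suspension to add simply scales with the number of cells plated . the tiny sketch below is just that arithmetic ; the cell count and the bead - stock concentration are hypothetical , not values from the paper .

MOI = 50                       # beads per macrophage, as stated in the protocol
cells_plated = 2.0e7           # hypothetical raw 264.7 cell count for one dish
beads_per_ml_stock = 5.0e8     # hypothetical concentration of the coated-bead suspension

beads_needed = MOI * cells_plated
volume_ml = beads_needed / beads_per_ml_stock
print(f"{beads_needed:.1e} beads needed -> {volume_ml:.1f} ml of bead suspension")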
a third centrifugation was performed at 100 000 g for 15 min to acquire the final membrane pellet . the phagosomal pellet samples were stored at −80 c prior to protein extraction . a solution of 4% sds in 400 mm teab buffer ( 50 μl ) was used to resuspend each membrane pellet , and the resuspended solution was agitated for 30 min at 10 c on a thermomixer to extract proteins from the membrane fraction . after centrifugation at 14 000 g for 10 min , the supernatant was collected and diluted with additional ddh2o so that the final protein extract was in 1% sds and 100 mm teab buffer ( 4-fold dilution ) . for measuring protein concentration , we diluted the extracts by another 5-fold with 50 mm tris - hcl buffer before carrying out the bca assay ( pierce ) . for in - solution digestion , 4 μg of protein derived from the phagosome membrane extract was diluted either 5-fold or 20-fold in 100 mm teab , resulting in a final sds concentration of 0.2% or 0.01% , respectively . each protein sample was treated with tcep ( 5 mm , 37 c for 30 min ) and iodoacetamide ( 15 mm , for 30 min in the dark ) to reduce and alkylate cysteine residues , then digested with trypsin overnight ( e / s = 1:20 w / w ) . in addition to this single enzyme digestion , we tested the efficiency of dual enzyme digestion with both lys - c and trypsin . a 5-fold diluted sample was first digested with lys - c ( promega ) for 4 h at 37 c ( e / s = 1:80 w / w ) before trypsinization under the same conditions as above . all digests were acidified with 2% formic acid and desalted with a c18 ziptip ( millipore ) . to test the efficiency of in - tube digestion , 5 μg of protein derived from the phagosome membrane extract was used ; the protein sample was mixed with 5 μl of acrylamide / bisacrylamide ( 40% , 29:1 ) , 0.7 μl of 1% aps , and 0.3 μl of temed . the gel impregnated with membrane proteins was cut into small pieces and washed three times with 50% acetonitrile ( acn ) in 100 mm teab buffer . the gel pieces were then dehydrated with pure acn and dried using a speedvac . a total of 0.5 μg of trypsin ( in 25 mm teab buffer , e / s = 1:10 w / w ) was absorbed by the dried gel , which was then incubated overnight at 37 c . peptides were extracted from the gel with 50 μl of 25 mm teab buffer , 100 μl of 0.1% tfa , and 150 μl of acn in 0.1% tfa , sequentially . the solutions were combined and concentrated using a speedvac . for tube - gel digestion of phagosome membrane proteins before itraq labeling , 20 μg of protein from each extract from cells treated with different lipid - coated beads was incorporated into gels using the same protocol , except that 10 mm methyl methanethiosulfonate ( mmts ) replaced iodoacetamide and the reagent amounts were increased 4-fold . the residual protein solution on top of the gel was removed and subjected to a second solidification process . each peptide extract was concentrated to roughly 20 μl , and 1 m teab buffer was added to adjust the final buffer concentration to 400 mm . four - plex itraq ( applied biosystems ) reagents were reacted with individual protein digests according to the manufacturer 's protocol . the phagosome membrane protein preparation and digestion procedures were performed twice to acquire two biological replicates . for comparison of the in - solution and tube - gel digestion efficiencies , 0.5 μg of each protein digest was analyzed by lc - ms using an esi linear ion trap mass analyzer ( ltq , thermo , inc . ) in a data - dependent acquisition mode . 
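the digestion conditions above are specified as enzyme - to - substrate ( e / s ) weight ratios : 1:20 for in - solution trypsin , 1:80 for the lys - c predigestion and 1:10 for the tube - gel trypsin step . the short sketch below just converts those stated ratios into enzyme masses for a few protein amounts used in the workflow ; it is arithmetic on the stated numbers , nothing more .

# e/s weight ratios quoted in the protocol
ratios = {
    "in-solution trypsin": 1 / 20,
    "lys-c predigestion": 1 / 80,
    "tube-gel trypsin": 1 / 10,
}

for step, ratio in ratios.items():
    for protein_ug in (4, 5, 20):  # protein amounts mentioned at different points in the text
        print(f"{step:>20}: {protein_ug:>2} ug protein -> {protein_ug * ratio:.2f} ug enzyme")

for the tube - gel step this reproduces the 0.5 μg of trypsin quoted for 5 μg of protein .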
lc separation was performed on a house - made microcapillary column ( 0.100 × 100 mm , 3 μm magic c18 packing material , michrom bioresources ) at a flow rate of 300 nl / min using buffers of 0.1% formic acid ( fa ) in water ( a ) and 0.1% fa in acn ( b ) . the gradient was 2 - 10% b for 5 min , 10 - 40% b for 60 min , 40 - 90% b for 10 min , and 90% b for 10 min . the major parameters for ltq data acquisition were : scan range , 400 - 2000 m / z ; precursor ion selection , six most abundant peaks per scan for ms / ms ; minimal ion signal , 500 ; and normalized collision energy , 35.0% . the itraq - labeled peptide mixtures from tube - gel digestion of two experimental replicates were separated by two - dimensional liquid chromatography and analyzed by esi - q - tof mass spectrometry . briefly , the peptide mixture was separated by off - line strong cation exchange ( scx ) chromatography using an ultimate hplc with a uv detector ( dionex - lc packings , sunnyvale , ca ) . labeled samples were resuspended in scx running buffer ( 5 mm kh2po4 , 25% acn , 0.1% fa ) and loaded onto a polylc polysulfoethyl a column ( 2.1 mm × 200 mm , the nest group , southborough , ma ) . peptides were eluted with increasing concentrations of 800 mm kcl , 5 mm kh2po4 , 25% acn , and 0.1% fa using a three - step gradient : 0 - 10% for 10 min , 10 - 25% for 30 min , and 25 - 100% for 10 min . fifteen fractions were collected at a flow rate of 300 μl / min according to the peaks observed by uv absorption at 214 nm . fractions were partially evaporated to remove acn on a speedvac and desalted using c18 macrospin columns ( the nest group , southborough , ma ) . each fraction was injected onto a pepmap100 trapping column ( 0.3 mm × 5 mm ) . reversed - phase separation was performed on an lc packings pepmap c18 column ( 3 μm , 0.075 × 150 mm ) at a flow rate of 300 nl / min using buffers of 2% acn , 0.1% fa ( a ) and 80% acn , 0.1% fa ( b ) . the gradient was 0 - 35% b for 100 min , 30 - 100% b for 10 min , and 100% b for 10 min . the samples from one experiment were injected into a qstar pulsar - i hybrid quadrupole tof ( applied biosystems , framingham , ma ) , while the samples from another replicate were analyzed by a qstar elite hybrid quadrupole tof ( applied biosystems , framingham , ma ) . the major parameters for q - tof data acquisition were : ms scan range , 350 - 1800 m / z ; ms / ms scan range , 70 - 2000 m / z ; precursor ion selection , three most abundant peaks per scan for ms / ms ; minimal ion counts , 30 ; and automated collision energy was applied ( fragment intensity multiplier = 2.0 ) . lc - ms / ms data acquired using an ltq ion trap mass spectrometer ( thermo ) were searched against the mouse swiss - prot protein database using the sequest algorithm provided with bioworks 3.2 . sequest results were filtered by xcorr ( + 1 > 1.8 ; + 2 > 2.5 ; + 3 > 3.5 ) , deltacn > 0.8 and the requirement of at least two different peptides from a protein . parent and product ion mass errors were set at 1.2 and 0.8 da , respectively . lc - ms / ms data acquired on a q - tof for itraq - based quantification were searched against a mouse international protein index ( ipi ) database ( mouse , version 3.27 , 56 000 entries ) using the paragon algorithm within proteinpilot version 2.0 software ( applied biosystems ) . the major parameters in the software were explained by pierce et al.(24 ) protein identification was based on a confidence level > 95% and at least two different peptides assigned to the protein . 
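the sequest filtering above keeps a peptide - spectrum match only if its xcorr clears a charge - dependent threshold and its deltacn clears a cut - off , and keeps a protein only with at least two different peptides . the sketch below restates those stated criteria as a small python routine over hypothetical matches ; the accession numbers and scores are invented , and this is an illustration of the rules , not the bioworks implementation .

XCORR_MIN = {1: 1.8, 2: 2.5, 3: 3.5}   # thresholds by precursor charge, as stated in the text
DELTACN_MIN = 0.8
MIN_PEPTIDES_PER_PROTEIN = 2

def passes(psm):
    # psm: dict with 'charge', 'xcorr', 'deltacn' keys; charges above 3 fall back to the +3 threshold (assumption)
    return psm["xcorr"] > XCORR_MIN.get(psm["charge"], 3.5) and psm["deltacn"] > DELTACN_MIN

# hypothetical peptide-spectrum matches grouped by protein accession
psms = [
    {"protein": "P11111", "peptide": "LVDK",  "charge": 2, "xcorr": 3.1, "deltacn": 0.85},
    {"protein": "P11111", "peptide": "TGFEK", "charge": 2, "xcorr": 2.8, "deltacn": 0.90},
    {"protein": "P22222", "peptide": "AAGLR", "charge": 1, "xcorr": 1.9, "deltacn": 0.95},
    {"protein": "P22222", "peptide": "AAGLR", "charge": 2, "xcorr": 2.6, "deltacn": 0.82},
]

accepted = {}
for psm in filter(passes, psms):
    accepted.setdefault(psm["protein"], set()).add(psm["peptide"])

identified = [p for p, peps in accepted.items() if len(peps) >= MIN_PEPTIDES_PER_PROTEIN]
print("proteins kept:", identified)  # P22222 is dropped: only one distinct peptide sequence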
a search performed against a concatenated database containing both forward and reversed sequences allowed estimation of the false discovery level ( below 1% ) . the number of transmembrane helices in each protein was predicted using the tmhmm online program ( http://www.cbs.dtu.dk/services/tmhmm/ ) . for relative protein quantification , each protein ratio reported by proteinpilot was associated with a p - value ( evaluating the statistical difference between the observed ratio and unity ) and an ef ( error factor ) for each protein id.(25 ) the ef term indicates that the actual ratio lies between ( reported ratio ) / ( ef ) and ( reported ratio ) × ( ef ) at a 95% confidence . the following criteria were required to consider a change in protein level significant : the protein id had a p - value < 0.05 and a meaningful ef ( < 2 ) , at least two unique peptides were identified , and the fold difference was greater than 1.25 ( i.e. , the ratio was > 1.25 or < 0.80 ) as explained in results and discussion . proteins from the lbc - phagosome membrane extracts were separated by sds - page ( 4–12% acrylamide for most experiments , 12% acrylamide gel for lc3 detection ) and analyzed by western blot using appropriate antibodies . for each sample , 3 µg of protein was loaded . raw264.7 cells stably expressing gfp - fused lc3 ( kindly provided by patrick fitzgerald , st . jude children 's research hospital , memphis , tn ) were used to investigate lc3 trafficking after internalization of lipoglycan - coated latex beads . the cells were cultured in dmem ( gibco , containing high glucose , glutamine , and sodium pyruvate ) supplemented with nonessential amino acids ( gibco ) , 10% fetal bovine serum , and 100 units / ml penicillin / streptomycin . cells were seeded onto 8-well nunc lab - tek chambered coverglass microscopy slides ( fisher ) 20 h before images were acquired . after incubation with lipoglycan - coated beads for 2 h , cells were gently washed with pbs twice to remove free beads . the cells were then treated with lysotracker red dnd-99 ( molecular probes ) ( 100 nm ) for 15 min . the cells were rinsed and incubated in fresh warm media and imaged using a zeiss 200 m epifluorescence deconvolution microscope . all images were processed using the nearest neighbor deconvolution algorithm in the instrument software slidebook 4.2 ( intelligent imaging innovations ) . in a separate experiment , the cells were incubated with the autophagy inducer chloroquine ( 50 µm ) and a given bacterial lipoglycan at various concentrations . after 2 h , the cells were washed with pbs and placed in warm media for fluorescence imaging . previously , we reported a workflow for lbc phagosome isolation , membrane fractionation , and profiling of the phagosome membrane proteome by mass spectrometry.(20 ) here , we modified our platform to include the use of the itraq method for multiplexed protein quantitation . figure 2 illustrates the workflow of the entire experiment . first , we coated latex beads with a lipoglycan from m. tuberculosis ( manlam ) , m. smegmatis ( pilam ) , or e. coli ( lps ) . the efficiency of lipid coating was verified by specific staining of manlam - coated beads with fitc - conjugated concanavalin - a , a lectin that binds terminal mannose residues(27 ) ( supporting information figure 1 ) . figure 2 : experimental design for quantitative analysis of the phagosomal membrane proteome under different lipoglycan treatments .
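the significance criteria listed above can be summarized as a single test per quantified protein ; the following sketch ( illustrative only ) applies the p - value , error factor , peptide - count , and fold - change thresholds , and also shows the 95% confidence bounds implied by the error factor .

# illustrative filter implementing the significance criteria described above
def ratio_bounds(ratio, error_factor):
    """95% confidence bounds implied by the error factor (ef)."""
    return ratio / error_factor, ratio * error_factor

def is_significant(ratio, p_value, error_factor, n_unique_peptides):
    """True if a protein ratio meets all of the criteria used in this study."""
    if p_value >= 0.05 or error_factor >= 2 or n_unique_peptides < 2:
        return False
    return ratio > 1.25 or ratio < 0.80    # at least a 1.25-fold change in either direction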
lbc - phagosomes were isolated using the method initially reported by desjardins and colleagues(23 ) and modified in our previous work.(20 ) to enrich membrane - bound components , we fractionated the phagosome based on our previous protocol with one modification : we performed an additional wash of the membrane pellet using a low - salt buffer to remove a greater proportion of highly abundant luminal proteins . this adjustment allowed the identification of a greater number of membrane - associated proteins ( vide infra ) . a consequence of enriching the samples for membrane - associated proteins is the requirement of high detergent concentrations ( > 4% sds ) for their solubilization , which poses a problem for in - solution trypsin digestion due to the loss of enzyme activity . additionally , the in - gel digestion method , though compatible with high detergent concentrations , is not amenable to subsequent chemical labeling reactions as required by the itraq method . we attempted to solve these problems using reduced amounts of sds ( 0.2% ) in solution as well as a more detergent - resistant protease , lys - c , prior to trypsin digestion , but neither of these modifications gave satisfactory results . we therefore turned our attention to the recent method of tube - gel digestion , developed by lu and zhu , which has proven effective for proteomic analysis of membrane proteins . an advantage of this method is that detergent can be washed from the protein - impregnated gel prior to trypsinization . furthermore , the resulting peptide products can be released from the gel into aqueous solution prior to chemical labeling . we modified the tube - gel digestion method in three ways to better suit the analysis of phagosomal membrane extracts : ( 1 ) nh4hco3 was replaced with teab , a primary amine - free buffer compatible with itraq reagents ; ( 2 ) the residual protein solution that was excluded from the gel matrix was subjected to a second gel solidification process , so as to minimize sample loss ; and ( 3 ) we performed the cysteine reduction and alkylation steps prior to gel formation . using phagosomal membrane extracts as substrates , we compared the efficiency of this modified tube - gel digestion protocol to in - solution methods and analyzed the peptide products by lc - ms . the total number of proteins identified using the different digestion protocols in three separate experiments is shown in supporting information table 1 . the modified tube - gel protocol identified three times as many proteins as the best in - solution procedure ( using dual enzymes and 0.2% sds ) . a recent publication by han et al . also highlights the robustness of the tube - gel method for processing membrane proteins prior to itraq labeling.(29 ) the digested peptides derived from untreated , manlam- , pilam- , or lps - treated samples were labeled with itraq114 , itraq115 , itraq116 , or itraq117 , respectively . the pooled peptide mixture was subjected to 2d lc - ms / ms analysis both for protein identification and relative quantitation . biological replicates were analyzed either on a qstar pulsar i or elite mass spectrometer , and the combined data sets produced a total of 823 nonredundant proteins ( supporting information table 2a ) . notably , 540 of these proteins were not found in our earlier phagosome membrane proteomics study.(20 ) a summary of overlapping and unique protein ids from both data sets is presented in supporting information table 3 .
we attribute the superior performance of the new method to ( 1 ) the additional low - salt wash , which depleted more soluble proteins and enriched membrane proteins accordingly ( supporting information figure 2b ) , and ( 2 ) the reduced sample handling required by the tube - gel method , which minimized contamination with keratins . the itraq isotopic labels were used to calculate relative protein levels from lipoglycan - treated or untreated samples . supporting information table 2b provides a list of 658 quantitative ratios ( lipoglycan treated : untreated ) measured in two experimental replicates . the criteria for selecting ratios that indicate significant changes in protein abundances are described in experimental procedures . applying these filters allowed us to identify four protein subsets that were regulated by ( i ) manlam alone , ( ii ) manlam and pilam but not lps , ( iii ) manlam and lps but not pilam , or ( iv ) all three lipoglycans . because manlam has been implicated in m. tuberculosis virulence , we focused our studies on the phagosomal proteins that were specifically regulated by this lipoglycan . apart from the p - value filter , we applied a threshold of 1.25-fold difference because : ( 1 ) the average technical and biological variations for itraq - based quantitation have been reported to be 11% and 25% , respectively;(30 ) ( 2 ) precedents have established that in some cases changes in protein level as low as 1.2-fold can be biologically relevant ; and ( 3 ) after applying these criteria , we found 42 proteins specifically regulated by manlam ( table 1 ) , with the remaining unchanged proteins covering 94% of all the proteins quantified in this study . this value is higher than the percent of unchanged proteins found in an analysis of biological replicates using itraq.(30 ) the protein ratios represent the relative abundance of a specific protein in the phagosome bearing latex beads coated with manlam , pilam , or lps vs uncoated beads . macrophage cells were treated with latex beads coated with three different bacterial lipoglycans or with uncoated beads , and proteins were extracted from phagosomal membranes for quantitative comparison using itraq isotopic labels . the protein ratios indicate relative changes of protein levels in the phagosomes under a specific lipoglycan treatment in comparison with no treatment . the forty - two proteins found to be significantly changed only by manlam treatment are summarized in this table . a ratio with an asterisk is the mean of two replicate measurements . twenty - four of the 42 proteins found to be regulated by manlam were down - regulated : lysosome - associated membrane protein 1 ( lamp1 ) , the late endosome membrane marker cd63 , and the late endosome - specific small gtpase rab7 were all in this group , a finding that agrees with previous microscopy studies showing diminished recruitment of these proteins to phagosomes containing manlam - coated beads(13 ) or live m. tuberculosis.(3 ) in addition , lysosomal enzymes such as the aspartic protease cathepsin d ( catd ) and subunits of the lysosomal vacuolar atpase ( v - atpase ) were down - regulated by 30–40% ( table 1 ) . loss of v - atpase , a proton transporter , could explain the reduced lysotracker staining of phagosomes containing manlam - coated beads reported earlier.(15 ) interestingly , we noticed that the phagosomes containing manlam - coated beads partially share the features of those harboring live m. tuberculosis .
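with a per - treatment significance call in hand , the four subsets described above follow from simple set logic ; the sketch below ( our own illustration ) groups quantified proteins by the treatments in which they changed significantly relative to untreated cells .

# illustrative grouping of quantified proteins into the four regulation subsets described above
def classify(significant_treatments):
    """significant_treatments: dict mapping protein id -> set of treatments
    ('manlam', 'pilam', 'lps') in which the protein changed significantly vs. untreated."""
    groups = {"manlam_only": set(), "manlam_and_pilam": set(), "manlam_and_lps": set(), "all_three": set()}
    for protein, hits in significant_treatments.items():
        if hits == {"manlam"}:
            groups["manlam_only"].add(protein)
        elif hits == {"manlam", "pilam"}:
            groups["manlam_and_pilam"].add(protein)
        elif hits == {"manlam", "lps"}:
            groups["manlam_and_lps"].add(protein)
        elif hits == {"manlam", "pilam", "lps"}:
            groups["all_three"].add(protein)
    return groups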
russell 's group reported earlier that mycobacterium phagosomes exclude v - atpase to restrict vacuole acidification(33 ) as well as reduce acquisition of the mature form of catd.(34 ) in western blot analysis , we also observed reduced production of the mature catd in the manlam - containing phagosome ( figure 3a ) , though to a lesser extent than that observed in the live mycobacterium phagosome.(34 ) in summary , our proteomic data are consistent with classical cell - based or biochemical assays and suggest that manlam contributes to phagosome maturation arrest . ( a ) immunoblot of phagosomal membrane extracts from cells treated with different lipoglycans using antibodies specific for endosomal markers and a lysosomal protease . ( b ) each ratio represents the change of a particular protein in the presence of manlam- , pilam- , or lps - coated beads relative to the uncoated beads . ( c ) immunoblot of syntaxin 6 in phagosome membrane extracts under different lipoglycan treatments . note : this protein was not identified in the proteomic analysis . among the down - regulated species , we were particularly interested in a protein of unknown function containing a zinc finger fyve domain ( ipi00554920.5 ) . this domain is known to bind phosphatidylinositol 3-phosphate ( pi3p),(35 ) a regulatory phospholipid that mediates membrane trafficking , endosomal protein sorting , and multisubunit enzyme assembly via multiple effectors . importantly , pi3p plays an essential role in phagosomal acquisition of lysosomal constituents , and it is excluded from phagosomes containing live mycobacteria.(4 ) early endosome antigen 1 ( eea1 ) , a protein that is known to be required for phagolysosome biogenesis and that also contains a fyve domain , was also down - regulated by manlam . by analogy , the uncharacterized protein may be a new pi3p - binding effector involved in phagosome maturation and is possibly modulated by the m. tuberculosis lipoglycan . eighteen proteins increased in abundance when cells were treated with manlam - coated beads ( table 1 ) . the small gtpase rab14 is critical for stimulating vesicle fusion specifically between phagosomes and early endosomes rather than late endosomes.(40 ) overexpression of this regulatory protein is known to prevent phagosomes containing dead mycobacteria from maturing into bactericidal phagolysosomes.(40 ) manlam - induced upregulation of rab14 in phagosomes may promote phagosome - early endosome fusion , thereby stalling the maturation process . another protein in this group , vacuolar protein sorting - associated protein 41 ( vam2p homologue ) , is also known to concentrate in early endosomes.(41 ) the protein interacts with multiple snare family members , which are important vesicle fusion regulators.(41 ) the observation that manlam up - regulates at least two known early endosomal proteins suggests that its host phagosome resembles early endosomes , a proposal that has been previously put forth for m. tuberculosis - containing phagosomes . we performed immunoblotting to validate quantitative changes of certain phagosomal proteins in response to manlam treatment . lamp1 , catd , and eea1 were down - regulated in the presence of manlam - coated beads compared to samples treated with pilam - coated , lps - coated , or uncoated beads ( figure 3a ) . the observed down - regulation of eea1 concurs with deretic and co - workers ' report that eea1 is depleted from phagosomes containing live m.
tuberculosis.(44 ) by contrast , transferrin receptor ( tfr ) , an early endosome marker , was found to be up - regulated in phagosomes containing manlam - coated beads . this coincides with a previous report by clemens and horwitz showing that only the phagosomes harboring live mycobacterium acquire transferrin added exogenously.(43 ) these trends revealed by immunoblotting matched those observed by quantitative mass spectrometry ( figure 3b ) . the reduced levels of eea1 in phagosomes containing manlam - coated beads may have broad consequences for vesicle fusion and membrane trafficking . eea1 is recruited to organelles via its association with pi3p on organelle membranes(45 ) and also interacts with syntaxin 6 , a snare protein that participates in the vesicular traffic between the tgn and the endocytic system , including phagosomes . inhibition of eea1 's function causes reduced accumulation of syntaxin 6 in phagosomes.(12 ) likewise , we considered the possibility that syntaxin 6 is depleted from phagosomes bearing manlam - coated beads . indeed , western blot analysis showed that syntaxin 6 was down - regulated in the presence of manlam - coated beads ( figure 3c ) , while four other syntaxins ( stx3 , 7 , 8 , 12 ) and two syntaxin - binding proteins ( stxbp2 , 5 ) identified by our proteomic analysis did not exhibit reduced levels upon manlam treatment ( supporting information table 2b ) . loss of syntaxin 6 from phagosomal membranes may cause a global reduction in the delivery of lysosomal components , including v - atpase and catd , from the tgn to phagosomes . taken together , the proteomic and biochemical data are consistent with the proposal that manlam undermines trafficking from the tgn to the phagosome , which is reported to depend on the production of pi3p by the kinase vps34.(48 ) interestingly , we also observed that lamp1 was down - regulated by manlam treatment , yet this protein is thought to be delivered to phagosomes in a pi3p - independent manner . therefore , manlam may affect multiple trafficking pathways that contribute to maturation of the phagosome and its proteome . a connection was recently revealed between the pathways of phagosome maturation and autophagy activation.(20 ) autophagy serves as a means for the removal of intracellular bacteria and viruses , apart from its primary function of maintaining cytoplasmic homeostasis . m. tuberculosis survival in infected macrophages is suppressed by artificial induction of the autophagy response.(53 ) we previously observed that the autophagosomal marker lc3 is enriched on lbc phagosomal membranes upon autophagy activation.(20 ) lc3 is not only a widely used marker but also an essential component of the autophagy machinery.(54 ) upon the induction of autophagy , the 18-kda cytosolic precursor lc3-i is cleaved at its c - terminus and conjugated to phosphatidylethanolamine , generating a 16-kda form termed lc3-ii.(55 ) lipid - modified lc3-ii integrates into the membranes of autophagosomes and undergoes either recycling or degradation when autophagosomes fuse with lysosomes.(56 ) given the importance of the autophagy pathway in the macrophage response to m. tuberculosis and the role of lc3 in this process , we sought to determine whether manlam affects the cellular distribution of lc3 .
our proteomic data suggested that lc3 levels were reduced by about 30% in phagosomes containing manlam - coated beads ( figure 4a ) , an observation that was confirmed by western blot analysis of phagosomal membrane extracts prepared in the same manner ( figure 4b ) . by contrast , lc3 was up - regulated in phagosomes containing lps - coated beads and essentially unchanged in phagosomes containing pilam - coated beads ( figure 4 ) . ( a ) proteomic quantitation and ( b ) immunoblot of phagosomal membrane extracts using an anti - lc3 antibody . to determine the effects of manlam treatment on the subcellular localization of lc3 , we monitored a gfp - lc3 fusion protein in raw264.7 murine macrophages upon treatment with various lipid - coated and uncoated beads ( figure 5a ) . the average number of microbeads bearing bright gfp fluorescence per cell provided a metric of the relative amount of lc3 in the phagosomes ( figure 5b ) . basal levels of gfp - lc3 were observed in phagosomes containing uncoated beads , reflecting the low endogenous autophagy activity in resting macrophages . additionally , these phagosomes stained with lysotracker , indicating successful fusion with lysosomes to become acidic vacuoles . phagosomes bearing pilam - coated beads showed similar levels of gfp - lc3 fluorescence and lysotracker staining as in the presence of uncoated beads . in contrast , phagosomes containing manlam - coated beads exhibited weaker gfp - lc3 fluorescence than observed with uncoated beads , as well as weaker lysotracker staining . lps - coated beads affected lc3 translocation to phagosomes in an opposite manner compared to manlam - coated beads . in response to lps - coated beads , gfp - lc3 levels were increased in phagosomes , consistent with previous observations that lps activates the autophagy pathway.(57 ) manlam reduces gfp - lc3 fluorescence in lbc phagosomes . ( a ) translocation of gfp - lc3 to lbc phagosomes under different lipoglycan treatments . raw cells stably expressing gfp - lc3 were allowed to internalize latex beads ( 3 µm ) coated with a lipoglycan for 2 h. control cells were treated with lipid - free beads . ( b ) quantitation of gfp - lc3 colocalization with phagosomes containing beads coated with different lipoglycans . a total of 200–250 cells were sampled in each experiment to determine the average number of lbc phagosomes bearing gfp fluorescence in each cell . next , we investigated the effects of the bacterial lipids on lc3 distribution in the presence of chloroquine , an anti - inflammatory drug that causes accumulation of autophagosomes by preventing their fusion with lysosomes.(58 ) the drug has been previously employed to facilitate measurements of autophagic flux in vivo.(58 ) raw264.7 cells stably expressing gfp - lc3 were treated with chloroquine in the absence or presence of a given lipoglycan suspended in medium . trypan blue staining was performed to confirm that the chemical and lipid treatments did not affect cell viability ( supporting information figure 3 ) . cells treated with chloroquine alone showed punctate fluorescence derived from gfp - lc3-ii ( figure 6a ) , an indicator of elevated autophagy activity . such punctate fluorescence was barely observed in untreated cells due to lysosomal degradation of autophagosomal lc3 in the absence of chloroquine.(58 ) cells treated with pilam and chloroquine demonstrated a distribution of gfp - lc3-ii that was similar to that observed with chloroquine treatment alone .
however , gfp - lc3-ii fluorescence was significantly more diffuse in cells that were treated with manlam ( figure 6a ) . quantitation of the average number of gfp - lc3-ii puncta ( > 1 µm ) per cell in the presence of the various bacterial lipids is shown in figure 6b . similar data were obtained from cells treated with three different doses of manlam , suggesting that the minimal effective concentration could be even lower than the 2 µg / ml used in our assay . collectively , these data indicate that manlam , but not pilam or lps , suppresses the accumulation of autophagosomes induced by chloroquine . thus , manlam appears to interfere with the endogenous formation of autophagic vacuoles , an early stage of autophagy activation . ( a ) macrophages expressing gfp - lc3 were incubated with chloroquine ( 50 µm ) in the presence or absence of either manlam or pilam for 2 h. arrows , representative lc3 punctate stains . ( b ) quantitation of lc3 punctate structures ( > 1 µm ) in cells incubated with chloroquine in the absence or presence of a specific lipoglycan . using a new platform for quantitative comparison of the membrane proteomes of macrophage phagosomes , we identified 823 proteins , of which 42 were significantly regulated by manlam from m. tuberculosis but not by pilam derived from nonpathogenic mycobacteria or by e. coli lps . several manlam - regulated proteins are known to be involved in vesicle trafficking pathways and phagosome maturation . others have unknown functions ( e.g. , the fyve domain - containing protein ) , but their regulation by manlam suggests a role in membrane trafficking events important for endosomal fusion and interaction with phagosomes . we also found that manlam suppresses the accumulation of lc3-ii in both lbc phagosomes and autophagosomes , whereas pilam has no such effects . given the established importance of phagocytosis and autophagy in the macrophage response to m. tuberculosis infection , it is possible that manlam 's functions include interference in these critical processes of innate immunity . earlier work has proposed vps34 , the kinase responsible for pi3p production , as a possible target,(44 ) and indeed phagosomes containing m. tuberculosis show retarded and reduced acquisition of this critical traffic - regulating lipid.(45 ) delineating the downstream effectors of manlam is likely to reveal new players in phagosome maturation , autophagy activation , and other pathways . as well , understanding the mechanisms by which manlam undermines the macrophage response may reveal new therapeutic avenues .
the mycobacterial cell wall component lipoarabinomannan ( lam ) has been described as one of the key virulence factors of mycobacterium tuberculosis . modification of the terminal arabinan residues of this lipoglycan with mannose caps in m. tuberculosis or with phosphoinositol caps in mycobacterium smegmatis results in distinct host immune responses . given that m. tuberculosis typically persists in the phagosomal vacuole after being phagocytosed by macrophages , we performed a proteomic analysis of that organelle after treatment of macrophages with lams purified from the two mycobacterial species . the quantitative changes in phagosomal proteins suggested a distinct role for mannose - capped lam in modulating protein trafficking pathways that contribute to the arrest of phagosome maturation . guided by our proteomic data , we performed further experiments to show that only the lam from m. tuberculosis inhibits accumulation of autophagic vacuoles in the macrophage , suggesting a new function for this virulence - associated lipid .
Introduction Experimental Procedures Results and Discussion Conclusions
PMC2910403
stroke patients should be medically treated in specially designed facilities , so - called stroke units , because of the high efficacy of the care provided there , as recommended by the german stroke society ( dsg ) . in germany , a total of 195 stroke units has been set up so far , enough to provide care for approximately half of all stroke patients in germany . an increase in the number of stroke units to 250 has been planned in order to bring this effective care to almost 85% of stroke patients in the long term . however , acute care will continue to be provided in hospitals without stroke units in the future , especially in rural areas , where economic and staff limitations prevent the establishment of specialized neurological stroke units . the question then arises whether the use of modern teleneuromedicine methods could contribute to reducing the deficit in stroke patient care , particularly in less populated regions . in these areas , which lack neurological expertise , acute medical treatment of stroke patients usually occurs in the internal medicine department [ 5 , 6 ] . within the context of teleneuroconsultation , discussion and deliberation between two or more doctors regarding the best diagnostic and therapeutic approach for the acute stroke patient can take place . in teleneuromedical settings , the stroke expert is connected by video and sound transmission , observing the examination of the patient , which is carried out by the doctor at the regional hospital . in addition , the radiological image data ( ct or mri ) collected in the regional hospital are electronically transmitted to a server platform that can be accessed by the stroke expert . on the basis of this information as well as the clinical impressions that the stroke expert receives during the video conference , the remote diagnosis and related therapeutic instructions or recommendations are determined and communicated to the doctor with a data - safe consultation sheet ( see figure 1 ) . the study considers to what extent hospitals in germany have already fulfilled the requirements for participation in a telehealth care network and to what extent these methods are already in use in germany . finally , the findings of the investigation are discussed in relation to existing teleneuromedical networks and questions are answered as to what demands these special networks can meet in the future . the study included a multicentered , completely standardized survey of physicians in hospitals by means of a computerized online questionnaire . those selected for the investigation received an e - mail letter inviting them to participate . it also included a link to an internet site where participants could fill out the questionnaire . in the analysis of the data , univariate ( bar graphs and pie charts ) and bivariate ( crosstabulation ) methods were used , and the study was carried out according to gcp guidelines ( ek 255102007 ) . of a total of 2,104 hospitals in germany , university hospitals , specialized hospitals , and facilities with stroke units in a neurology department were excluded ; this selection reduced the total by 845 to 1,259 hospitals , constituting the basic population of the survey . however , not all of them could be reached online , since in 494 cases the hospital e - mail address was not available . therefore , a total of 765 hospitals were surveyed ( 61% ) and 346 ( 45% ) hospitals could be directly reached with a general address in the hospital ( i.e. , info@ ... ) .
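the sampling figures above can be verified with a few lines of arithmetic ; the sketch below ( illustrative only ) reproduces the reported survey population and the quoted percentages .

# illustrative check of the survey-population arithmetic reported above
total_hospitals = 2104
excluded = 845                       # university/specialized hospitals and facilities with stroke units
basic_population = total_hospitals - excluded           # 1259 hospitals
no_email = 494
surveyed = basic_population - no_email                  # 765 hospitals actually surveyed
print(surveyed, round(100 * surveyed / basic_population))    # 765 and ~61 percent
general_address = 346
print(round(100 * general_address / surveyed))               # ~45 percent reached via a general address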
out of the 765 hospitals that fulfilled the inclusion criteria , 134 hospitals entered the website and completed the survey , amounting to a return rate of 18% . out of the 134 participating hospitals , 15 had , in addition to an internal medicine department , also a newly opened neurology department . these facilities also did not meet the inclusion criteria for the empirical analysis and were not included in the final analysis . the following descriptions and evaluations thus refer to 119 acute hospitals without any neurology department . as shown in figure 2 , predominantly hospitals with less than 200 beds ( 48% ) or 200–400 beds ( 45% ) in small towns ( 5,000–100,000 inhabitants ) took part in the survey ( 80% ) ; 15% of the hospitals were located in cities ( > 100,000 inhabitants ) and 5% in rural regions ( < 5,000 inhabitants ) . regarding the current status of teleneuromedicine in hospitals without neurology departments , 36% of the responding hospitals are already connected to a teleneuromedical network and 14% of the participants are in negotiations with a stroke unit , whereas 30% plan to become active in telemedicine stroke treatment in the future . for only 11% of the participating hospitals , telemedicine does not seem to be an option , while 8% did not answer this question . the availability of a hospital network with high transmission speed was also queried . in addition to these specific technical requirements , certain organizational conditions must be fulfilled in order for a hospital to take part in a teleneuromedical program . according to recommendations by the german neurological society ( dgn ) , the time period from the arrival of the patient in the emergency room of the cooperating hospital to the beginning of the diagnostic phase should be no longer than 25 minutes ( time to ct ) . therefore the distance between the emergency room and the location of the imaging center within the cooperating hospital is an important factor . besides fast and easy access it is also required that the ct can be performed at any time , 24 hours / 7 days . the arithmetic mean of the responses for time to ct was 14.42 minutes , with a median of 12.50 minutes and a standard deviation of 9.95 minutes . 17 participants ( 16% ) could not comply with the required time period ( see figure 3 ) . in 114 of the participating hospitals ( 96% ) , the performance of a ct is possible ; only 4% can not meet this requirement . 82% ( 94/119 ) reported that their ct meets the requirement of around - the - clock availability , but different possibilities for remote data transmission are used ( see table 1 ) . networking connectivity was possible in 85% ( 101/119 ) of the hospitals , with different transmission speeds : < 100 mb / s in 3% , 100 mb / s in 28% , 1 gb / s in 7% , > 1 gb / s in 8% , and the answer ' not sure but > 100 mb / s ' in 39% . in case of complications ( e.g. , malignant brain edema or intracerebral bleeding ) associated with thrombolytic therapy , the transfer of a stroke patient to a neurosurgery department may be necessary . moreover , for bridging concepts in individual cases it is expedient to transfer the patient to a comprehensive stroke unit center for specialized neuroradiology interventions . concerning early rehabilitation measures according to the code ' other complex neurological treatments of acute stroke ' ( g - drg ops 8 - 98b ) , 117 participants ( 98% ) responded that they have the capacity to carry out early physiotherapy . speech and language therapy is available to only 71% of the hospitals , and occupational therapy to 59% .
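the door - to - ct summary statistics quoted above ( mean 14.42 min , median 12.50 min , standard deviation 9.95 min ) are plain descriptive statistics ; the sketch below shows the corresponding computation on a list of survey responses ( the input list is hypothetical , not the original survey data ) .

# illustrative computation of the descriptive statistics reported for "time to ct"
import statistics

def time_to_ct_summary(minutes, limit=25):
    """minutes: list of reported door-to-ct times from the survey responses (in minutes)."""
    over_limit = [m for m in minutes if m > limit]
    return {
        "mean": statistics.mean(minutes),
        "median": statistics.median(minutes),
        "stdev": statistics.stdev(minutes),
        "n_over_limit": len(over_limit),
        "share_over_limit": len(over_limit) / len(minutes),
    }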
the most important reasons for participating in a teleneuromedical network are the improvement in treatment quality ( 82% ) and the ability to avoid unnecessary patient transport ( 76% ) . easier and faster access to expertise ( 72% ) and the related improvement in establishing a final diagnosis ( 66% ) constitute further medical arguments . furthermore , participants had to assess whether the use of teleneuromedicine increased the competitiveness of their hospital , to which 67% responded in the affirmative . table 2 shows that mainly small hospitals with < 200 beds voted for this statement , while hospitals with > 200 beds do not show a clear tendency . the ability to carry out systemic ( i.v . ) thrombolysis within their own hospital was cited by 59% of participants , and the improved professional training of doctors within the hospital was also seen as a possible motive . the reduction of length of hospital stay ( los ) in stroke patients was not considered a main incentive by the cooperating hospitals ( see figure 4 ) . the most significant problem area for the hospitals seems to be the financing of telemedicine with regard to the acquisition costs of the technical equipment ; this aspect represents the modal value of the answers ( 43% ) . in addition to this , the compensation for the stroke - unit center with the specialist 's consultation service ( 31% ) as well as the legal aspects of teleneuromedicine ( 27% ) are also seen as problematic . however , data security concerns ( 18% ) , potential conflicts between different specialties over competency boundaries ( 15% ) , as well as operating costs ( 12% ) are deemed much less significant . the least problematic aspect according to the hospitals is the acceptance of teleneuromedicine among doctors ( 10% ) . 90% of the participating hospitals accept teleneuromedicine as a good supplement to well - established stroke care ( see figure 5 ) . with stroke becoming increasingly important and neurological stroke units lacking in rural areas , teleneuromedicine emerges as a promising technique in stroke care . the present work focuses on quality improvement with the help of a teleneuromedical network system . the advantages of teleneuromedicine in acute stroke include faster and more accurate diagnoses in participating hospitals , since they have easier access to neurological expertise . to reach this goal it is , however , also necessary that such a health care configuration be accepted by participating doctors and that these doctors in turn be accurately advised . it is possible that doctors in cooperating hospitals may not always be prepared to defer to the competencies of colleagues in the treatment of their stroke patients . a negative attitude toward the use of teleneuromedicine might be related to this aspect . however , most participants showed a ready acceptance of teleneuromedicine and ranked possible conflicts arising from competency boundaries very low . interestingly , improved competitiveness was cited as a predominant argument for teleneuromedicine by hospitals with less than 200 beds . for example , changes in the hospital system could also affect a facility 's catchment area . if a neighboring hospital expands , its attractiveness increases and consequently its catchment area as well , to the detriment of other hospitals , which must adjust to a reduced load .
thus , adjustments to the range of services and capacity become necessary , the more so as , with reduced patient numbers , high fixed costs can no longer be covered . the result will be a decline in the attractiveness of the facility , bringing with it a further reduction of the catchment area . this process is already ongoing in germany at present and will continue over the next few years . such adjustment mechanisms , or a focusing on just a few care providers , may even be a political aspiration . conversely , when a hospital , especially a small one , connects to a teleneuromedicine network , thereby improving stroke care and treatment results , its attractiveness among regional care providers will be increased . this hospital would thus have a good argument for justifying its health care mandate and for securing its existence . on the other hand , the competitive advantage can turn into direct competition . assume that two neighboring general hospitals , a and b , with very similar services and equipment , each care for stroke patients in their respective catchment areas . hospital a connects to a teleneuromedicine network and can then possibly attract additional stroke patients from the catchment area of hospital b , as a result of the advantages of teleneuromedicine . the precondition is that the rescue service is informed about the teleneuromedical capabilities of hospital a and delivers stroke patients primarily to this facility . however , at the same time further direct treatment costs arise from the expansion of diagnostics and treatments for the additional patients . furthermore , one might ask whether hospital a has sufficient capacity for the increased patient numbers ; although if , as described , the length of in - hospital stay is reduced by the use of teleneuromedicine , it may be possible to care for more patients without increasing the hospital 's capacity [ 7 , 12 ] . it must be noted , however , that the reduction of length of stay was largely not confirmed by the data provided by participating hospitals in this study . as was to be expected , this aspect is not very prominent in hospitals already active in teleneuromedicine ( 14% ) . these hospitals are mainly located in bavaria , where they are often financially supported by the state or health insurances ( e.g. , tempis or steno ) . but facilities which are in negotiations with network partners or plan to become active in teleneuromedicine might see the financing as a substantial , negotiable factor between the involved parties . especially in those facilities for which a connection to a teleneuromedicine network is out of the question , the problem of financing is quite prominent . considering the necessary investments ( hardware , software , it support , broadband , etc . ) , a lack of sufficient financial resources can be a significant reason for the failure of teleneuromedicine in a cooperating hospital . this problem can be discussed with health insurers and the state , since both can profit from the use of teleneuromedicine . as a result of quality improvement in stroke care throughout the state , this sort of health care configuration in less developed regions can reduce the number of those needing long - term care and the direct and indirect costs associated with it . besides this , the problem is that these effects are not immediately evident and are only noticed by the cooperating hospitals in the long term . exactly which network operating costs or savings will result is not as easily enumerated as the basic acquisition costs .
the consequence of this lack of internalization and the associated lack of incentive could be a reason why the use of teleneuromedicine is still partly out of the question for a given facility . if a list of advantages and problems is drawn up , however , the advantages prevail . it should therefore be mentioned that the opportunities provided by teleneuromedicine are ranked higher by the hospitals taking part in the survey than the risks connected with it . a generalization of this statement , however , can not be made offhand . in the discussion of the survey results , it must be kept in mind that only a certain percentage of the total population of all acute hospitals in germany responded and that those hospitals are particularly interested in teleneuromedicine . hospitals for which the use of telemedicine in stroke care is out of the question tend , perhaps , not to answer . moreover , aspects related to the implementation of thrombolysis therapy are important . a large portion of the facilities can in principle envision performing thrombolysis with the help of a teleneuromedically connected stroke expert . doctors in cooperating hospitals are prepared to trust their colleagues from the connected stroke unit and to rely on their evaluation . while the doctors in the cooperating hospitals bear the entire responsibility for thrombolysis treatment , for most of them the medical advantages and the associated benefits to the stroke patients are certainly in the foreground , so that , when weighing the options , the arguments in favor of a thrombolysis treatment or bridging concept win out . the economic aspect of reducing unnecessary transfers , which are often very time consuming , can be largely confirmed by participating hospitals . if a stroke patient does not reach the cooperating hospital within 4.5 hours after onset of symptoms , intravenous thrombolysis is no longer justified . therefore , time should be used as effectively as possible , since up to the point of administration of the medication , appropriate examinations as well as the teleconsultation must be completed , and the success rate of thrombolysis decreases with each passing minute . the brevity of this time period and especially the lengthy prehospital time periods are the most significant reasons for low thrombolysis rates in cooperating hospitals . given the logical and literature - supported supposition that time to recanalization is crucial , rapid and safe recanalization is a primary goal . the initial teleconsultation with ct and ct angiographic findings can quickly determine whether the patient is a suitable candidate for an interventional neuroradiological procedure , although up to now the ideal method by which a rapid and safe recanalization is achieved is not clear . in addition to limited recanalization rates , current ia therapies , particularly ia thrombolytics and mechanical devices , can take hours to achieve recanalization . in this case a so - called bridging procedure with initial intravenous thrombolysis can be begun first , and the patient can then be transferred to a specialized interventional neuroradiology center while undergoing treatment , without time delay [ 9 , 17–19 ] . in acute basilar artery occlusion , m1 occlusion of the middle cerebral artery , and occlusion of the internal carotid artery , intra - arterial thrombolysis and/or endovascular mechanical recanalization may result in higher recanalization rates than intravenous thrombolysis alone .
bridging iv / ia thrombolytic therapy for such acute stroke patients appears to be safe and yields higher recanalization and improved survival rates , as well as an overall improved chance for a better outcome . however , many patients are admitted to community hospitals , where endovascular therapy is usually not readily available . in this setting , a teleneuromedically supported selection of those stroke patients who will benefit from initiation of thrombolysis within a community hospital with simultaneous referral to a comprehensive stroke center is mandatory , thus leading to a better functional outcome of stroke patients . however , randomized controlled trials will have to confirm the expected benefit of bridging iv / ia thrombolysis with subsequent on - demand mechanical recanalization on clinical outcome [ 20–22 ] . the recommendation of the german stroke society , that stroke patients undergo a ct scan within 25 minutes at most , was fulfilled , with an average of 14.42 minutes across all responses . even with a standard deviation of approximately 10 minutes , it can be concluded that the internal structures and procedures within most of the participating hospitals seem to be efficient enough to guarantee quick access to a ct scan . in addition , the response patterns show that the technical requirements do not stand in the way of becoming active in teleneuromedical care . the question arises whether , in those hospitals which do not meet this requirement , inefficient internal organization could be the culprit . another reason might be that these hospitals share the use of a ct with other facilities , for example , an external radiology service , and therefore delays may occur . because of limited availability and the resulting longer time to mri , it can be deduced that in hospitals having both imaging methods at their disposal , ct will often be primarily used for initial diagnosis in the treatment of stroke patients . both methods may be used later in the course of the treatment to strengthen the validity of the diagnostics [ 24 , 25 ] . failure to meet the 4.5 hour deadline , after which intravenous thrombolysis can not be initiated , often has prehospital causes : for some patients , the symptoms of stroke are not known , at least not sufficiently , so that the ambulance is often called too late . for this reason there is a continuous necessity to educate the population about the importance of seeking stroke treatment as quickly as possible . it remains questionable to what extent a cooperating hospital can contribute to this , particularly as only one third of the facilities consider this a way to raise the thrombolysis rate . the cooperating hospitals consider the medical exclusion criteria much more significant for the low thrombolysis rate [ 4 , 6 , 26 ] . telemedicine technologies have been shown to be useful and effective in the remote neurological evaluation and treatment of acute stroke patients and are now used at several hospitals in europe and the united states as an option for stroke patients to have access to cerebrovascular expertise . the effect of this concept was evaluated in the tempis project , where five regional hospitals with a telestroke concept were matched with five regional hospitals without a telenetworking system . during two years , stroke patients were monitored and the three - month outcome was studied . in a multivariate analysis , the stroke treatment in the tempis project showed a significantly better result compared with the nonnetworking hospitals , and the thrombolysis rate was ten times higher .
many physicians , especially nonneurologists , remain hesitant to use rt - pa in acute stroke patients , suggesting that additional training methods and tools are desperately needed in many communities . since the ninds - sponsored trial of rt - pa in acute stroke was conducted at a relatively small number of experienced stroke centers , one commonly expressed concern is that similar results might not be obtained when rt - pa is used in a variety of clinical settings . after publication of the ninds trial results , more than a dozen reports of experience with rt - pa in open - label , routine clinical use have been published . in 2639 treated patients , the symptomatic intracerebral hemorrhage rate was 5.2% ( 95% confidence interval 4.3–6.0 ) , slightly lower than the 6.4% rate of the ninds trial ( national institute of neurological disorders and stroke ) . the mean total death rate ( 13.4% ) and the proportion of subjects achieving a very favorable outcome ( 37.1% ) were comparable to the ninds trial results . as a result , community hospitals will increasingly face medicolegal risks both for treating and for not treating patients with newly available agents . with the backup of stroke experts in a professional telenetworking system , patients and family members can be assured that they speak with the expert online in the emergency unit and that all treatment options are standardized and discussed . this will take a huge burden off the less stroke - experienced doctors in the local hospitals in rural areas , since a major problem confronting all community hospital stroke programs is one that has been called the frequency factor . because only a small number of stroke patients will qualify for acute interventions such as tpa thrombolysis , a stroke team could have difficulties in operating effectively . a study investigating the routine use of systemic tpa thrombolysis reports an increase in in - hospital mortality after administration of tpa in hospitals with < 5 thrombolytic therapies within 1 year . these findings underline the need to have an experienced stroke expert involved in the management of an acute stroke patient , since urgent therapeutic decisions in emergency stroke care have to be made on the basis of brain imaging and a structured clinical examination . with a good knowledge of functional and vascular cerebral anatomy , the stroke expert can quickly determine the neuroanatomic localization of the brain lesion and can guide special treatment options . since such experience and resources are available mainly in stroke centers of teaching hospitals , a networking system can allow each hospital to have access to the experience of all programs in the network . for example , the neuronet constitutes an implementation of teleneuromedicine in clinical practice and represents an intraregional teleneuromedicine network of hospitals , wherein only hospitals belonging to the helios hospital group take part . this project has been in operation since 2006 [ 9 , 31 ] , counting among its hospitals five certified stroke unit centers , which serve as providers of teleneuromedical expertise including neurologists , neurosurgeons , and neuroradiologists . these stroke unit centers rotate their on - call services weekly , whereas the cooperating hospitals make up the regional facilities , or the consumers of teleneuromedical expertise [ 32 , 33 ] .
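the pooled hemorrhage rate quoted above can be checked with a simple normal - approximation interval ; the sketch below is illustrative only ( the published interval of 4.3–6.0% presumably reflects the exact event counts and method used by the original authors ) and reproduces roughly the same bounds .

# illustrative 95% confidence interval for the pooled symptomatic hemorrhage rate quoted above
import math

def proportion_ci(p, n, z=1.96):
    """normal-approximation (wald) interval for a proportion p observed among n patients."""
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

low, high = proportion_ci(0.052, 2639)
print(round(100 * low, 1), round(100 * high, 1))   # approximately 4.4 and 6.0 percent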
in neuronet ( http://www.helios-neuronet.de ) the implementation costs were covered by the hospital group ; every cooperating hospital has a mobile workstation ( vimed teledoc 2 ) with a video conferencing system , and the consult is processed by a central server that can send and receive digital radiological images in dicom format . a great advantage is that the mobility of this workstation allows for its use regardless of the location of the consulting physician . since every minute lost when dealing with acute stroke implies a loss of viable brain tissue , the network system strives to carry out thrombolysis on site or , with a bridging concept , to start it on site followed by transport to a comprehensive stroke unit center after the diagnosis and initiation of therapy . part of the whole concept is also supervision and quality management with standard operating procedures ( sop ) to further develop the network . adjusted to suit available regional configurations , these networks are intended to lead to a uniform optimization of care for stroke patients . therefore , training of stroke doctors , stroke nurses , and therapists is continued , and training equipment like the stroke lysis box ( ipc : a61f-17/00 , patent number 200 09 172.7 ) and a nihss training dvd ( http://www.physiothek.com ) were introduced . within a systematic peer review process , quality data from all hospitals are continuously evaluated , and routine data with mortality rates are observed over the years to monitor the improvements in quality of stroke care . furthermore , the society of hospitals in saxony ( kgs ) , together with the regional association of health care providers ( lvsk ) and the saxon ministry of social welfare ( sms ) , have after almost two years of negotiations with all involved parties agreed on a financial framework for improving telestroke care uniformly throughout saxony , especially in rural areas , through the establishment of teleneuromedicine networks . the comprehensive stroke unit centers are the university hospital dresden for eastern saxony ( sos - net , http://www.neuro.med.tu-dresden.de/sos-net/ ) and , for southwestern saxony , a trio of comprehensive stroke unit centers with aue , chemnitz , and zwickau ( tns - net ) . the criteria for the participation of hospitals are clearly defined , and a team from the stroke unit centers visits the potential cooperating hospital in order to check the quality criteria . it also instructs doctors , nurses , and therapists , particularly in operating the equipment and evaluating stroke patients with certain stroke scales . in these networks advanced and professional training is ongoing , and especially nursing staff is educated in a stroke nurse training program lasting six months ( http://www.dsg-info.de/pdf/pflegefortbildung_helios_akademie.pdf ) . the german stroke society ( dsg ) and the german neurological society ( dgn ) recently published standards for telestroke services that will now lead to a general certification of these networks . but this safe and effective telestroke and tele - thrombolysis service with experienced stroke experts for stroke management requires a 24 hour on - demand teleconsultation service that needs to be reimbursed . in tempis , the expenses for this service amount to 300,000 euro per year . based on the calculated savings in subsequent costs of between 3,300 and 4,200 euro for each thrombolysis , an absolute increase of 75 systemic tpa treatments within one year would result in total savings of between 250,800 and 319,200 euro .
therefore , the teleconsultation service proves to be cost - efficient when considering only the consultations for possible thrombolyses . in denmark , the budgetary impact and cost - effectiveness of the national use of thrombolysis for stroke administered via telemedicine were estimated . the incremental cost - effectiveness ratio was calculated to be approximately 50,000 $ when taking a short time perspective ( 1 year ) , but thrombolysis was both cheaper and more effective after 2 years , and cost - effectiveness improved over longer time scales . however , studies conducted from a societal perspective compared with those conducted from an institutional perspective have a tendency to overestimate the total revenue . in the absence of ongoing government grant support , any telestroke sponsoring institution must devise a business model that produces a self - sustaining , break - even or profitable program . the health economic model computations suggest that the macroeconomic costs may balance with savings in care and rehabilitation after as little as 2 years and that potentially large long - term savings are associated with thrombolysis delivered by teleneuromedicine . in the united states , several telestroke projects have been established , mostly by governmental grants , and published in journals ( partners telestroke center ; starr ; stroke doc ; reach ; run - stroke ; clinicaltrials.gov ) . interestingly , the specialist on call ( soc , http://www.specialistoncall.com ) project is a private business model that operates with 15 neurologists covering 65 hospitals in six states , dealing with 3,600 teleconsultations per year . the soc offers flat rates for hospitals , where the hospital pays a one - time fee of 30,000 for the technical equipment and a monthly fee that is adjusted to the size of the hospital . the stroke neurologists are hired by soc and paid monthly for their stroke expert teleneuromedicine service . however , in germany the teleprojects are always connected to comprehensive stroke unit centers and mostly supported by the state system . in saxony , the costs for the technical equipment of a stroke unit center and a cooperating hospital are covered by the saxon ministry of social welfare , so that the implementation of the network infrastructure was possible for each hospital that could fulfill the standardized inclusion criteria for the statewide telestroke network configuration ( 30,000 euro one - time payment for each hospital ) . together with the health insurance companies in saxony it was possible to negotiate a model where each teleconsultation can be billed , with 1/3 going to the cooperating hospital and 2/3 to the stroke unit center . with this billing model , all costs have to be covered per teleconsult , including broadband costs , it specialists , reserves for new equipment , and of course payments for the stroke experts and teaching costs . the state concept allows a stable routine teleconsultation service with incentives to use the system and possibilities for growth in order to improve the quality of the telenetworking system . since a european consensus statement has set the goal of having all persons with acute stroke admitted to specialized treatment facilities , establishing such a teleneuromedicine networking system in nonurban areas might be the solution to the difficult reimbursement situation of insurance companies and the problems in finding enough stroke neurologists . in summary , broad implementation of thrombolysis in stroke is supported by the expanding telestroke networks .
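the budget arithmetic behind the cost figures above can be laid out explicitly ; the sketch below is only an illustration using the per - case savings and annual service cost quoted in the text ( the exact published totals differ slightly , presumably reflecting the precise per - case values used in the original calculation ) .

# illustrative budget-impact arithmetic for a telestroke teleconsultation service
def annual_balance(extra_thrombolyses, saving_per_case, service_cost_per_year=300_000):
    """net annual balance = downstream savings from additional thrombolyses minus service cost."""
    return extra_thrombolyses * saving_per_case - service_cost_per_year

low = annual_balance(75, 3_300)    # lower saving estimate: 247,500 - 300,000 = -52,500
high = annual_balance(75, 4_200)   # upper saving estimate: 315,000 - 300,000 = +15,000

def break_even_cases(saving_per_case, service_cost_per_year=300_000):
    """number of additional thrombolyses per year needed to offset the service cost."""
    return service_cost_per_year / saving_per_case   # roughly 71-91 cases for 4,200-3,300 per case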
especially given the fact that comprehensive coverage by stroke units throughout germany does not appear to be feasible at present , the use of modern teleneuromedicine network systems represents a valuable supplement to the existing system of stroke care . however , the use of teleneuromedicine alone can not replace optimal care provided by comprehensive stroke units . overall , the hospitals in a network must upgrade and optimize their internal structures in a holistic operational process to improve stroke care , but project data show that telestroke and telethrombolysis are practicable and can contribute to the improvement of stroke care in rural hospitals that are too distant from a specialized stroke unit . this investigation shows that there is good acceptance of teleneuromedicine among networking hospitals and that doctors see the opportunities offered by this sort of health care configuration . technical and organizational requirements must be fulfilled in the hospitals , and medical product guidelines should be followed , along with a stable financing system and the possibility of reimbursement by the insurers , so that even small hospitals can participate in this health care configuration . from a socio - economic point of view it is precisely these facilities in rural areas far away from stroke unit centers that can profit most in terms of telestroke care . patient benefits and the resulting potential for cost savings have to be addressed in the teleprojects and in discussions with health care structures , as can be seen in saxony .
introduction. at present, modern telemedicine methods are being introduced that may help to reduce the lack of qualified stroke patient care, particularly in less populated regions. with the help of video conferencing systems, a so-called neuromedical teleconsultation is carried out. methods. the study comprised a multicentered, completely standardized survey of physicians in hospitals by means of a computerized on-line questionnaire. descriptive statistical methods were used for data analysis. results. 119 acute hospitals without neurology departments were included in the study. the most important reasons for participating in a teleneuromedical network are seen as the improvement in the quality of treatment (82%), the ability to avoid unnecessary patient transport (76%), easier and faster access to stroke expertise (72%), as well as better competitiveness among medical services (67%). the most significant problem areas are the financing of teleneuromedicine with regard to the acquisition costs of the technical equipment (43%) and the compensation of the stroke-unit center for the specialists' consultation service (31%), as well as legal aspects of teleneuromedicine (27%). conclusions. this investigation showed that there is high acceptance of teleneuromedicine among cooperating hospitals. however, these facilities have goals in addition to improved quality of stroke treatment. therefore the use of teleneuromedicine must also be associated with long-term incentives for the overall health care system, particularly since the implementation of a teleneuromedicine network system is time consuming and associated with high implementation costs.
1. Introduction 2. Methods 3. Results 4. Discussion of the Empirical Analysis 5. Examples of Existing Telestroke Networks in Routine Practice 6. Conclusions
PMC3627515
x ( re = rare - earth metal , t = transition metal , x = b , si , al , ga , ge , sn ) systems . these compounds are extremely diverse in their structural and physical properties . among them are the following : ( i ) phases that form by stacking of binary cacu5-type fragments and slabs of laves phases with mgzn2 and mgcu2 type ( and/or their ternary ordered derivatives ) and play an important role for improvement of technological characteristics of re ni - based negative electrode material in ni metal hydride batteries ; ( ii ) compounds that can yield magnets appropriate for high - temperature application , namely , reco7 of tbcu7 type , where part of the atoms in the ca site of the cacu5 structure are substituted by the dumbbells of the transition metal and the third element like ti , zr , hf , cu , ga , si , and ag is required to stabilize the structure and increase the magnetoanisotropy ; ( iii ) magnetic materials re2co17 revealing the intergrown cacu5- and zr4al3-type slabs structures where the interstitial sites can be occupied by elements of iiia , iva , or via groups , thus leading to the increase in curie temperature , uniaxial anisotropy , and spontaneous magnetization . the small atomic radius of boron imposes replacement of the cu atom at the wyckoff position 2c in the cacu5 structure ( space group p6/mmm ; ca in 1a ( 0,0,0 ) , cu1 in 2c ( 1/3,2/3,0 ) , cu2 in 3 g ( 1/2,0 , 1/2 ) ) and formation of the ordered ternary substitution derivative ceco3b2 ( ce in 1a , co in 3 g , and b in 2c ) . as relevant to the study presented herein , the ceco3b2 structural unit intergrown with fragments of different structures reveals a variety of borides exhibiting different degrees of structural complexity . for example , the family of structures where the slabs of ceco3b2 are stacked with slabs of the binary cacu5 type or laves phases are frequently encountered among ternary rare - earth borides with co and ni . formation of these structures in unexplored yet multinary systems may result in unpredicted changes of expected properties , and their investigation is necessary to understand and control the behavior of alloys . the diversity of cacu5-derivative structures is enhanced if the ternary rare - earth boride phases with noble metals are considered . for example , ( i ) the series of compounds formed by stacking blocks of ceco3b2 and carh2b2 ( thcr2si2 ) type were observed in eu rh b , y os b , and la ru b systems ; ( ii ) prrh4.8b2 revealed ceco3b2-type slabs and hexagon - mesh rhodium nets . in this respect , systems containing pt and re metals were not investigated ; the only information available on cacu5-type derivatives concerns the crystal structure and physical properties studies for the rept4b series ( ceco4b type , p6/mmm , re = la , ce , pr , sm ) . in this article b system focused on the pt - rich concentration range where we observed a series of new cacu5-related structures . our interest in this investigation was driven not only by the structural flexibility and diversity of cacu5-type derivative phases but also by the interesting physical properties which europium compounds may exhibit , such as , for example , mixed - valence states or magnetic ordering at comparatively high ordering temperatures , observed in eu - based binary laves phases . 
results presented herein expand knowledge on the family of cacu5-derivative structures to (i) the eu5pt18b6x phase, showing a new structural arrangement formed by stacking of inverse thcr2si2-type slabs with cacu5- and ceco3b2-type fragments along the c-axis direction, (ii) a new member of a rather simple structural series exhibiting the combination of cacu5- and ceco3b2-type slabs, namely eupt4b, where a mixed-valence state of eu has been observed, and (iii) a new compound eu3pt7b2, composed of ceco3b2- and laves-phase (mgcu2)-type fragments, showing interesting transport properties accompanied by a relatively large sommerfeld coefficient. samples (about 0.52 g) were prepared by argon arc-melting elemental pieces of europium (99.99%, metal rare earth ltd., china), platinum foil (99.9%, gussa, a), and crystalline boron (98%, alfa aesar, d). due to the low boiling point and high vaporization of europium, eu weight losses were compensated by adding carefully assigned extra amounts of eu before melting. for homogeneity, samples were remelted several times. part of each alloy was wrapped in mo foil, sealed in an evacuated silica tube, and heat treated for 10 days at 1020 k prior to quenching by submerging the capsules in cold water. single crystals suitable for x-ray diffraction studies were isolated from a fragmented annealed alloy of eu3pt7b2; however, for eupt4b and eu5pt18b6x crystals of good quality were obtained from as-cast samples. crystal quality, unit cell dimensions, and laue symmetry of the specimens were inspected on an axs-gadds texture goniometer prior to x-ray intensity data collections at room temperature on a four-circle nonius kappa diffractometer equipped with a ccd area detector employing graphite-monochromated mo kα radiation (λ = 0.071069 nm). no absorption corrections were performed because of the rather regular crystal shapes and small dimensions of the investigated specimens. space groups were determined from analysis of systematic absences performed with the help of the absen program. structures were solved and refined with the aid of the wingx-1.70.00 software package applying the shelxs-97 and shelxl-97 programs. reduced cell calculations and noncrystallographic symmetry tests were performed applying the program platon2003 in order to check for higher lattice symmetry. data collection and refinement parameters for the three structures are listed in tables 1 and 2. isotropic (uiso) and anisotropic (uij) atomic displacement parameters are given in [10 nm]. x-ray powder diffraction patterns collected from the eupt4b and eu3pt7b2 alloys annealed at 1020 k for 10 days, employing a guinier huber image plate system with monochromatic cu kα1 radiation (8° < 2θ < 100°), revealed single-phase materials; however, for eu5pt18b6x several samples synthesized under identical conditions but with varying boron concentration were multiphase and contained mixtures with eupt4b and/or unknown neighboring phases, suggesting a different temperature range of existence (figure 1, supporting information). structural parameters obtained from rietveld refinement of the powder data for all three compounds validate those from the single crystals.
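to illustrate how such a powder pattern can be indexed, the following minimal script (an illustrative sketch, not code used in this work) predicts a few line positions for hexagonal eupt4b from the refined cell reported here (a = 0.56167 nm, c = 0.74399 nm) and cu kα1 radiation, using 1/d^2 = (4/3)(h^2 + hk + k^2)/a^2 + l^2/c^2 and bragg's law λ = 2d sinθ.

import math

A, C = 5.6167, 7.4399   # lattice parameters in angstrom (0.56167 nm, 0.74399 nm)
LAM = 1.54056           # cu k-alpha1 wavelength in angstrom

def two_theta(h, k, l):
    """return the 2*theta position (degrees) of reflection (h k l), or None if unreachable."""
    inv_d2 = (4.0 / 3.0) * (h * h + h * k + k * k) / A ** 2 + l * l / C ** 2
    d = 1.0 / math.sqrt(inv_d2)
    s = LAM / (2.0 * d)
    return None if s > 1.0 else 2.0 * math.degrees(math.asin(s))

for hkl in [(1, 0, 0), (0, 0, 1), (1, 0, 1), (1, 1, 0), (2, 0, 1), (1, 1, 2)]:
    tt = two_theta(*hkl)
    if tt is not None:
        print(hkl, "2theta = %.2f deg" % tt)

comparing such calculated positions with the observed guinier pattern is one simple way to confirm isotypism before a full rietveld refinement.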
the third phase , eu5pt18b6x , is an interesting material with respect to magnetism according to structural and bonding features described below , and currently , special efforts are aimed at optimization of synthesis conditions for this new compound . a superconducting quantum interference device ( squid ) served for determination of the magnetization from 2 to 300 k and in fields up to 7 t where bar - shaped polycrystalline specimens of about 20 mg were used . specific heat measurements on samples of about 1 g were performed at temperatures ranging from 1.5 to 120 k by means of a quasi - adiabatic step heating technique . electrical resistivity and magnetoresistivity of bar - shaped samples ( about 1 1 5 mm ) were measured using a four - probe ac - bridge method in the temperature range from 0.4 k to room temperature and fields up to 12 t. systematic extinctions in the single - crystal x - ray data were consistent with three possible space group types : fmmm , fmm2 , and f222 . the structure was solved in the centrosymmetric space group , and direct methods provided 9 atom positions , 3 of which in further refinement were assigned to europium . refinement of the structure with all sites fully occupied resulted in rather large anisotropic displacement parameters for two pt sites ( pt1 in 16 m and pt5 in 8i ) with high residual electron densities ( 29 000 and 20 000 e / nm ) near these atoms . accordingly , the platinum atom sites were split in two near - neighboring positions ( pt1pt11 0.037 nm , pt5pt55 0.029 nm ) with occupancy parameters refined to 0.8/0.2 for pt1/pt11 and pt5/pt55 . in the next step , all atoms were refined with anisotropic displacement parameters except for the atoms on split sites . subsequent difference fourier synthesis revealed two significant residual peaks ( about 11 400 and 7000 e / ) at a distance of about 0.2 nm from the nearest platinum atoms , which were attributed to boron atoms ( 16 m and 8f ) . refinement with both boron positions fully occupied led to a low reliability factor ( rf = 0.0353 ) ; however , it showed a large isotropic thermal parameter for b2 in 8f . refinement of the b2 occupancy parameter did not improve the value of uiso but reduced the occupancy factor to about 0.5 . at this point collected data were also processed in space group f222 , which provides the possibility to split the b 8f site into two crystallographically distinct sites 4c ( 1/4,1/4,1/4 ) and 4d ( 1/4,1/4,3/4 ) . despite almost identical agreement factors , refinement did not improve the isotropic displacement parameters of boron atoms which were found to partially occupy both 4-fold sites ( occupancy ratio 33%:67% ) , suggesting a disordered distribution of boron atoms . the small difference between these solutions concerned also the positions of b1 and the pt1pt11 split sites which all were slightly shifted along the a axis due to the free x parameter of the 16k ( x , y , z ) site ( pt1 x = 0.5010(13 ) , y = 0.1745(2 ) , z = 0.28417(5)/pt11 x = 0.008(5 ) , y = 0.1455(9 ) , z = 0.2098(2 ) ; b1 x = 0.011(12 ) , y = 0.167(2 ) , z = 0.1071(6 ) ) . except these slender deviations , refinement in f222 revealed eu and pt atoms in the wyckoff positions corresponding to those in space group fmmm . accordingly , the test for higher symmetry applying platon indicated space group fmmm . 
reconsidering the refinement in fmmm, it was found that the isotropic thermal parameter of b2 can be improved by fixing the occupancy to a predetermined value: in agreement with the results obtained for space group f222, the most satisfactory value of the thermal displacement parameter was observed for an occupation factor fixed at 0.5 for b2 in 8f. final refinement in fmmm resulted in a reliability factor as low as 0.0331 and residual electron densities smaller than 3710 e/nm. the structure of eu5pt18b6x, shown in figure 1a, can be considered as built up of two kinds of blocks alternating along the c direction. one block is composed of two layers of edge-connected triangular prisms which share their faces to form hexagonal channels linked via the 3^6 net of platinum and europium atoms (figure 1b), while the other one consists of chains of edge-connected platinum tetrahedra interlinked via europium atoms (figure 1c). the 3.6.3.6 kagome nets of platinum atoms are slightly puckered, while the 3^6 nets are flat. one europium atom, namely eu3, is located in the center of a hexagon formed by pt6 (at z = 0), while eu2 is slightly shifted along z from the plane of the b1 atoms (eu2 in 8i, z = 0.39475; b1 in 16m, z = 0.1070). depending on which split position pt1/pt11 is locally occupied, the chains of tetrahedra are bridged via (i) europium (for pt1) or (ii) europium and platinum atoms (for pt11, pt11-pt11 0.2740 nm) to form sheets extending perpendicular to c. the platinum tetrahedra are centered by boron atoms; however, since the b2 site within this block is only fractionally (and randomly) occupied (occ. about 0.5), only part of the tetrahedra actually contain boron. figure 1: (a) crystal structure of eu5pt18b6x with anisotropic displacement parameters for atoms from single-crystal refinement. (b) boron-centered pt triangular prisms (pt2, pt3, pt4, pt5/pt55) and the 3^6 net of pt6 accommodating eu3 atoms at z within 0.33-0.66 (perspective view along the c axis). (c) perspective view of boron-centered pt tetrahedra (b2 in 8f (1/4, 1/4, 1/4)) along the c axis. eu1 is coordinated by 19 atoms, revealing the combination of an elongated rhombic dodecahedron with the coordination polyhedron which is typical for the rare-earth atom in the cacu5 structure. both fragments are stacked via the hexagon formed by pt1 atoms (figure 2a, table 1). the hexagonal face formed by platinum atoms (pt3 and pt5) is capped by a eu2 atom. the shapes of the coordination polyhedra of eu2 and eu3 are similar: both atoms are coordinated by 12 pt atoms forming two hexagonal faces of the coordination sphere. while in the eu2 polyhedron these are linked by pt2-pt3 and pt4-pt5 contacts and form a hexagonal prism with the hexagonal faces capped by eu atoms (eu1 and eu3), in the case of eu3 the distances between the hexagonal faces are long (> 0.4 nm) and preclude formation of pt2-pt2 and pt4-pt4 bonds. the coordination sphere of eu3 also includes six pt6 atoms located around the waist of an imaginary hexagonal prism; thus, those 18 platinum atoms form a cage elongated in the direction of the c axis (a pseudo-frank-kasper polyhedron, which can also be described as two face-connected hexagonal antiprisms) with the hexagonal faces capped by eu2 atoms. in the case of eu2, the 6 boron atoms centering the rectangular faces of the hexagonal prism are located too far from the central atom to infer strong bonding (eu2-b2 distances are 0.316 and 0.322 nm).
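as general crystallographic background (a textbook relation, not a formula taken from this paper): site occupancy and displacement parameters can partly compensate each other in such refinements because both enter the calculated structure factor multiplicatively,

f(hkl) = \sum_j occ_j \, f_j \, \exp[2\pi i (h x_j + k y_j + l z_j)] \, \exp(-8\pi^2 u_j \sin^2\theta / \lambda^2),

which is why fixing occ(b2) = 0.5, as described above, removes this correlation and lets the displacement parameter of b2 refine to a physically reasonable value.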
( c ) coordination polyhedron of pt3 as representative of the atom environment for pt2 , pt4 , and pt5 . ( d g ) coordination polyhedra of pt6 ( d ) , b2 ( e ) , pt11 ( f ) , and pt55 ( g ) . pt1 has 10 atoms at coordination distances , and its polyhedron is derived from a tetragonal antiprism ( figure 2b ) . pt2 , pt3 , pt4 , and pt5 ( figure 2c ) are surrounded by distorted icosahedra with one additional platinum atom ; for all four atoms the icosahedra are formed by three almost interperpendicular rectangles made of ( i ) 4 pt , ( ii ) 2 pt and 2 b , and ( iii ) 4 europium atoms . the coordination sphere of pt6 resembles the coordination polyhedron of cu ( 2c ( 1/3 , 2/3 , 0 ) ) in cacu5 and includes only platinum and europium atoms ( in total 13 , 2 of them are at a rather long distance pt62pt6 , 0.3256 nm ) ( figure 2d ) . b1 is coordinated by 6 pt atoms forming a trigonal prism ; three eu atoms are located against the triangular faces of the trigonal prism ( b1eu2 0.316 nm and b12eu2 0.322 nm ) ( figures 1b and 2a ) . b2 centers the tetrahedron made of near - neighboring platinum atoms ; four eu atoms complete the coordination sphere , forming a tetragonal antiprism ( figure 2e ) . the short distances between split positions ( pt1pt11 0.03682 nm and pt5pt55 0.03021 nm ) and their occupancies ( 0.80/0.20 for both pt1/pt11 and pt5/pt55 ) allow us to assume that on average the pt1 and pt5 atoms are present in four of five unit cells while one is filled with pt11 and pt55 . coordination spheres for the atoms when the split sites pt11 and pt55 are considered replicate the shape of those with ( or for ) pt1 and pt5 differentiating slightly in coordinating distances due to small shifts of atom positions , with the exception of pt11 which has one more pt11 in contact distance ( figure 2f and 2 g ) . unit cell dimensions of the eupt4b single crystal and x - ray powder diffraction spectra recorded from both the annealed and the as - cast alloys suggested isotypism with ceco4b - type structure . accordingly , the positions of eu and pt atoms obtained from structure solution in the space group p6/mmm from the x - ray single - crystal data were consistent with the ceco4b type . refinement of the structure with anisotropic displacement parameters converged to r = 0.0259 with residual electron densities smaller than 4940 e / nm and revealed full occupancy of all atom sites ( table 2 ) . the structure of eupt4b consists of ceco3b2- and cacu5-type slabs alternating along the c axis . in the coordination sphere of boron ( ceco3b2-type fragment ) , six pt2 atoms are located at equal distance from the central atom , forming a regular triangular prism [ bpt26 ] ( figure 3a ) . three europium atoms are located against the rectangular faces of the trigonal prism , revealing a b eu1 distance 0.32428(1 ) nm which is larger that the sum of atomic radii of the elements . thus , eu1 is surrounded by 12 pt2 arranged in the shape of a hexagonal prism ; two eu2 atoms cap the hexagonal prisms of eu1 at a distance of 0.3720 nm from the central atom ( figure 3b ) . similar to the prototype structure and to eu5pt18b6x , eu2 is located inside a rather large cage elongated along z ( deu2pt2 = 0.3611 nm ) and built of 18 platinum atoms , forming two face - connected hexagonal antiprisms ; two eu1 atoms cap the hexagonal faces . both atoms in the cacu5 block ( eu2 and pt1 ) reveal unusual thermal displacements : eu2 exhibits a rather large thermal motion along the hexagonal axis with u33:u11(u22 ) of ca . 
4:1 , while pt1 shows enlarged values u11 and u22 ( u11(u22):u33 of about 13 ) . no fractional or partial occupancies for atom positions were observed from single - crystal data , and refinement of atoms on split positions was not successful . trial refinements in the space group types with lower symmetry ( p62 m , p6m2 , p6 mm , p622 , p6/m , p6 , p6 ) yielded inferior results . comparable features of thermal ellipsoids of atoms located in the channels formed by transition - metal atoms were hitherto also observed for the boride structures related to the ceco3b2 type , such as in5ir9b4 ( p62 m , a = 0.5590 nm , c = 1.0326 nm ) and lani3b ( imma , a = 0.4970 nm , b = 0.7134 nm , c = 0.8300 nm ) ; for the latter structure , the pronounced anisotropy of the atom thermal displacements prelude the symmetry change upon hydrogenation . in the case of eupt4b , a possible reason may be that in order to optimize the distance to its pt2 neighboring atoms eu2 is delocalized between the layers of pt2 , consequently affecting ( since pt1 and eu2 are at the same height ) the thermal displacements of pt1 . enlarged displacement parameters of eu1 are probably indicative of a weak rattling of the europium atom within the 14-atom cage . ( a ) perspective view of the eupt4b structure along the c - axis direction emphasizing the boron - centered pt2 triangular prisms . ( b d ) coordination spheres of eu1 and eu2 ( b ) , pt2 ( c ) , and pt1 ( d ) . the pt1pt2 bonds are omitted in b. for eu3pt7b2 , systematic extinctions characteristic for the trigonal space group r3m and unit cell dimensions proposed isotypism with the ca3al7cu2-type structure . structure solution by direct methods confirmed the arrangement of the heavy atoms analogous to that observed for ca3ni7b2 ( table 2 ) . the structure is composed of ceco3b2- and mgcu2-type fragments alternating along z. each successive block is shifted with respect to the former one in the ( 110 ) plane by a half unit cell . the perspective view of the eu3pt7b2 unit cell along the c axis is presented in figure 4a , showing triangular prisms formed by 6 pt1 around b. the ceco3b2-type slabs are formed by 12 pt1 and 2 eu1 atoms surrounding eu2 ( eu2pt1 0.31525 nm , eu2eu1 0.32982 nm ) to form the bicapped hexagonal prism ( figure 4c ) . six boron atoms are located in front of the rectangular faces of the hexagonal prism at the distance 0.3203 nm , which is too long to assume bonding interaction ( compare , for example , with ca3ni7b2 , where dca the coordination polyhedron of pt2 exhibits a shape analogous to those of transition atoms in the binary mgcu2-type laves phase : an icosahedron formed by 6pt1 and 6eu1 reveals interatomic distances ( table 2 ) which are comparable with distances in eupt2 ( mgcu2 type , pt6pt 0.2727 nm , pt6eu 0.3198 nm ) . kasper polyhedron [ pt12eu4 ] of eu1 is slightly distorted in comparison with that of eu in eupt2 ( figure 4b ) . ( a ) perspective view of the eu3pt7b2 structure along the c - axis direction emphasizing the boron - centered pt1 triangular prisms . ( b e ) coordination spheres of eu1 ( b ) , eu2 ( c ) , pt2 ( d ) , and pt1 ( e ) . ( f ) twenty - membered cage capturing ca2 in the ca3al7cu2 structure . 
as compared to the prototype ca3al7cu2 structure , significant changes in the coordination sphere of atoms are observed for the ceco3b2-type block : ( i ) the distance between the 3636 kagom net formed by al atoms is long ( about 0.400 nm ) in contrast to the corresponding distance between platinum atoms in eu3pt7b2 ( dpt1pt1 = 0.29959 nm ) , thus delivering a different shape of the coordination polyhedron of ca ; ( ii ) due to the larger atomic radius of copper with respect to b , the distance ca cu of 0.324 nm is sufficient to indicate bonding interaction , thus increasing the coordination number of ca to cn = 20 ( figure 4f ) as compared to cn eu2 = 14 in eu3pt7b2 . in contrast to eupt4b , in the present structure none of the atoms show significant anisotropy in their thermal vibration ( table 2 ) . the investigated structures represent three families of structures revealing the cacu5-type block in conjunction with other structural fragments . in the discussion below , the structures are arranged in order according to increasing complexity of structural arrangements . the eupt4b structure ( ceco4b type ) ( figure 5f ) consists of alternating slabs of cacu5 type ( a ) ( figure 5a ) and slabs of its ternary derivative ceco3b2 type ( b ) ( figure 5b ) . it is a simplest representative of the structural series described in typix under the general formula rm+nt5m+3nm2n , with m = 1 , n = 1 ( m and n correspond to number of cacu5-type and ceco3b2-type blocks , respectively ) . similar to the prototype structure , the hexagonal channels filled with eu are formed by edge - connected trigonal prisms [ bpt6 ] in eupt4b and alternate with 18-membered platinum cages , capturing europium atoms along the c axis . while in the ceco4b structure the b atoms are located at 0.2889 nm from the central ce atom and assume bonding , this distance is rather long in the europium isotype with platinum . ( a ) cacu5- and ( b ) ceco3b2-type structures ( for both structures the origin is shifted by 0,0,1/2 ) . ( c ) eupt2 structure ( mgcu2 type , space group fd3m , origin shift 1/4 , 0 , 3/4 ) . ( e g ) structural relationships for ( e ) eu3pt7b2 , ( f ) eupt4b , and ( g ) eu5pt18b6x structures . relative arrangements of structural blocks are indicated with symbols a , b , c , and d. in contrast to the relatively small unit cell and simple structure of eupt4b , the two remaining structures , eu3pt7b2 and eu5pt18b6x , exhibit rather large unit cells and nontrivial stoichiometries . this binary compound belongs to the rhombohedral branch of a structural series formed within composition range rm2rm5 by stacking the fragments of cacu5 type and laves phase and can be described by the formula r2m+nm4m+5n , where m accounts for the number of laves - type slabs ( r2m4 ) and n is a number of cacu5-type slabs . since only the structures with m = 1 have been observed for binary structures , the ternary representatives of this series which are built by combining ceco3b2- and laves - type structural slabs ( ca3ni7b2 , ce2ir5b2 ) follow the formula r2+nm4 + 3nx2n : 2mgcu2 + nceco3b2 . cu system . in ca3al7cu2-type structures with boron ( ca3ni7b2 , eu3pt7b2 ) , the transition - metal atoms adopt the sites of aluminum in the mgcu2-type block and boron atoms replace copper in the cacu5-type block . figure 5e shows the arrangement of ceco3b2-type slabs ( b ) and mgcu2-type slabs ( c ) in the eu3pt7b2 structure . 
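written out explicitly, the series formulas quoted above can be checked against the compositions reported here (simple bookkeeping, with x standing for boron): the cacu5/ceco3b2 intergrowth series re(m+n)t(5m+3n)x(2n) gives, for m = n = 1, re2t8x2, i.e. 2 x ret4x, matching eupt4b (ceco4b type); the binary laves/cacu5 series r(2m+n)m(4m+5n) gives, for m = n = 1, r3m9, i.e. the rm3 (puni3-type) composition; and its ternary counterpart r(2+n)m(4+3n)x(2n) = 2 mgcu2 + n ceco3b2 gives, for n = 1, r3m7x2, matching ca3ni7b2 and eu3pt7b2. the third compound, eu5pt18b6x, discussed below, combines three kinds of slabs (cacu5, ceco3b2, and a site-exchange thcr2si2 block) and does not reduce to either of these simple series.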
it can be considered as an intergrowth of three kinds of structure blocks ( figure 5 g ) : one is a cacu5 type ( a ) , the second having the atom arrangement of its ordered ternary derivative ceco3b2 ( b ) , and the third reveals the slab of the site - exchange variant of the thcr2si2 structure with b atoms adopting the sites of cr ( d ) . the body - centered structure of thcr2si2 ( space group i4/mmm , th in 2a : 0 , 0 , 0 ; cr in 4d : 0 , 1/2 , 1/4 , si in 4e : 0 , 0 , z ) , an ordered variant of baal4 , is widely distributed among ret2x2 compounds ( t = transition metal , x = p element ) . the unit cell of thcr2si2 can be described as a stacking of infinite layers of interconnected tetragonal [ cr4 ] pyramids around the si atom ( wyckoff position 4e , commonly named as pyramidal site ) running perpendicular to the c axis with a layer of thorium atoms between these pyramids . cr atoms are tetrahedrally surrounded by four si atoms ( wyckoff position 4d , tetrahedral site ) . chemical bonding , atomic site preferences as a function of electronegativity of the constituent elements , size effect with respect to the structural stability , phase widths , and physical properties have been studied for different combinations of the elements and particularly developed recently for pnictides due to discovery of superconductivity in the series of iron arsenides with thcr2si2 type . ternary rare - earth borides with a thcr2si2-type structure are quite rare except for few representatives , such as reco2b2 ( re = y , la , pr , nd , sm , gd er ) , refe2b2 ( re = y , gd these structures the transition metal occupies the atom site of cr and boron is placed in the position of si . accordingly , each t atom is coordinated by four b atoms ; a recent reinvestigation of the thcr2si2 type laco2b2 ( a = 0.36108 nm , c = 1.02052 nm , z = 0.3324(5 ) ) revealed a relatively short co b bond ( e.g. , 0.20 nm ) , indicating certain compression of the cob layer along the c axis and an elongated la there are two baal4-type derivative structures encountered for composition ret2x2 : the structures of thcr2si2 ( described above ) and cabe2ge2 , which is built by intergrowth of thcr2si2-type slabs and slabs of its site - exchange variant along the direction ( and thereby one - half of t and x atoms occupy the tetrahedral and pyramidal sites and vice versa , respectively ) . as reported by parthe et al . , formation of compounds revealing only inverse thcr2si2-type arrangement is rare ; however , this arrangement occurs as slabs in intergrown structures ( for example , in cenisi2 , bacusn2 , la3co2sn7 ) . among borides , formation of inverse thcr2b2-type structure was found for znir2b2 where layers of edge - connected [ bir4 ] tetrahedra ( ir in 4e ( 0 , 0 , z , z = 0.37347 ) , b in 4d ( 0 , 1/2 , 1/4 ) ) are separated by 4 networks of zinc atoms ( zn in 2a ( 0 , 0 , 0 ) ) . similarly , in eu5pt18b6x , four atoms of platinum form tetrahedra around boron atoms , however exhibiting a higher degree of compression along the b axis : the pt1b distance in the eu5pt18b6x is 0.1978(1 ) nm ( and 0.2239(8 ) nm for pt11 in split position ) as compared to the length of the ir b bond of 0.2150 nm in znir2b2 showing the tetrahedral angles 2 137.07 , 2 105.25 , 2 90.27 in eu5pt18b6x ( 2 126.09 , 2 100.79 , 2 102.91 for pt11 ) and 4 121.68 , 2 87.11 in znir2b2 . because of limited spatial dimension of the thcr2si2 slab in the eu5pt18b6x structure ( i.e. 
, one - half a unit cell cut along ( 111 ) ) , the tetrahedra do not form infinite layers but are arranged in one - dimensionally linked chains running infinitely along the a axis . previous theoretical studies based on mlliken overlap population analysis suggested that the element with greater electronegativity is more strongly bound in the 4e site . the atoms site preferences in the structures of three discussed borides ( i.e. , laco2b2 , znir2b2 , and thcr2si2 block in the eu5pt18b6x ) are consistent with the electronegativity on the pauling scale of boron , relative to those of transition elements co , ir , and pt . among a large family of reported ternary rare - earth transition - metal borides , the eu phases are usually missing . for only a few cacu5-type derivative ternary borides with europium have the precise structural parameters and physical properties been investigated . these are ( i ) the eu2rh5b4 and eu3rh8b6 structures composed of the ceco3b2- and carh2b2-type fragments , ( ii ) eu3ni7b2 of ceco4b type exhibiting one nickel site partially occupied by a mixture of europium and nickel atoms . in both eu2rh5b4 and eu3rh8b6 compounds , the eu atoms have been found in the divalent state , revealing magnetic moments only slightly smaller than the theoretical value of 7.94 b . euir2b2 of carh2b2 type can be considered as a metal - deficient derivative structure of ceco3b2 ; according to a plot of the unit cell volumes vs lanthanide atomic number , eu was found to be divalent . comparing the interatomic distances and relating them to the magnetic properties , one can see that in both eu2rh5b4 and eu3rh8b6 ( eu 4f ) the eu eu distances are 0.3207 , 0.3654 nm and 0.3113 , 0.3614 nm respectively , while eu rh distances vary between 0.3079 and 0.3256 and 0.3054 and 0.3279 nm . euir2b2 shows similar distance variations : the eu eu bond is 0.3852 nm , and the distances eu ir are 0.3058 and 0.3313 nm . the differences in bond length analogous to eu2rh5b4 , eu3rh8b6 , and euir2b2 can be found in eupt4b , where one eu atom , namely , eu1 ( ceco3b2-type block ) exhibits rather short contacts with pt2 of 0.31605 nm , whereas eu2 ( cacu5-type block ) exhibits pt neighbors at 0.32428 and 0.36113 nm . there is one rather long bond between europium atoms eu1eu2 0.3720 nm in eupt4b . the deviations of the lattice parameters of the europium compound from the lanthanoid contraction in the rept4b series also confirm the assumption that eu is not trivalent in eupt4b ( figure 6 ) . similarly , in eu5pt18b6x the eu eu contacts are long and the distances between eu and pt atoms are rather heterogeneous , ranging within 0.3128 and 0.3611 nm . for the europium compound , lattice parameters were obtained from rietveld refinement of single - phase eupt4b alloy used for physical properties measurements . in order to obtain the electronic configuration ( ec ) of the eu ions and thus the magnetic state , the magnetization of eupt4b and eu3pt7b2 was measured and analyzed in detail . the temperature dependence of the magnetization and susceptibility for various fields is presented in figures 7 and 8 , respectively . magnetic susceptibility data = m / h scale pretty well for 0.1 , 1 , and 3 t above the ordering temperature being indicative that the samples are free from magnetic impurities and other magnetic secondary phases ; data are displayed in figure 7 as vs t for the 3 t run , only . temperature - dependent magnetization of eupt4b ( a ) and eu3pt7b2 ( b ) . 
both eupt4b and eu3pt7b2 order magnetically: eu3pt7b2 shows a transition at 57 k, while eupt4b orders below 36 k, as determined from low-field magnetization measurements and corresponding arrott plots (see below). the magnetic susceptibility in the paramagnetic region is accounted for by the modified curie-weiss law χ = χ0 + c/(t - θp). the paramagnetic curie temperatures θp = 60 and 40 k, together with effective magnetic moments μeff = 8.0 and 7.1 μb per eu atom (derived from the curie constant c), were obtained as a result of least-squares fits to the susceptibility data of eu3pt7b2 and eupt4b, respectively (solid lines, figure 7), with a temperature-independent susceptibility χ0 of about 3 10 cm/g for both compounds. while the paramagnetic moment of eu3pt7b2, recounted per one europium atom, is almost that expected for the theoretical value of the free eu2+ ion (μeff = 7.94 μb), the effective moment obtained for eupt4b is significantly smaller, pointing to a mixed- or intermediate-valence state of eu. the fact that eu in eupt4b possesses two inequivalent lattice sites suggests that the more static case (i.e., mixed valence) might apply. in such a case, the system can be treated as a mixture of eu2+ and eu3+ atoms located either at the 1(a) or the 1(b) site. following this approach, in evaluating the paramagnetic moment of eupt4b we use a mean literature value for the effective moment of eu3+, μeff(eu3+) = 3.5 μb, rather than the theoretically vanishing effective magnetic moment of a pure eu3+ ground state. the fraction x of eu atoms in the 2+ state is thus estimated to be x = 0.75, whereas a vanishing effective moment for the eu3+ state gives a fraction of 80% in the eu2+ state. although the latter is presumably an overestimate, analysis of the saturation magnetization and heat capacity (see below) supports these assumptions. the temperature-dependent magnetization, displayed in figure 8 for various fields, exhibits for eupt4b significant irreversibilities of the magnetization between zero-field cooling (zfc) and field cooling (fc), which disappear only for fields larger than 1 t. these irreversibilities can be attributed to domain wall pinning associated with a remarkable hysteresis and a coercivity of 0.32 t at 2.8 k (see figure 9), as a consequence of the magnetocrystalline anisotropy due to the cacu5 building blocks. on the contrary, there is hardly any difference between zfc and field cooling detectable even in the low-field regime for eu3pt7b2, which also yields reversible magnetic isotherms without a significant hysteresis, meaning the coercivity is 3 orders of magnitude smaller than for eupt4b. a comparison of the magnetic isotherms of both compounds is shown in figure 10, where the isotherms are displayed as m2 versus h/m plots (arrott plots). it should be noted that in the case of eupt4b, with its pronounced hysteresis, only the demagnetization data, where the rotation of the magnetization plays the dominant role, are used for the arrott plots. deviations from the expected linear behavior of the arrott plots are observed frequently, see, e.g., ref (59) and references therein, in particular with a curvature symmetrical with respect to tc, i.e., a negative curvature at t < tc and a positive curvature at t > tc. this is indeed observed for eupt4b (figure 10a), and such symmetrical deviations about tc can be attributed to spatial variations of the magnetization and/or spin fluctuations.
as we have a stoichiometric compound, these deviations may arise from fluctuating moments associated with the significant thermal motion of eu2 along the hexagonal axis and/or with the proposed weak rattling of the eu1 atoms within the 14-atom cage, rather than from a heterogeneous magnetization. on the other hand, it was shown that a substantial uniaxial magnetocrystalline anisotropy causes a remarkable downturn of the arrott plots below tc. in this model, the negative curvature of the arrott plots below tc can be accounted for as a result of domain rotations in the nonoriented crystallites against the random but uniaxial anisotropy fields ha. a rather simple way to estimate ha in terms of this model is to extrapolate the linear part of the arrott plot backward to obtain the spontaneous magnetization; taking 91.3% of that value and making a parallel forward extrapolation gives an intersection with the experimental arrott plot, yielding a value of h/m where this particular field corresponds to the anisotropy field ha (see the dashed lines in figure 10b; for further details see ref (59)). using this simple estimate we obtain for eu3pt7b2 an anisotropy field of about 1 t. although the magnetization at 7 t and 2.8 k is not in a fully saturated state for both compounds, we use that value for the saturation magnetization ms, while the spontaneous magnetization m0 is obtained from the backward extrapolation of the arrott plot as mentioned above. this yields for eu3pt7b2 ms = 7.43 μb/mol-eu and m0 = 7.24 μb/mol-eu, and for eupt4b ms = 5.13 μb/mol-eu and m0 = 3.85 μb/mol-eu. both the saturation and the spontaneous magnetization of eu3pt7b2 are well above the value expected for a eu2+ ion (ms = gj = 7 μb), indicating that a substantial 5d conduction electron polarization contributes to the total moment. the significantly smaller saturation and spontaneous moments of eupt4b indicate, in line with the results for the effective moments, a mixed-valence state. under the assumption that the conduction electron polarization is of similar magnitude in both compounds, the ratio of their saturation moments indicates that 69% of the eu ions are in the eu2+ state in eupt4b, while a somewhat smaller fraction of about 53% is obtained using the ratio of the spontaneous moments. the latter is presumably an underestimate caused by the large high-field susceptibility of eupt4b, also visible in the larger slope of the arrott plot in comparison to that of eu3pt7b2. further information on the ground state properties of eupt4b and eu3pt7b2 can be gained from specific heat measurements. figure 11: temperature-dependent specific heat cp of eupt4b (a) and eu3pt7b2 (b); insets: low-temperature details together with least-squares fits (solid lines) based on a ferromagnetic spin wave model. both eupt4b and eu3pt7b2 exhibit distinct λ-like anomalies around 35 and 57 k, respectively, idealized in figure 11 by solid lines, which can be attributed to a second-order phase transition of magnetic origin. a nonmagnetic isostructural compound to be used as an analogue was not available for a full comparative analysis. however, it is easily seen that the height of the anomaly, Δcp = 56 j/(mol k) in the case of eu3pt7b2, corresponds well to a mean-field-like transition of a j = 7/2 system (Δcp = n·5r·j(j + 1)/(2j² + 2j + 1), where n is the number of re atoms per mole). considering three eu atoms per mole yields Δcp = 60.39 j/(mol k), in fair agreement with experiment.
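the numbers quoted above can be verified by short back-of-the-envelope checks (simple arithmetic on the reported values, not additional data):
(i) from the effective moments, μeff² = x·μ²(eu2+) + (1 - x)·μ²(eu3+) gives x = (7.1² - 3.5²)/(7.94² - 3.5²) ≈ 0.75, and x = 7.1²/7.94² ≈ 0.80 if the eu3+ moment is taken as zero;
(ii) the valence fractions from the magnetization follow as ms(eupt4b)/ms(eu3pt7b2) = 5.13/7.43 ≈ 0.69 and m0(eupt4b)/m0(eu3pt7b2) = 3.85/7.24 ≈ 0.53;
(iii) the mean-field jump for j = 7/2 is Δcp = 5r·j(j + 1)/(2j² + 2j + 1) = 5·8.314·15.75/32.5 ≈ 20.1 j/(mol k) per eu ion, i.e. about 60.4 j/(mol k) for the three eu atoms of eu3pt7b2.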
the jump of the specific heat of eupt4b right at the magnetic phase transition temperature is only about 13 j/(mol k), in comparison with the expected value of 20.13 j/(mol k). this reveals that only a fraction of the eu ions is involved in the magnetic ordering (i.e., 65% eu2+ and 35% eu3+), which is in line with the broad range of estimates from the magnetic data, where we obtained 53% eu2+ as a lower and 80% eu2+ as an upper limit. low-temperature heat capacity data were analyzed in terms of a ferromagnetic spin wave model including a spin wave gap, i.e., cp = γt + βt³ + δt^(3/2) exp(-Δ/t), with γ the sommerfeld value and β related to the low-temperature debye temperature θd via θd = (1944n/β)^(1/3). results of least-squares fits are shown as solid lines in the insets of figure 11, revealing excellent agreement, with γ = 28 and 206 mj/mol k2 (the latter corresponding to an effective 68 mj/mol k2 per europium atom), β = 0.000222 and 0.000702 j/mol k4, δ = 0.36 and 0.46 j/mol k^(5/2), and Δ = 12.6 and 10 k for eupt4b and eu3pt7b2, respectively. both eu-based compounds are characterized by sommerfeld values well beyond those of simple metals, pointing to strong electron correlations induced by intra-atomic 4f-5d exchange in eu ions in combination with a significant hybridization of eu and pt 5d orbitals. the latter is also corroborated by the high saturation magnetization of eu3pt7b2, which significantly exceeds the expected saturation moment of free eu2+ ions. we note that largely enhanced sommerfeld values due to f-d exchange have been reported for various gd intermetallics such as gd3co, gd3rh, and gd-ni binaries. we studied the electrical resistivity of eupt4b and eu3pt7b2 from 0.3 k to room temperature (see figure 12). the data evidence metallic behavior in both cases with rather low overall resistivities. unlike in eu3pt7b2, in the resistivity curve of eupt4b a slight change of slope is observed at around 240 k, pointing to some additional, unanticipated scattering processes. for both compounds, anomalies of ρ(t) are observed at the respective magnetic transition temperatures. figure 13 represents the magnetic field dependence of the resistivity of eu3pt7b2 and eupt4b. although the anomaly in both compounds is of ferromagnetic origin, application of a magnetic field causes different responses of the resistivity. the resistivity of eupt4b decreases slightly in the entire temperature range studied, and the ρ(t) anomaly is suppressed by fields of 6 t. the magnetoresistance of eu3pt7b2, on the other hand, is positive at low temperatures as well as in the high-temperature paramagnetic range, and negative only in a narrow region around the transition temperature. although the ferromagnetic ground state of eu3pt7b2 leads, in general, to a negative magnetoresistance, the classical magnetoresistance can overcompensate the one originating from magnetic interactions, resulting in an increase of ρ(t) with increasing magnetic field, at least for certain temperature ranges. the dominance of the classical magnetoresistance in eu3pt7b2 as compared to eupt4b might follow from the fact that the overall resistivity, specifically the residual resistivity, is much lower here than in the case of eupt4b. figure 13: temperature-dependent electrical resistivity of eu3pt7b2 (a) and eupt4b (b) as a function of magnetic field. to quantitatively evaluate ρ(t) of eu3pt7b2, the experimental data were first split into two parts, i.e., a low-temperature region (below the magnetic phase transition) and a high-temperature paramagnetic part.
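two side remarks before the resistivity analysis that follows. first, assuming n is the number of atoms per formula unit, the fitted β values imply debye temperatures θd = (1944·n/β)^(1/3) ≈ (1944·6/0.000222)^(1/3) ≈ 374 k for eupt4b and (1944·12/0.000702)^(1/3) ≈ 321 k for eu3pt7b2, the latter reasonably close to the θd = 312 k extracted below from the resistivity. second, the phonon contribution referred to below is conventionally the bloch-grüneisen integral (standard form, quoted here for reference, not a formula with parameters specific to this work),

ρph(t) = a (t/θd)^5 \int_0^{θd/t} x^5 / [(e^x - 1)(1 - e^(-x))] dx,

so that the paramagnetic-range resistivity is fitted as ρ(t) = ρ* + ρph(t), with ρ* and θd as the adjustable parameters.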
in the paramagnetic regime the behavior of the resistivity of eu3pt7b2 was described by a bloch-grüneisen term, corresponding to the resistivity originating from scattering of conduction electrons on phonons, plus a temperature-independent contribution ρ* = ρ0 + ρmag, with ρmag describing the interaction of conduction electrons with localized magnetic moments. in the absence of crystalline electric field effects, a debye temperature θd = 312 k and ρ* = 32 μΩ cm were obtained from a least-squares fit in the temperature region above 80 k (dashed line, figure 12). in the low-temperature region, the resistivity of a ferromagnetic material with spin wave dispersion ω ∝ k² is expected to behave like t². moreover, as a consequence of the enlarged γ value derived from the specific heat measurements, electron-electron scattering should also be taken into account and is in turn proportional to t² at low temperatures as well, making it impossible to distinguish between the two phenomena. as a result of the fit using ρ = ρ0 + at², ρ0 = 7.64 μΩ cm was obtained. the low-temperature part of the resistivity of eupt4b is in good agreement with the temperature dependence expected for a ferromagnetic material. the experimental data are well accounted for by ρ = ρ0 + bt(1 + 2t/Δ) exp(-Δ/t), with Δ = 12.8 k as the width of the gap in the ferromagnetic spin wave spectrum. the high-temperature part, on the other hand, displays some characteristic curvature observed in systems strongly influenced by the crystalline electric field, supporting the mixed-valence hypothesis. three new ternary phases have been observed for the first time in the eu-pt-b system from arc-melted alloys annealed at 1020 k. crystal structures have been studied by x-ray single-crystal diffraction and validated by rietveld refinements of x-ray powder diffraction data. the results beautifully illustrate the structural diversity and versatility of the cacu5-type derivative family of structures, revealing three different structural series. eupt4b forms a ceco4b-type structure and is composed of cacu5- and ceco3b2-type fragments stacked along the c axis; eu3pt7b2 is built of ceco3b2- and mgcu2-type slabs and exhibits an ordered variant of puni3, namely, the ca3al7cu2 type with al and cu atom sites occupied by pt and b atoms, respectively. in contrast to both of these compounds, a unique stacking of cacu5-, ceco3b2-, and thcr2si2-type slabs has been uncovered in the eu5pt18b6x structure; the atom arrangement in the thcr2si2-type block is rare with respect to the distribution of pt and b atoms on the si and cr atom sites, respectively (baal4-type derivative structures), thus changing the coordination of the transition metal as compared to other representatives of the large family of thcr2si2-type compounds and exhibiting chains of edge-connected platinum tetrahedra [bpt4] running along the a-axis direction. with respect to eu-eu linkage, the europium atoms are interconnected infinitely along the c axis in both the eupt4b and eu3pt7b2 structures. the eu5pt18b6x structure exhibits repeating blocks of 5 fused polyhedra of eu linked in the sequence ...thcr2si2-ceco3b2-cacu5-ceco3b2-thcr2si2...; the units are shifted by 1/2a along the b-axis direction with respect to each other, forming blocks which are interlinked via the thcr2si2-type fragments along the c axis. positioning of the b atom on the cr wyckoff site of the thcr2si2-type slab, and consequently its nearest tetrahedral coordination, plays a decisive role in the formation of the eu5pt18b6x structure, serving as a linking entity (figure 14) for the repeating structural units of fused eu polyhedra.
although eupt4b exhibits ferromagnetic ordering at the relatively high temperature of 36 k, a mixed-valence state of eu is observed. in a static case of valence distribution one would expect 50% eu2+ and 50% eu3+ as a consequence of the obvious difference between the coordination polyhedra of the two eu sites, unlike our case, where magnetic and specific heat measurements suggest a more complex valence behavior. time-dependent studies are needed to distinguish between a static and a more complex dynamic case of the mixed-valence state. in eu3pt7b2, the eu atoms are in the 2+ state and the compound orders ferromagnetically at around 57 k. the electronic part of the specific heat was found to be 206 mj/mol k2, pointing to the existence of strong electron correlations.
three novel europium platinum borides have been synthesized by arc melting of the constituent elements and subsequent annealing. they were characterized by x-ray powder and single-crystal diffraction: eupt4b, ceco4b type, p6/mmm, a = 0.56167(2) nm, c = 0.74399(3) nm; eu3pt7b2, ca3al7cu2 type as an ordered variant of puni3, r3m, a = 0.55477(2) nm, c = 2.2896(1) nm; and eu5pt18b6x, a new unique structure type, fmmm, a = 0.55813(3) nm, b = 0.95476(5) nm, c = 3.51578(2) nm. these compounds belong to the cacu5 family of structures, revealing a stacking sequence of cacu5-type slabs with different structural units: cacu5 and ceco3b2 type in eupt4b; ceco3b2 and laves mgcu2 type in eu3pt7b2; and cacu5-, ceco3b2-, and site-exchange thcr2si2-type slabs in eu5pt18b6x. the striking motif in the eu5pt18b6x structure is the boron-centered pt tetrahedron [bpt4], which builds chains running along the a axis and plays a decisive role in the structural arrangement by linking the terminal fragments of the repeating blocks of fused eu polyhedra. the physical properties of two compounds, eupt4b and eu3pt7b2, were studied. both compounds were found to order magnetically at 36 and 57 k, respectively. for eupt4b a mixed-valence state of the eu atom was confirmed via magnetic and specific heat measurements. moreover, the sommerfeld value of the specific heat of eu3pt7b2 was found to be extraordinarily large, on the order of 0.2 j/mol k2.
Introduction Experimental Section Determination and Analysis of Crystal Structures Structural Relationships Magnetic Properties Specific Heat Electrical Resistivity Summary
PMC4281833
the social worlds of animals are filled with many different types of interactions, and social experience interacts with organismal stress on many levels. social stressors have proven to be potent across a wide range of species, and their study in rodents has led to greater understanding of the role of stressor type, timing, and other factors impacting physiology and behavior. while negative social interactions can be acutely damaging, social interaction can also moderate stressful experiences, buffering potentially adverse impacts and contributing to resilience. in this review we consider three main classes of effects: the social environment as a stressor; the effects of stress on subsequent social behavior; and social buffering of stressful experience (fig. 1). we explore mechanisms that mediate links between stress and social behavior, and consider sex differences in these mechanisms and behavioral outcomes. finally, we discuss data from a wide variety of rodent species wherever possible, in order to explore the universality and specificity of findings in single species. responses to stress span a spectrum from detrimental immediate and long-term effects to resilience and protection against future stressors. the effects of stress exposure and the consequent trajectory depend on the nature of the stressor, its severity, duration (acute vs. chronic), sex/gender, genetics, timing of exposure (early life, adolescence, adulthood or aging), as well as the perception of the stressor by the individual. for example, stressor controllability dramatically affects resilience versus vulnerability as an outcome (maier and watkins, 2005, amat et al., 2010, lucas et al., 2014). recently it was shown that even the gender of researchers can affect rodent stress levels and influence the results of behavioral tests (sorge et al., 2014). one of the most commonly measured immediate physiological responses to stress is activation of the hypothalamic-pituitary-adrenal (hpa) axis. corticotropin releasing factor (crf, also called crh) is released from the hypothalamus and is the primary trigger of adrenocorticotropic hormone (acth) secretion from the anterior pituitary. acth then triggers systemic release of glucocorticoids (cort) from the adrenal gland (bale and vale, 2004). we describe outcomes related to hpa-axis responsivity, as well as several additional neurochemical players including bdnf, serotonin, and multiple neuropeptides, in the text below. social behavior is complex and varies with the behavioral test chosen, and with whether focal individuals are tested with familiar or novel conspecifics, with same- or opposite-sex individuals, or with familiar or unfamiliar strains. the laboratory setting is a sparse environment compared to the complexity of nature, both physically and socially. some research aims to quantify social behavior in complex housing areas such as enriched caging with social groups (e.g., artificial, visible burrow systems (blanchard et al., 2001, 2006)) and large, semi-natural enclosures (e.g. king, 1956, dewsbury, 1984, ophir et al., 2012, margerum, 2013). other research relies on constrained social interactions in tests designed to measure a few particular aspects of social behavior (crawley, 2007). for example, social interaction tests typically measure the amount of time spent in social contact with, or investigation of, a conspecific.
social choice tests take place in multi - chambered apparatuses that allow investigation of either a conspecific or a non - living stimulus such as a novel object or empty restrainer ( moy et al . , 2007 ) . variations on this test involve a choice of a familiar versus unfamiliar individual , such as in the partner preference test ( williams et al . , 1992 ) . social habituation / dishabituation tests are often used to assess social recognition and memory for familiar individuals ( ferguson et al . , 2002 ; choleris et al . , 2003 ) . social motivation may be assessed by measures of effort expended to access another individual ( lee et al . , 1999 ) , or by conditioned place preference for a social environment ( panksepp and lahvis , 2007 ) . other tests measure specific aspects of social competency , such as memory and social inferences involved in hierarchy ( cordero and sandi , 2007 , grosenick et al . recent studies of pro - social behavior in rats have focused on latency to free a restrained rat under different scenarios ( ben - ami bartal et al . , 2011 , ben - ami bartal et al . , there is no peripheral hormonal indicator of sociability , but two neuropeptides have been highly implicated in many aspects of mammalian social behavior : oxytocin ( ot ) and arginine vasopressin ( vp ) . oxytocin is produced in the hypothalamus and facilitates a wide variety of processes related to social behavior , including maternal behavior , trust , anxiolysis , and sexual pair - bond formation ( reviewed in ross and young , 2009 , young et al . , 2008 , neumann , 2008 , zucker et al . , 1968 , carter et al . , vasopressin activity has been associated with aggression , anxiety , and social behavior ( reviewed in kelly and goodson , 2014 ) , as well partner preference formation in male prairie voles ( cho et al . , 1999 , young and wang , 2004 ) . the locations and densities of oxytocin receptors ( otr ) and vasopressin type 1a receptors ( v1ar ) have been associated with species variations , as well as with individual variations in social behavior from affiliation to aggression ( e.g. everts et al . , 1997 , young , 1999 , beery et al . many studies have also investigated the role of the mesolimbic dopamine system and opioid regulation of rewarding social behaviors such as pair - bonds between mates ( aragona , 2009 , resendez et al . , 2012 ) ; we describe these and additional research avenues throughout . in addition to considering how social behavior is assessed , we must consider the significance of the behavior to the species in which it is assessed . social behavior encompasses skills from social recognition to social memory , as well as many distinct types of interaction , including with peers , potential reproductive partners , competitors , and offspring . some of these interactions are better studied in some species than others ; for example biparental care is only present in a few rodent species that have been studied in laboratories , namely prairie voles ( microtus ochrogaster ) , california mice ( peromyscus californicus ) , and djungarian hamsters ( phodopus campbelli ) . monogamous pairing with mates is similarly rare among rodents , and is most studied in prairie voles and california mice . mechanisms supporting group living have been in explored in colonial rodents including naked mole - rats ( heterocephalus glaber ) , tuco - tucos ( ctenomys sociabilis ) , seasonally social meadow voles ( microtus pennsylvanicus ) , and others ( anacker and beery , 2013 ) . 
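as a generic illustration of how the choice and preference tests described above are often quantified (an assumed, simplified index, not the scoring used in any particular study cited here), a single preference score can be computed from the time spent with each stimulus:

# generic example: preference index from time (s) spent investigating each stimulus
def preference_index(time_social, time_nonsocial):
    """return a score in [-1, 1]; values > 0 indicate preference for the social stimulus."""
    total = time_social + time_nonsocial
    if total == 0:
        return 0.0
    return (time_social - time_nonsocial) / total

# e.g. a subject spending 180 s with a conspecific and 60 s with an empty restrainer
print(preference_index(180.0, 60.0))   # 0.5

the same form can be applied to familiar-versus-novel partner tests by substituting the two stimulus animals for the social and nonsocial stimuli.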
the idea that some problems are best studied in particular species is far from new ; this principle was promoted in 1929 by the late physiologist and nobel laureate august krogh ( krebs , 1975 ) . in contrast to krogh 's assertion that species should be selected for their suitability for studying particular problems , modern biological research is strongly biased towards rats and mice ; in 2009 rats and mice made up approximately 90% of mammalian research subjects in physiology , up from 18% at the time krogh 's principle was articulated ( beery and zucker , 2011 supplementary material ) . lab strains of mice and rats are highly inbred and in many ways quite different from their wild peers . use of multiple species allows researchers to compare and contrast mechanisms across the phylogenetic tree . while the depth of mechanistic information available for non - model organisms is much less than for rats and mice , the comparative perspective is essential for understanding to what extent mechanisms underlying social behavior are unique to particular species , common across broader groups , or are variations on a theme ( phelps et al . , 2010 , katz and lillvis , 2014 ; hofmann et al . , 2014 ) . in this review we focus on rats and mice for which data on stress and social behavior are most abundant , but incorporate findings from other rodent species whenever possible . and although laboratory research in rodents is heavily male - biased ( beery and zucker , 2011 ) , we review a substantial body of findings on the interrelationship of stress and social behavior in females . , rodents may encounter competition for resources such as territory , food , and access to mates , and even solitary species interact with conspecifics and their chemical cues , if only to avoid them in the future . widely used models of social stress in rodents include social subordination , crowding , isolation , and social instability ( fig . 1 , left side ) . while most studies have been conducted in mice and rats , prairie voles and other social rodent species provide an opportunity to study the role of identity of the social partner , and how separation from a mate differs from isolation from a same - sex peer . in humans , social rejection is used as a potent experimental stressor ( kirschbaum et al . , 1993 ) , and decades of work in humans and non - human primates have demonstrated that an individual 's position in the social hierarchy has profound implications for health and well - being ( adler et al . , 1994 , sapolsky , 2005 ) . in rodents , the most prominent model of stressful social interaction social defeat is typically induced by a version of the resident - intruder test in which a test subject is paired with a dominant resident in its home cage . dominance may be assured by size , prior history of winning , strain of the resident , and/or prior housing differences ( martinez et al . , 1998 ) . social defeat is typically used as a stressor in male rodents , for whom dominance is easier to quantify and aggressive interactions related to home territory are presumed more salient . a few studies report effects of social defeat on females , particularly in syrian hamsters in which females are highly aggressive and dominant to males ( payne and swanson , 1970 ) . in rats and mice , females do not always show a significant response to this task and the effect in males is far greater ( palanza , 2001 , huhman et al . , 2003 ) . 
thus , other stress paradigms such as social instability are more widely used with females ( haller et al . , 1999 ) . social defeat can have a more substantial impact on male rodent physiology and behavior than widely used stressors such as restraint , electric shock , and chronic variable mild stress ( koolhaas et al . , 1996 , blanchard et al . , 1998 , sgoifo et al . , 2014 ) . in the short term , social defeat produces changes in heart rate , hormone secretion , and body temperature , with longer - term impacts on a wide variety of additional outcomes including activity , social behavior , drug preference , disease susceptibility , and others ( martinez et al . , 1998 , sgoifo et al . ) . unlike physical stressors such as restraint , social defeat does not appear to be susceptible to habituation or sensitization ( tornatzky and miczek , 1993 , sgoifo et al . , 2002 ) , and can be used in groups housed with a single dominant individual ( nyuyki et al . , 2012 ) . social defeat stress has profound effects on hippocampal morphology and function ( reviewed in mcewen and magarinos , 2001 , buwalda et al . , 2005 ) . these effects include reduction in hippocampal volume ( czeh et al . , 2001 ) related to dendritic remodeling and reduced neurogenesis ( magarinos et al . , 1996 , gould et al . , 1998 ) . social defeat also alters the ratio of mineralocorticoid to glucocorticoid receptors in the hippocampus ( buwalda et al . , 2001 , veenema et al . , 2003 ) . as with most neurobiological research , attention has centered on neurons as the brain 's mediators of the biological embedding of the social world . however , following recent reports on the effects of stress ( in general , and social stress in particular ) on astrocytes , oligodendrocytes , and microglial cells , it has become clear that glial cells are likely to play a role in this process , and deserve more attention in future studies ( braun et al . , 2009 , 2011 , araya - callis et al . , 2012 , chetty et al . , 2014 ) . social hierarchy has also been explored in settings where dominance is established through unstaged social interactions that occur on an ongoing basis ( e.g. blanchard et al . , 1995 , blanchard et al . , 2001 ) . a low position in the social ( and economic / resource ) hierarchy appears to be stressful across a wide range of species . negative health effects of low social status have been particularly well documented in non - human primates ( e.g. sapolsky , 1989 , sapolsky , 2005 , virgin and sapolsky , 1997 , wu et al . , 2014 ; see shively , 2015 , for review ) . in humans , lower socioeconomic status ( ses ) predicts decreased mental and physical health in a graded fashion , and subjective perception of socioeconomic status may be an even more potent mediator than objective ses ( adler et al . , 1994 , kawachi and kennedy , 1999 , siegrist and marmot , 2004 , singh - manoux et al . , 2005 ) . while low social status appears stressful across all instances discussed thus far , several studies have demonstrated that low status is not always stressful , in part depending on species - specific life - history traits . for example , subordinate status is most stressful in species with despotic hierarchies , and may not be a stressor in more egalitarian societies ; high status is more stressful in societies in which dominance must be continuously defended than in stable social hierarchies ( sapolsky , 2005 ) . in a meta - analysis of cortisol levels in primates , abbott et al .
( 2003 ) found that subordinates had higher basal cort levels only when exposed to higher rates of stressors due to subordinate status , and when subordinate status afforded them few opportunities for social contact . in naked mole - rats , a highly social rodent species that lives in large underground colonies , all but a few animals in each colony are reproductively suppressed subordinates ( sherman et al . , 1991 ) . in this instance , subordinates are related to breeders and are non - aggressive except in the event of loss of the breeding queen or her mates ( clarke and faulkes , 1997 ) . reproductively suppressed subordinates do not have higher cort levels than breeders and may have lower levels ( clarke and faulkes , 1997 , clarke and faulkes , 2001 ) . while it is not yet clear how stress relates to status in this species , social subordination must be considered in the context of how it affects the individuals involved . housing density affects rodent behavior , and both crowded and isolated social environments have been used as stressors in rodents . crowding is a naturalistic stressor , especially for social or gregarious species , that relates to high population density and resource competition in the field . in house mice , several studies have shown that crowding can impair reproductive function and may be part of population size regulation ( christian and lemunyan , 1958 , christian , 1971 ) . in the highly social , group - living degu ( octodon degus ) , increased group size is associated with greater dispersal , consistent with a social competition hypothesis ( quirici et al . , 2011 ) . in the laboratory , crowding typically consists of large numbers of mice or rats ( e.g. > 6 rats / cage ; brown and grunberg , 1995 , reiss et al . , 2007 ) with ad libitum access to resources such as food and water . crowding must be somewhat extreme to induce stressful outcomes , as group housing ( e.g. 4 - 6 rats or 12 mice in a sufficiently large area ) is often used as a key component of environmental enrichment ( sztainberg and chen , 2010 , simpson and kelly , 2011 ) . social crowding has been shown to impact many different physiological outcomes in male mice , rats , and prairie voles . these include changes in organ weights , hormone secretion , hpa reactivity , pain sensitivity , telomere length , and cardiac outcomes ( gamallo et al . , 1986 , gadek - michalska and bugajski , 2003 , kotrschal et al . , 2007 , grippo et al . , 2010 , tramullas et al . , 2012 , puzserova et al . ) . crowding of pregnant dams also produces changes in offspring birth weight , pubertal timing , and reproductive behavior ( e.g. harvey and chevins , 1987 , ward et al . , 1994 ) and may lead to lasting changes through a subsequent generation ( christian and lemunyan , 1958 ) . there appear to be important sex differences in the consequences of crowding , with one study in rats finding that crowding is a stressor for males but has the capacity to calm females ( brown and grunberg , 1995 ) . at the opposite extreme , social isolation is employed as a stressor in previously group - housed mice and rats ( heinrichs and koob , 2006 ) ; in both species , extended ( 2 - 13 week ) solitary housing produces an isolation syndrome , particularly in females , consisting of hyperadrenocorticism , reduced body weight , altered blood composition , and enhanced pain responsiveness , among other outcomes ( hatch et al . , 1965 , valzelli , 1973 ) .
these changes coincide with alterations in behavior including aggression , mating behavior , learning , and pain sensitivity ( valzelli , 1973 ) . more recent studies have added a host of additional physiological outcomes related to stress and depressive behavior , including changes in dopamine signaling in different brain regions ( heidbreder et al . , 2000 ) , altered heart rate and cardiac function ( spani et al . , 2003 , carnevali et al . , 2012 ) , and neurogenesis ( stranahan et al . , 2006 , lieberwirth and wang , 2012 ) . which outcomes are affected by isolation depends in part on the age at which isolation occurs ( reviewed in hall , 1998 ) , and there are sex differences in the effects of social isolation . these findings suggest that isolation may be stressful for females but not necessarily to the same extent for males ( hatch et al . , 1965 , palanza , 2001 , palanza et al . , 2001 ) . assessments of the impacts of both isolation and crowding share the problem of what to consider as the control comparison , as anxiety and other behavioral outcomes vary along a continuum of group sizes ( botelho et al . , 2007 ) . in recent decades , prairie voles have become a popular model for studying social behaviors because of their unusual capacity to form socially monogamous pair - bonds with opposite - sex mates ( getz et al . , 1981 ) . an additional advantage of this species is that the effects of social manipulations can be contextualized in terms of findings from field populations and semi - natural settings ( e.g. ophir et al . , 2008 , mabry et al . , 2011 ) . in wild prairie voles , cohabitation with a mate or with a mate and undispersed offspring is common ( getz and hofmann , 1986 ) , and reproductively naive prairie voles are affiliative towards their same - sex cage mates . in the lab , separation of adult prairie voles from a sibling cage - mate for 1 - 2 months reduced sucrose consumption ( a measure of anhedonia ) , and was associated with increased plasma levels of oxytocin , cort , and acth , as well as increased activity of oxytocin neurons in the hypothalamus following a resident - intruder test . these effects were more profound in females ( grippo et al . , 2007 ) . further work has shown that social isolation from a sibling also leads to changes in cardiac function associated with cardiovascular disease ( grippo et al . , 2012 ) and to immobility in the forced swim test ( grippo et al . , 2008 ) , considered a measure of depressive behavior . some physiological and behavioral sequelae were prevented or ameliorated by exposure to environmental enrichment , or by peripheral administration of oxytocin ( grippo et al . , 2009 , grippo et al . , 2014 ) , as has been demonstrated in rats ( hellemans et al . ) . social isolation of prairie voles from weaning has been associated with higher circulating cort , and greater crf immunoreactivity in the paraventricular nucleus ( pvn ) of the hypothalamus ( ruscio et al . , 2007 ) . while the majority of current studies have focused on social isolation from a non - reproductive partner , recent investigation into disruption of opposite - sex pairs takes advantage of this unusual feature of prairie vole behavior , and suggests that mate - pair disruption has substantial autonomic and behavioral consequences for both male and female prairie voles ( bosch et al . , 2009 , mcneal et al . , 2014 ) . as the work in prairie voles illustrates , it is important to consider the natural history of a species when social manipulations are performed .
for example , male syrian hamsters housed in isolation are more aggressive than those housed in groups ( brain , 1972 ) , but this is not to suggest that isolation was distressing or that it produced an unusual behavioral phenotype , as this species is naturally solitary ( gattermann et al . ) . conversely , crowding might be a particularly potent but unnatural stressor for this species , and it has been associated with increased mortality ( germann et al . , 1990 , marchlewska - koj , 1997 ) . social species provide good subjects for studying the influence of social interactions on health and related outcomes , and this has been demonstrated both in the laboratory and in the field . in a species of south american burrowing rodent , the colonial tuco - tuco ( c. sociabilis ) , females may live alone or share a burrow with several other adult members and their young ( lacey et al . , 1997 ) . yearling c. sociabilis that live alone ( whether via dispersal in the field or investigator manipulations in the lab ) have significantly higher baseline fecal glucocorticoid metabolite levels than do group - living individuals in the same environments ( woodruff et al . , 2013 ) . in a putatively monogamous species of wild guinea pig ( galea monasteriensis ) , social separation induces increases in cortisol secretion that are only rectified by return of the social partner ( adrian et al . , 2008 ) . the study of species in the context of their natural behavior allows us to better understand stress - related outcomes in a variety of rodent species . some studies employ both crowding and isolation in alternation ( for example , 24 h of each for 2 weeks ) as a model of chronic social instability ( e.g. haller et al . , 1999 , herzog et al . , 2009 ) . social instability has particularly been used as a social stressor for female rats , for whom crowding and social defeat are not always effective stressors ( palanza , 2001 ) . in the crowding phase , females exposed to this variable social environment show increased adrenal weight , increased corticosterone secretion , decreased thymus weight , and reduced weight gain relative to females housed in stable male - female pairs ( haller et al . , 1999 ) . a second study replicated these findings and demonstrated that social instability also induced dysregulation of the hypothalamic - pituitary - gonadal ( hpg ) axis ( elevated luteinizing hormone and prolactin , and disrupted estrous cycles ) , and reduced sucrose preference and food intake ( herzog et al . , 2009 ) . this stressed phenotype persisted for several weeks without habituation and led to a depressive - like phenotype . prior history of social instability in the form of early - life separation from the mother also exacerbates vulnerability to later - life chronic subordination stress ( veenema et al . , 2008 ) . in humans , stressful situations can promote affiliative behavior ( zucker et al . , 1968 , teichman , 1974 , taylor , 2006 ) , and anticipation of stressful events can promote group cohesion and liking for group members ( latane et al . ) . all stress is not the same , however , and in some cases social behavior is reduced after a stressor ; in fact , social withdrawal is one of the diagnostic criteria for post - traumatic stress disorder ( dsm v , american psychiatric association , 2013 ) . while effects of stress on social behavior are evident in humans , most of our understanding of these impacts , and of the underlying molecular and cellular mechanisms , comes from rodent studies .
in rodents , several stressors and manipulations of the hypothalamic - pituitary - adrenal ( hpa ) hormonal axis have been shown to impact a variety of subsequent social behaviors . in this case , much of what we know comes from research on prairie voles , for which there appear to be important differences between the sexes , with some outcomes dependent on whether the partners are same - sex siblings or opposite - sex mates . as previously mentioned , prairie voles provide an opportunity to study pair - bond formation between males and females , as this species forms reproductive pair bonds both in the laboratory and in the field . prairie voles also exhibit unusually high levels of circulating cort relative to other rodents including montane voles , rats , and mice ( devries et al . , 1995 ) , moderated by reduced tissue sensitivity to glucocorticoids ( taymans et al . , 1997 , klein et al . , 1996 ) . stress has opposite effects on the formation of mate preferences in male and female prairie voles . males do not typically form a partner preference for a female after 6 h of cohabitation ; however , they form significant preferences within this time interval when paired after a brief swim stress ( devries et al . , 1996 ) . preference formation is also facilitated by cort administration in male prairie voles , and impaired by adrenalectomy ( devries et al . , 1996 ) . some doses of central crf administration also facilitate partner preference formation in males ( devries et al . , 2002 ) . interestingly , cort decreases after pairing with a female , but partner preferences are not established during the early cohousing interval , and cort levels have returned to baseline by the time male preferences have been formed ( devries et al . , 1997 ) . in female prairie voles , stress impairs partner preference formation , but this effect is prevented in adrenalectomized voles ( devries et al . , 1996 ) . this phenomenon appears to be mediated by cort , as exposure to cort during ( but not after ) cohabitation with a novel male prevents partner preference formation , and adrenalectomized females form partner preferences after shorter cohabitation periods than are typically necessary ( devries et al . , 1995 ) . cort levels are naturally low immediately following cohousing with a male , and partner preferences are formed before they return to baseline ( devries et al . , 1995 ) . in rats in particular , stress has been shown to inhibit mating behavior in males and in naturally cycling females , via elevation of the inhibitory hypothalamic hormone rf - amide related peptide 1 ( kirby et al . , 2009 , geraghty et al . , 2013 ) . same - sex interactions have not been as well explored in prairie voles as opposite - sex affiliative interactions have been , although some data suggest that same - sex affiliative behavior in prairie voles may be enhanced following a stressor ( devries and carter , unpublished data referenced in carter , 1998 ) . same - sex affiliative behavior can be studied more broadly in rodent species that live in groups , so additional rodent species may be informative for this question . meadow voles are conditionally social rodents , with photoperiod - mediated seasonal variation in social huddling . while females are aggressive and territorial in summer months , they live in social groups and huddle with conspecifics in winter months or under short day lengths in the laboratory ( madison et al . , 1984 ) .
seasonal variations in huddling and partner preference formation allow for the study of the endocrine and neurobiological mechanisms underlying changes in social tolerance and peer affiliation outside the context of mate - pairing . in meadow voles , cort varies seasonally ( boonstra and boag , 1992 , galea and mcewen , 1999 , pyter et al . ) . crf / urocortin pathways may also link stress - reactivity and social behavior in this species , as crf1 and crf2 receptor densities change with day length and are associated with huddling behavior ( beery et al . , 2014 ) . stress exposure prior to pairing impairs preference formation for a same - sex individual in females of this species ( anacker et al . , 2014 ) . in addition , familiarity of the conspecific prior to the stressor may influence whether social behavior is increased or decreased . wild rats live in gregarious colonies , where social interactions may be beneficial for predator avoidance and under other stressful conditions ( macdonald et al . , 1999 ) . in male rats , social defeat stress leads to social avoidance , with less time spent in social contact with an unfamiliar non - aggressive rat ( meerlo et al . , 1996 ) and avoidance of the dominant rat ( lukas et al . , 2011 ) . non - social stressors may have the opposite effect : for example , groups of familiar male rats spend more time huddling together during an immediate stressor ( cat fur or bright light ) . this effect has been termed defensive aggregation , and is facilitated by oxytocin ( bowen et al . , 2012 , bowen and mcgregor , 2014 ) . exposure to chronic social defeat stress leads to social avoidance , altered fear acquisition and extinction , anhedonia , and changes in neural circuitry and transmission , neurogenesis , and metabolism in groups of exposed versus unexposed subjects ( chou et al . ) . however , looking at individual outcomes reveals a much more complex picture , even in inbred mice . for example , measuring social motivation after exposure to social defeat stress reveals a bimodal segregation of the group into affected and unaffected individuals . affected individuals spend less time interacting with conspecific peers in the social zone , while unaffected ( unsusceptible ) individuals spend an amount of time in the social zone similar to that of unstressed individuals . susceptibility to social aversion following social defeat is associated with a suite of other signs of stress , including decreased sucrose preference , decreased body weight , and increased sensitivity to cocaine - induced conditioned place preference ( krishnan et al . , 2007 ) . what is the difference between responders and non - responders , or between a resilient and a vulnerable trajectory ? interestingly , this resilience phenotype did not correlate with social motivation pre - stress , nor with levels of circulating glucocorticoids ( krishnan et al . , 2007 ) . however , stress susceptibility has been correlated with a stress - induced increase in levels of brain - derived neurotrophic factor ( bdnf ) , a key regulator of dopamine release in the nucleus accumbens ( nac ) . following 10 days of repeated social defeat , bdnf protein levels were persistently elevated in the nac of defeated mice . reduction of bdnf levels in the ventral tegmental area ( vta ) via local bdnf knockdown provided an antidepressant - like effect relative to untreated , defeated mice and prevented social aversion ( berton et al . , 2006 ) .
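the bimodal split of defeated mice into susceptible and unsusceptible groups described above is typically made from a post - defeat social interaction test . the sketch below shows one common way such a split can be computed ; the interaction - ratio definition and the cutoff value follow a widely used convention and are assumptions for illustration , not necessarily the exact criterion applied in the studies cited here .

```python
# Minimal sketch: splitting defeated mice into "susceptible" and "unsusceptible"
# groups from a social interaction test. The interaction-ratio definition and the
# cutoff of 100 are illustrative assumptions following one common convention,
# not the precise criterion of any particular study cited in the text above.

from typing import Dict, List, Tuple


def interaction_ratio(time_with_target_s: float, time_no_target_s: float) -> float:
    """100 * (time in the interaction zone with a novel conspecific present)
    / (time in the same zone with the enclosure empty)."""
    if time_no_target_s <= 0:
        raise ValueError("baseline (no-target) time must be positive")
    return 100.0 * time_with_target_s / time_no_target_s


def classify_cohort(trials: Dict[str, Tuple[float, float]]) -> Dict[str, List[str]]:
    """Label each animal susceptible (ratio < 100) or unsusceptible (ratio >= 100)."""
    groups: Dict[str, List[str]] = {"susceptible": [], "unsusceptible": []}
    for animal_id, (with_target, no_target) in trials.items():
        ratio = interaction_ratio(with_target, no_target)
        label = "susceptible" if ratio < 100.0 else "unsusceptible"
        groups[label].append(animal_id)
    return groups


if __name__ == "__main__":
    # (seconds with target present, seconds with target absent) per animal -- made-up numbers
    cohort = {"m1": (20.0, 60.0), "m2": (75.0, 50.0), "m3": (10.0, 40.0)}
    print(classify_cohort(cohort))
```

whatever the exact threshold , the key design point is that classification is relative to each animal 's own baseline exploration of the empty enclosure , so that low locomotion alone does not masquerade as social avoidance .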
investigation of the individual differences between susceptible and unsusceptible mice revealed that susceptibility was characterized by increased nac bdnf , but reinforced the importance of bdnf release from the vta , as knockdown in the vta but not the nac promoted resilience . susceptibility to defeat was further shown to be mediated by enhanced firing of vta dopamine neurons , with resilience characterized by a lack of activity - dependent bdnf release ( krishnan et al . , 2007 ) . interestingly , unsusceptible individuals were not lacking a neural response , but in fact showed greater change in gene expression patterns in the vta than susceptible individuals , suggesting that behavioral non - responsiveness is an active process and not merely a lack of the pathological process . analysis of differential gene expression revealed significant down - regulation of several members of the wnt ( wingless ) - dishevelled signaling cascade , including phospho - gsk3 ( glycogen synthase kinase - 3 ) , in the nac of susceptible , but not resilient , mice ( wilkinson et al . , 2011 ) . altered regulation of hpa axis activity , and specifically increased expression of crf ( regulated by stress - induced demethylation of regulatory regions of the crf gene ) , was shown in the subset of vulnerable mice that displayed social avoidance ( elliott et al . , 2010 ) and in mice that displayed a short latency to defeat in the resident / intruder paradigm ( wood et al . , 2010 ) . supporting this finding , knockdown of crf levels diminished stress - induced social avoidance ( elliott et al . , 2010 ) . in a separate model of chronic subordinate colony housing , mice selectively bred for low anxiety were behaviorally resilient to subordination stress and showed distinct hpa axis responses ( fuchsl et al . , 2013 ) . several neurotransmission systems are implicated in social - stress resilience versus vulnerability : in addition to the bdnf control of dopamine mentioned above , differences in the nac dopaminergic system resulting from differential maternal behavior are correlated with increased preference for social interactions in a group of highly groomed rat offspring ( pena et al . , 2014 ) . vulnerable and resilient animals differ significantly in the expression of ampa receptors in the dorsal hippocampus , and activation of ampa receptors during stress exposure prevented the physiological , neuroendocrine , and behavioral effects of chronic social stress exposure ( schmidt et al . , 2010 ) . knockout of the serotonin transporter increases vulnerability to social avoidance following social defeat ( bartolomucci et al . , 2010 ) . finally , suppression of the gabaergic system is seen in the prefrontal cortex of mice showing depressive symptoms following social defeat ( veeraiah et al . , 2014 ) , and in the amygdala of mice exposed to peripubertal stress ( tzanoulinou et al . , 2014 ) . similar suppression is found in the cortex of human patients with ptsd ( meyerhoff et al . , 2014 ) . stress exposure not only alters social interaction ; social interaction can in turn play a role in buffering or moderating the effects of a stressor , giving social networks adaptive value for coping with stress exposure . we can think about stress resilience in multiple layers : life - long programming of stress - resilient individuals originating from the early life environment , in particular through maternal interactions ( parker et al . , 2012 , lyons et al . , 2010 , szyf et al . , 2007 ) ;
short - term resilience after an acute moderate stressor promoting better functioning after a secondary stressor ( kirby et al . , 2013 ) ; or resilience that comes from mitigating ( buffering ) the effects of stress through a positive , supportive social environment , or even through aggressive social interactions . for example , lower - ranking baboons that displace aggression onto peers have lower cort levels ( virgin and sapolsky , 1997 ) . the effects of social buffering are far - reaching , and in humans there is evidence that social relationships aid immune function , cardiovascular health , and other health - related outcomes ( reviewed in berkman and kawachi , 2000 ) . stable natural social relationships have even been associated with increased longevity in humans and other species ( humans : holt - lunstad et al . , 2010 ; baboons : silk et al . , 2010 ) . the endocrine consequences of social buffering were first described in primates ( coe et al . , 1978 , mendoza et al . , 1978 ) , and primate studies continue to be important , particularly for our understanding of natural social buffering in the context of stress . for example , in female chacma baboons , loss of a partner results in elevated cort and also in enhanced social behaviors such as allogrooming , which may help mediate the decline back to baseline levels ( engh et al . , 2006 ) . studies of social manipulations in rodents have also played a pivotal role in our understanding of the effects of social support on a variety of behavioral , endocrine , and neurobiological outcomes ( reviewed in devries et al . , 2003 , kikusui et al . , 2006 ) . in rodents , most studies of social buffering have focused on the presence or absence of a conspecific , such as the cage - mate , after a stressor . as one might imagine , many different variables may affect whether social buffering occurs , including the familiarity of the conspecific , the relative hierarchy , presence or absence during stress exposure , whether the cage - mate was also stressed , the sex of the individual and partner , the sensory modalities of exposure to that individual , the timing of the availability of social support , and so forth . while these parameters have by no means been explored in all combinations , we summarize what is known for each variable across a variety of rodent species . rats temporarily housed in an open field spend more time together than expected by chance ( latane , 1969 ) , and stressed males are more likely to interact socially than non - stressed males ( taylor , 1981 ) . investigator - manipulated housing conditions ( solitary , pair , or group housing ) also affect reactions to stress . conditioned avoidance of noxious stimuli is reduced in pair - housed animals ( hall , 1955 , baum , 1969 ) . pair - housed rats also show reduced impacts of stress exposure relative to rats housed alone in their response to white noise ( taylor , 1981 ) and foot shock ( davitz and mason , 1955 , kiyokawa et al . , 2004 ) . group - housed rats exposed to social defeat exhibit greater growth and less anxiety behavior in repeated open - field exposure relative to solitary - housed rats ( ruis et al . , 1999 ) . solitary housing increases anxiety - like behaviors on its own ( see above ) ; thus , distinguishing between effects of isolation and effects of a stressor ( and their potential interactions ) requires that all housing conditions be paired with both the stressor and its absence .
in studies where this has occurred , pair - housed animals do not show stress - induced anxiety behavior changes relative to control pair - housed animals , unlike solitary - housed individuals ( nakayasu and ishii , 2008 ) . more recent studies have examined novel behavioral outcomes , including social buffering effects on pain tolerance ( reviewed in martin et al . , 2014 ) and changes in alcohol consumption ( anacker et al . , 2011 ) . social housing also impacts hpa axis responsiveness to a stressor or to hormonal stimulation via crf . following crf administration , male group - housed rats have reduced cort and acth relative to isolated males ( ruis et al . , 1999 ) . in young male guinea pigs , the presence of the mother or an unfamiliar adult female attenuates increases in plasma acth , cortisol , and vocalizations in response to a novel environment ( hennessy et al . , 2000 ) , with additional , subtly varying effects across the lifespan ( hennessy et al . , 2006 ) . studies in prairie voles allow for a distinction between buffering by social peers and buffering by reproductive partners . in prairie voles , exposure to a novel individual of the opposite sex leads to a decline in serum cort over the following 15 - 60 min in both males and females , while same - sex novel pairings did not influence serum cort ( devries et al . , 1995 , devries et al . , 1997 ) . this decline in cort may be important for the ability of the female to form a partner preference , while it must pass in order for males to form ( cort - dependent ) partner preferences ( devries , 2002 ) . the nature of social buffering may be quite different within established social relationships : in prairie voles , female sibling pairs experienced elevated cort following separation , and this effect was attenuated following reunion ( unpublished data referenced in carter et al . , 1995 ) . in males , loss of a female partner also resulted in increased circulating cort as well as increased adrenal weight ( bosch et al . , 2009 ) . the presence of a partner may provide social buffering from a stressor ; female prairie voles that recovered alone from immobilization stress exhibited high levels of cort and increased anxiety behavior , while females recovering with their male partner showed no such elevation ( smith and wang , 2014 ) . while cort is an easily measured signal that often relates to stress level , it is worth noting that measurement of glucocorticoids is not always a clear indicator of either stress exposure or stressed affect , and stress may result in both enhanced and dampened cort profiles depending on timing and chronicity ( e.g. sapolsky et al . , 2000 ) . social companionship has been associated with outcomes beyond the hpa axis , although many of these changes may ultimately be related to common pathways . for example , in prairie voles , females recovering from immobilization stress with a male partner showed no cort elevation , coupled with evidence of increased oxytocin ( ot ) release in the paraventricular nucleus ( pvn ) of the hypothalamus . direct administration of ot to the pvn reduced cort responses to a stressor , while oxytocin receptor antagonist ( ota ) injection prevented the ameliorative effects of housing with the partner ( smith and wang , 2014 ) . this parallels research in humans in which ot and social buffering interact to reduce cort responses to a social stressor ( heinrichs et al . , 2003 ) . effects of social companionship extend to other hormones as well : for example , the presence of a conspecific in an open - field test reduces peripheral prolactin in male rats ( wilson , 2000 ) .
relative to isolated individuals , socially housed female siberian hamsters experience improved wound healing , an effect that is mediated by oxytocin ( detillion et al . , 2004 ) . while little is known about the natural social organization of this hamster species ( wynne - edwards and lisk , 1989 ) , wound healing has also been studied in three species of peromyscus mice for which social organization is well characterized . in the two monogamous or facultatively monogamous species of peromyscus mice , wound healing was facilitated by social contact . this was not the case in the promiscuous species , and this species did not experience reduced cort with pair - housing ( glasper and devries , 2005 ) . this suggests that social housing was beneficial only to the species that normally reside with a partner . some recent findings in humans suggest that higher blood oxytocin and vasopressin levels may also be associated with faster wound healing in our species ( gouin et al . , 2010 ) . social environment during stress has been shown to impact gastric ulcer formation in male rats following a stressor ; however , only the social environment at the time of testing , and not prior housing , affected ulcer frequency ( conger et al . , 1958 ) . westenbroek et al . ( 2005 ) found that group - housed , chronically stressed female rats had less adrenal hypertrophy than solitary - housed , stressed females . in humans , social support reduces heart rate and alters the ratio of systolic to diastolic blood pressure after performance of stressful tasks ( lepore et al . , 1993 , thorsteinsson et al . , 1998 ) . in mice and prairie voles , social housing conditions similarly influence heart rate and cardiac function ( spani et al . , 2003 , grippo et al . , 2007 ) , as well as other measures of cardiovascular health ( grippo et al . , 2011 ) . not all social interactions are equal , and the effects of social companionship may differ by partner familiarity , sex , age , species , and affective state . most studies of social buffering have explored one or two of these contexts at a time , but some evidence suggests that each of these factors can , but does not necessarily , impact the social buffering provided . in guinea pigs , the presence of both familiar and unfamiliar adults reduces hpa activation in response to a novel environment ; however , for young ( pre - weaning ) guinea pigs , this effect is greater with the mother ( graves and hennessy , 2000 ) , and the salience of different individuals changes over the life course and varies with sex ( kaiser et al . , 2003 ) . in a pair of studies in male rats , cort levels in an open field were surprisingly higher when animals were paired with a familiar versus an unfamiliar individual ( armario et al . ) . in prairie voles , brief separation from a mate , but not from a same - sex sibling , increased depressive - like behavior ( bosch et al . , 2009 ) . partner identity / familiarity was also found to be critical in a recently developed paradigm in which helping behavior is measured in rats . in this study , rats were motivated to rescue a trapped rat from restraint only if it was matched to their own strain , or to a strain they had been exposed to from birth ; they were uninterested in freeing rats of an unfamiliar strain ( ben - ami bartal et al . , 2014 ) . exposure to naive , unshocked individuals can lessen stress responses relative to exposure to shocked individuals ( kiyokawa et al . , 2004 ) , similar to earlier findings in fear - conditioned rats ( davitz and mason , 1955 ) .
future research on social buffering in rodents will hopefully make progress on questions of how and when social support is helpful , and what the optimal timing and type of that support is . in contrast to stress , which is a response to particular events , anxiety is a lasting state that is not an immediate response to the external environment . while stressful events can have impacts on social behavior , individual differences in anxiety also relate to variation in social behavior . for example , in humans , extraverted personality is associated with lower trait anxiety ( jylha and isometsa , 2006 , naragon - gainey et al . , 2014 ) . in rodents , the social interaction test , in which social interaction with a familiar or an unfamiliar individual is measured in an open arena , was initially developed as an ethologically relevant measure of anxiety behavior ( file and hyde , 1978 ) . social interaction times of individual male and female rats are positively correlated with exploratory behavior in classic tests of anxiety - like behavior . for example , individuals that spend more time in social interaction are more likely to spend more time in the center region of an open field or the light portion of a light - dark box ( starr - phillips and beery , 2014 ) . maternal care , particularly maternal grooming behavior , has lasting effects on offspring anxiety behavior . high levels of maternal grooming are associated with reduced anxiety behavior in two paradigms : pup reunion after brief separation and/or handling , and natural , individual variation in maternal care ( reviewed in gonzalez et al . ) . natural variations in the amount of time dams spend licking and grooming their new pups in the first week of life impact their offspring in many ways that persist into adulthood . the reduction in stress reactivity in rats reared by high - licking dams appears to be mediated by increased glucocorticoid receptor expression in the hippocampus ( liu et al . , 1997 , weaver et al . , 2004 ) , which enhances negative feedback on the hpa axis ( sapolsky et al . , 1985 ) . recent studies have shown that natural variation in maternal care affects a wide range of outcomes beyond anxiety behavior , including social behaviors . high levels of early maternal grooming are associated with increased play behavior in juvenile male rats ( parent and meaney , 2008 , van hasselt et al . , 2012 ) , increased social interaction in adult offspring of both sexes ( starr - phillips and beery , 2014 ) , and altered play dominance rank in adult female rats ( parent et al . , 2013 ) . effects of maternal contact have also been described in other species ; for example , in prairie voles , maternal care and family structure have been associated with social investigation in adolescence , and with changes in parental and mate - directed behaviors in adulthood ( ahern and young , 2009 , perkeybile et al . , 2013 ) . early experience of maternal care is sometimes associated with changes in oxytocin and vasopressin system regulation ( reviewed in veenema , 2012 ) , although it is not yet clear whether such changes underlie the known differences in social behavior . in a synthesis of findings across rodent , primate , and human studies , shelley taylor proposed that in addition to fight - or - flight responses to stress , females show pronounced tend and befriend responses to a stressor ( taylor et al . , 2000 ) .
taylor related tending to parental nurturing behaviors , based on evidence that rat dams lick their pups ( tending ) following separation , that oxytocin appears to be more elevated in females following a stressor , and that oxytocin can act both as an anxiolytic and as a promoter of affiliative behavior . befriending was related to the adaptive value of social support under stressful conditions , and its particular value for females , who might be more vulnerable than males . whether or not a shared history of maternal care - giving and defensive social behaviors best explains distinct female responses to stress , the existence of such sex differences in stress / social behavior interactions has been demonstrated repeatedly . we have discussed several examples in this review ; first , we described sex differences in the potency of particular stressors : for example , crowding is particularly stressful for males , but is either calming to females or does not have major effects on physiological endpoints ( brown and grunberg , 1995 , kotrschal et al . , 2007 ) . even when the same event is stressful to both males and females , the sequelae of stress exposure may differ ; for example , stress impairs classical conditioning in females , which is the opposite of the effect found in males ( wood and shors , 1998 ) . sex differences are also present in social behavior responses to stress : conditions of stress , high cort , and high crf facilitate pair - bonding in male prairie voles , while the same conditions impair pair - bonding in female voles ( devries et al . , 1996 ) . even where both sexes appear to be supported by their same - sex peers , male and female rats exhibit anxiety responses and adrenal reactions under different combinations of conditions ( westenbroek et al . , 2005 ) . some of these differences may relate to neurochemical variation in the brains of males and females . both oxytocin and vasopressin are important for social behavior , and there are sex differences in the production and release of these neuropeptides , the location and density of their receptors , and their roles in social behavior ( bales and carter , 2003 , carter , 2007 ) . there are many sex differences in human psychiatric disorders , most notably anxiety and depression , which some argue are based on sex differences in responses to stress ( bangasser and valentino , 2014 ) . one consequence of these findings is that we must study the interactions of stress and social behavior in both sexes in order to make meaningful conclusions about each sex . this idea is gaining greater appreciation within the scientific and funding communities ( mogil and chanda , 2005 , cahill , 2006 , zucker and beery , 2010 , couzin - frankel , 2014 , clayton and collins , 2014 , woodruff et al . , 2014 ) . the social environment can cause stress or ameliorate the impacts of stress , and social behavior responds to stress . these effects may happen all together or at different times , and they vary with individual genetic background , experience , sex , species , and other factors . while it is not feasible to study all such factors in a single study , almost a century of research has helped to show which stressors are most impactful in males and females , and how such stress is reflected in neurochemistry . interaction time is a longstanding measure of social behavior , but recent studies have begun to employ more nuanced approaches , for instance measuring helping behavior and distinguishing preferences for familiar versus unfamiliar individuals .
while adverse social conditions ( from subordination to isolation ) are potent stressors , the interactions between stress and social behavior also offer multiple entry points into the study of stress resilience . stress resilience varies with the early life social environment , in particular with experience of maternal behavior and a life history of exposure to mildly stressful experiences . resilience can also arise from the mitigating or buffering effects of positive ( or negative ) social interactions . there is a vast body of literature linking stress and social behavior and their roles in resilience . we may learn the most from these studies when we consider the social life of the organism , and look beyond group averages to individual variability .
the neurobiology of stress and the neurobiology of social behavior are deeply intertwined . the social environment interacts with stress on almost every front : social interactions can be potent stressors ; they can buffer the response to an external stressor ; and social behavior often changes in response to stressful life experience . this review explores mechanistic and behavioral links between stress , anxiety , resilience , and social behavior in rodents , with particular attention to different social contexts . we consider variation between several different rodent species and make connections to research on humans and non - human primates .
Introduction The social environment as a stressor Social behavior responds to stress (in species, sex, and context specific ways) Resilience and social buffering: social interaction can moderate effects of a stressor Anxiety and depression are associated with reduced social behavior Sex differences in reactions to stress and implications Conclusions
PMC4408361
according to the diagnostic and statistical manual of mental disorders , developmental coordination disorder ( dcd ) refers to those children whose acquisition and execution of motor skills is substantially below what would be expected given their age and their opportunities for learning . the coordination problems are expressed in slow and inaccurate performance of motor skills , including activities of daily life , sports , and leisure activities . dcd is sometimes called a motor learning deficit , as these children have difficulties learning to perform all kinds of motor skills in daily life that their typically developing ( td ) peers seem to acquire almost effortlessly . td children learn motor skills either implicitly or explicitly , by observing and imitating other children and adults or by trial and error . important in motor learning is the inherent ability of td children to monitor their own performance , to detect possible errors , and to identify possible sources of these errors . in addition to the ability to detect and correct errors , the amount of practice with a particular skill , or time - on - task , is an important determinant of improved motor skill in children . for instance , in infants , the amount of experience in locomotion is regarded as the most important determinant of improvements in walking skill . an important question in dcd is whether the motor learning deficit is merely a matter of lack of sufficient practice . put another way , if we give children with dcd sufficient opportunities to practice motor skills , will their motor problems gradually disappear ? although dcd is characterized by deficits in skill acquisition , remarkably little research has been done in this domain . the few studies that have attempted to study questions relating to deficient or inefficient learning in children with dcd are inconsistent in outcome . it has been reported that children with dcd are delayed in reaching the level of automaticity . an important new element in the description of the dsm 5 criteria is that the child with dcd should have had enough opportunities for learning . recent theory also acknowledges that contextual factors may play a large role in mediating developmental outcomes and should be taken into consideration early on . to date , no intervention studies are known that have tested whether children with dcd only need more time and opportunity to practice in order to reach an appropriate level of performance in motor tasks compared to their td peers . if that were the case , then specific intervention would not be necessary ; rather , providing the child with more practice time and enriched affordances in the home and school environment should suffice . based on these findings , there is a clear need for rigorous intervention studies using different motor learning paradigms , ranging from simply giving enough opportunity to practice to tailored , client - focused interventions . ideally , these studies should involve not only valid and reliable test outcome measures but also pre- and post - intervention fmri measurements to look at task - related changes in the brain . the icf defines participation as the involvement of a person in a life situation . in the case of motor activities , it encompasses involvement in activities of daily living ( adl ) or in sports and leisure - time motor activities . participation refers not only to the amount of time a child engages in motor activities but also to the perceived ability to perform well and the motivation to perform an activity .
several studies suggest that children with dcd have an activity deficit , as they participate less in adl and in both organized and non - organized physical activities [ 7 , 8 ] . organized activities include both school - related activities and activities outside school , such as participating in organized sports . non - organized activities include activities performed during leisure time at home or during recess at school . for instance , observation of the amount of motor activity during recess at school showed that children with dcd were most often onlookers , observing the active play of other children . but also during organized activities such as physical education classes at school , children with dcd engaged more often in off - task behaviors , such as going to the toilet , than in on - task behaviors . it is obvious that the activity limitations experienced by children with dcd will influence participation in sports or activities in which fundamental movement skills are required . several other reasons have been put forward for their reduced participation , such as avoidance of failure experiences . their lower scores on scales measuring perceived physical competence demonstrate that at a certain age these children are well aware of their lack of competence in motor skills [ 12 , 13 ] . the vicious circle that develops out of avoidance of participation in motor activities is obvious : reduced participation leads to diminished opportunities to practice motor skills , which may result in less opportunity to improve motor skill performance . as a consequence , children with dcd not only become less physically fit but are also found to experience more loneliness , as they participate less in social play and sports [ 14 ] . all these factors together may contribute to the development of internalizing symptoms such as anxiety and depression in children with dcd [ 14 , 15 ] . however , reduced participation is not the only explanation for the motor skill deficits of children with dcd . in a recent , yet unpublished study , parents of children with dcd completed a questionnaire about activities of daily living . this questionnaire consists of three scales ; parents rate for 23 adl items how well children are able to perform the activities , whether it took longer to learn the activities , and how often children participate in these activities . according to the parents , children with dcd had difficulty performing many of these activities and took longer to learn them . however , in the majority of those tasks ( 17 out of 23 ) , they were rated to participate as often as their peers . these adl included dressing , writing , hopping in squares , and brushing teeth , tasks often mentioned as difficult to perform for a child with dcd . although we endorse the necessity of participation or time - on - task for learning a motor skill , the results of the aforementioned study highlight that mere participation may not be enough to improve the level of motor skill learning in children with dcd . other factors may also play a role , such as learning from doing and the problem - solving skills of a child . in order to improve performance , it is necessary that children with dcd develop the right problem - solving skills , such as the ability to identify and correct errors . to this end , it is also important that they possess an accurate understanding of the requirements of a motor task [ 16 ] . several studies have demonstrated that this understanding is often lacking in dcd [ 17 , 18 ] .
consequently , when they try to identify possible causes for unsuccessful task performance , they often focus on less relevant aspects or incorrect causes , for instance by referring to lack of luck as a cause for failure or by stating that the target was too far away when in fact they threw the ball too softly . in general , children with dcd were found to plan , monitor , and evaluate their performance less often . this is in line with a recent study : hyland and polatajko reported that children with dcd were able to recognize that their motor performance was not adequate , but they failed to identify the cause of their performance deficit . for that reason , sangster and whitebread concluded that intervention should also incorporate the development of problem - solving abilities in children with dcd to enable them to improve their motor skills [ 16 ] . otherwise , impaired cognitive - motor function may limit their ability to benefit from interactions with the environment and compromise their psychosocial development . thus , supportive , enabling environments should create opportunities for motor skill development and promote emotional engagement in physical activity . so we may conclude that treatment of dcd may not just be a matter of offering opportunities to practice motor skills but also of creating an environment in which children can engage in the ( physical ) activity and learn to detect and correct their motor performance . in the next paragraphs , we describe the intervention approaches that have been developed for children with dcd . over the past 40 years , several treatment methods have been developed , which can be divided roughly into two categories : process - oriented treatment approaches and task - oriented treatment approaches . the main assumption of process - oriented or deficit - oriented approaches is that a deficit in a body structure or sensory process is responsible for the motor skill problems of children with dcd . the aim of treatment is to remediate this deficit , which is expected to result in improved motor task performance . one of the most well - known examples of a process - oriented approach is sensory integration therapy . however , despite its popularity , the results of a recent review and meta - analysis of the efficacy of interventions ( published between 1995 and 2011 ) showed that the effect size of process - oriented intervention is weak ( 0.12 ) [ 23 ] . the results of this study are in line with those of a comparable meta - analysis summarizing the efficacy of interventions investigated in studies published between 1983 and 1993 . due to the limited availability of methodologically sound studies , the application of process - oriented approaches ( like sensory integration therapy ) was not recommended in the recent recommendations of the european academy of childhood disabilities ( eacd ) on the definition , diagnosis , and intervention of dcd [ 25 ] , nor in a policy statement of the american academy of pediatrics . task - oriented approaches focus on teaching those motor tasks that are difficult for a child with dcd , and are designed to improve functional outcomes . for each task , task performance is analyzed in order to identify the aspects of the task that are difficult for a child . recent examples of task - oriented interventions are neuromotor task training ( ntt ) [ 27 - 29 ] and the cognitive orientation to daily occupational performance ( co - op ) [ 30 , 31 ] . ntt is based upon motor learning theory and the ecological approach [ 23 , 29 ] .
the first step is the identification of those tasks and activities related to participation that are of greatest concern to the child and his family ; these are the target of treatment . by using motor teaching strategies , therapists guide children through the different phases of motor skill learning by gradually increasing task demands . task constraints refer to aspects of the task that restrain a motor activity , such as when a child can not catch a ball that is thrown with too much force or can not close a shirt with very small buttons . environmental constraints refer to aspects of the environment that impede performance , for instance when a child tries to cycle when the wind blows too hard or when people are watching . these task and environmental constraints are manipulated in intervention sessions to provide the opportunity to practice and improve the deficient motor skills . in the early phase of learning , providing simple verbal instruction as to the intended outcome of the skill may be adequate to stimulate practice . next , the child is provided with augmented feedback ( information about their performance from the therapist or other external sources ) so that they can improve performance on subsequent practice attempts . techniques such as guided discovery ( ask , do not tell ) are applied to promote efficient learning , to ensure development of the skill , to influence the child 's motivation to persist with practice , and to encourage the child to reflect on their performance to promote problem - solving skills . co - op is a child - centered approach based upon cognitive behavior modification theories , in particular the verbal self - instruction strategy developed by meichenbaum . it focuses on the acquisition of self - chosen occupational skills . during a co - op intervention , a child learns this self - instruction strategy , which enables the child to identify why performance was not successful and to invent and execute plans to correct the performance ( the goal - plan - do - check strategy ) . both ntt and co - op proved to be effective task - oriented intervention approaches for children with dcd according to the results of the meta - analysis by smits - engelsman et al . the common factor in both approaches seems to be the development of meta - cognitive skills during intervention , such as the ability to identify and correct performance problems . in a recent study , hyland and polatajko demonstrated that children with dcd learned to improve their ability to self - monitor their performance and to identify and correct errors during a co - op intervention . children not only analyzed their performance more often but were also better able to analyze what was going wrong , for instance when a child failed to write straight and came up with the solution that a ruler was needed to improve performance . according to the authors , this effect was prompted by the augmented feedback provided by the therapist and by the use of guided discovery , i.e. , by asking questions about the performance . these results are in line with those of a study of ntt investigating the association between the application of teaching principles and treatment effectiveness . in particular , those teaching principles that enhanced problem - solving abilities proved to be effective , such as asking questions about the children 's task performance and sharing knowledge about how to improve task performance .
so , we may conclude that it is not only the increased time to practice motor skills that lies behind the effectiveness of task - oriented approaches but also the development of meta - cognitive problem - solving skills , which the child can draw on to learn other skills . since the publication of the meta - analysis of the effectiveness of intervention approaches , a new development can be noticed in intervention studies , i.e. , the application of serious games . as mentioned before , time - on target is an important ingredient of treatment success . practicing motor skills during intervention sessions is often not enough to increase motor skill performance . in order to increase treatment effectiveness and to promote transfer of what has been learned to daily life , however , children with dcd are often not inclined to engage in physical activities at home . according to a study of kwan et al . , boys with probable dcd reported not enjoying physical activities , and they did not feel that they were able to practice regularly . as a consequence , their motivation to become physically active is low . as mentioned before , both motivation and the perceived ability to perform well are important moderators of participation . the lack of enjoyment and motivation prompted clinicians to consider options that might encourage a more positive attitude towards physical activities . children will be more inclined to practice the activities if they are enjoyable and if they experience success . therefore , the need for enjoyment and the need to experience success should be important ingredients of intervention options . a serious game is the application of an interactive game that can be used for purposes other than mere entertainment , such as rehabilitation . children in general like to play games , as they are fun and motivating , as often some kind of reward is offered when they perform well . application of serious games as part of an intervention session or as exercises at home may motivate the children to practice more often and as such may increase the number of hours children with dcd are physically active . several commercially available games have been developed that can encourage children and adults to be physically active , such as the nintendo wii fit training , the kinect , and the eyetoy for playstation . recently , four studies have been conducted to investigate the effectiveness of these games as ( part of ) an intervention for children with dcd . in a small pilot study , hammond et al . investigated the effectiveness of a wii fit intervention on motor proficiency and on emotional and behavioral problems of children with dcd . two groups of children were included : a group of ten children with dcd who played nine wii fit games focusing on coordination and balance and a group of eight children who practiced motor skills in groups 1 h per week . each intervention session lasted 10 min and took place three times a week for a month . motor abilities were measured with the bruininks - oseretsky test ( bot-2 ) , and emotional and behavioral problems were measured with the strengths and difficulties questionnaire ( sdq ) filled out by parents . significant improvements in motor skills were seen after wii fit intervention , not only in skills measuring balance but also fine motor precision and visuo - motor integration , although less pronounced . however , the improvements in motor skills were not maintained after a period of 2.5 months without wii fit intervention . 
nevertheless , the results of this study are encouraging , as they provide evidence of the immediate effectiveness of a wii fit training , its popularity with the children , and its positive effect on their motivation to practice . in another pilot study , the effectiveness of the playstation 2 eyetoy game on motor skills and aspects of physical fitness was explored for 4 - 6-year - old children with dcd . nine children referred to physical therapy suspected of dcd were included who played the eyetoy games for 60 min once a week over 10 weeks . several eyetoy games were played , such as volleyball , bowling , and boot camp , requiring accurate upper - extremity movements that involve motor planning , balance , and eye - hand coordination . effects of intervention were assessed with the movement assessment battery-2 ( mabc2 ) , the developmental coordination disorder questionnaire ( dcdq ) , the walking and talking test , and the 6-min walk test ( 6mwt ) . like the earlier study , children 's overall performance on the mabc2 improved after intervention , particularly balance skills . an improvement in daily motor activities was also reported by the parents of the children . walking speed and walking distance , however , did not improve . the lack of effect of intervention on walking endurance may be due to the fact that walking endurance was only practiced in two of the games . an interesting part of this study is that from the fifth intervention session onwards , games were introduced in which children had to play against their parents . both children and parents enjoyed playing together , and the children exerted more effort when playing with their parents . although the evidence is scarce , the results of several studies confirm that virtual reality games enhance the motivation to engage in practice . players were found to perform better in a rehabilitation setting when they played in competition . in general , one of the major advantages of virtual reality games is the opportunity to vary task and environmental constraints : the games offer enriched environments , and children can practice functional movements repeatedly ( time - on - task ) under different task constraints . a third study investigated the effect of the wii fit balancing games on the balance skills of 14 children with probable dcd and balance problems . children practiced the wii balancing games for 30 min , three times a week for 6 weeks . eighteen wii balancing games were available , and children were free to choose the games they wanted to play to increase variability of practice . a second group of 14 children with probable dcd and balance problems was included to serve as a no - treatment control group . performance was assessed pre - post with the movement assessment battery for children ( mabc2 ) , the wii fit ski slalom test ( which was not practiced ) , and three subtests of the bot-2 ( balance , running speed and agility , and bilateral coordination ) . after intervention , a positive effect on balance skills was found , as measured with the balance test of the mabc2 and the bot-2 scales running speed and agility and bilateral coordination . the effects of the wii intervention were largely task specific , as only those skills improved that were close to the balance tasks trained . the effects of a wii fit training have also been compared to those of ntt . the group receiving ntt consisted of 27 children with dcd , who were treated in groups of five to eight children , two times a week for 45 - 60 min over 9 weeks .
a second group of 19 children with dcd underwent wii fit training for 6 weeks , three times a week for 30 min . these children practiced various games , such as cycling , skiing , soccer , and skateboarding games as well as five games incorporating arm movements . effects of treatment were assessed with the mabc2 , the functional strength measure ( fsm ) , a hand - held dynometer ( hdd ) , the muscle power sprint test ( mpst ) , and the 20 metre shuttle run test ( 20msrt ) . although both groups improved on the mabc2 , only for the ntt group was this statistically significant . interestingly , the effects of ntt also transferred to tasks not practiced , such as tasks measuring manual dexterity . previous studies on the effectiveness of ntt also demonstrated transfer effects to untreated skills , such as handwriting skills and balance . isometric strength did not improve for either the ntt or wii fit group , but anaerobic performance did , for both groups . taken together , these results suggest that application of serious games such as the wii fit might be useful for children with dcd with low cardiorespiratory fitness . the authors conclude that the results of their study support the application of both ntt and wii training for children with dcd , but the results of ntt were superior . the results regarding the effectiveness of the application of serious games in intervention in these four studies are promising . an important difference between serious gaming and regular physical or occupational therapy is that children learn to perform motor skills more implicitly during serious gaming , as no formal instruction is part of the training . the increase in performance after playing serious games demonstrates that children with dcd are able to learn implicitly . these findings are in line with those of other clinical groups , such as children with cp who benefit from implicit motor learning . possible elements that induce the effects of playing serious games are the multiple repetition of tasks , variability of practice , and the provision of augmented feedback about their performance . it is well known from literature about motor learning that these elements enhance the acquisition of motor skills . in addition , children often practice serious games on their own and not with other children . playing on their own has the advantage that they do not have to be afraid of failing in front of other children . when children with dcd can practice on their own , and when the games are fun to play , they will be more motivated to engage in physical activities . on the other hand , playing against children with the same level of disabilities in a therapy setting can enhance their motivation and research findings demonstrate that children perform better in competition . despite the effectiveness of serious games , the results of ferguson et al . demonstrate that practicing serious games is effective , but not as effective as a regular task - oriented intervention , such as ntt . as only one study has compared the effectiveness of ntt with those of serious gaming , definite conclusions can not yet be drawn . however , the results of ferguson et al . an important difference between serious gaming and regular intervention is that learning is more explicit in regular intervention , as therapists provide feedback , but also teach problem - solving skills , such as the goal - plan - do - check strategy in co - op , and engage children in guided discovery during both co - op and ntt . 
as mentioned before , children with dcd lack these meta - cognitive problem - solving skills , and teaching these skills seems to be an effective element of regular intervention . the superior effectiveness of regular intervention in comparison to serious gaming may be due to the development of these problem - solving skills during regular intervention . however , serious gaming can be an important complementary intervention , which may enlarge the effectiveness of regular intervention . however , for some children with dcd , participation is affected not only by poor motor learning but also by contextual barriers such as the attitudes and support of others , and self - efficacy beliefs , all of which should be taken into account to prevent an activity disorder . the results of studies evaluating the effectiveness of intervention demonstrate that without any kind of intervention , most children with dcd generally do not improve their motor skills to normal standards . so far , specific task - oriented intervention methods , such as co - op and ntt , have proven to be most effective . as well , merely offering the children the opportunity to practice motor skills , for instance by playing serious games , can lead to improved motor performance , but to a lesser extent than task - oriented intervention . what we learn from the success of serious games is the importance of success experiences and practice in a safe environment . whether serious games can be enlisted to produce sustained effects is an issue for future investigation . whatever the intervention , explicit motor teaching with an emphasis on developing meta - cognitive problem - solving skills seems to be a necessary ingredient for children with dcd . furthermore , influencing contextual factors to create circumstances where the children can be active and keep practicing have to be part of the overall approach . important factors may be to create a support system , which encourages children to stay active over time . for instance , informing parents about the necessity may help them to support their children to practice regularly . to date , only a few studies evaluating the effectiveness of intervention for children with dcd have been conducted , and more research specifically on the best way to deliver the intervention is necessary to come to more definite conclusions . questions that need to be addressed are how implicit and explicit learning can best be combined in intervention and in which stage of motor learning they may be effective . also , the long - term effects of serious gaming have yet to be established . so far , we know little about the transfer of the effects of gaming to daily life motor performance . and it is important to investigate whether children continue to practice once the intervention has come to an end .
developmental coordination disorder ( dcd ) is often called a motor learning deficit . the question addressed in this paper is whether improvement of motor skills is just a matter of mere practice . without any kind of intervention , children with dcd do not improve their motor skills generally , whereas they do improve after task - oriented intervention . merely offering children the opportunity to practice motor skills , for instance by playing active video games , did lead to improved motor performance according to recent research findings , but to a lesser extent than task - oriented intervention . we argue that children with dcd lack the required motor problem - solving skills necessary to further improve their performance . explicit motor teaching with an emphasis on developing these problem - solving skills is a necessary ingredient of intervention in dcd , leveraging the effectiveness of intervention above that of mere practicing .
Introduction DCD, a Motor Learning Deficit Decreased Participation in Motor Activities in Children with DCD Problem-Solving Abilities of Children with DCD Intervention Methods and Their Effectiveness A New Development in Intervention Conclusion
PMC4111217
infectious diseases caused by flaviviruses are major concerns in the public health community , particularly those that are drug - resistant or resistant to antibody - mediated neutralization . however , the basic mechanism by which mutations in antigenic proteins lead to evasion of antibody neutralization is still unclear . in flaviviruses , the envelope protein domain iii ( ed3 ) harbors many of the critical mutations that have been shown to reduce antibody neutralization . the ed3 forms a classic β - sandwich fold that is conserved among flaviviruses ( figure 1a , side view ) . the n - terminal region and the loops bc , de , and fg form a surface patch that is exposed to the solvent in the viral particle ( figure 1a , top view ) . structural studies have shown that effective neutralizing monoclonal antibodies ( mabs ) recognize this surface patch with a high degree of shape complementarity . structural and sequence analysis of the ed3 from different flaviviruses . structural alignment of the ed3 from different flaviviruses : dengue virus types 1 , 2 , 3 and 4 , denv-1 ( pdb 3irc ) , denv-2 ( pdb 1tg8 ) , denv-3 ( pdb 1uzg ) , and denv-4 ( pdb 2h0p ) , respectively ; west nile virus , wnv ( pdb 1s6n ) ; st . louis encephalitis virus , slev ( pdb 4fg0 ) ; omsk hemorrhagic fever , omsk ( pdb 1z3r ) ; yellow fever virus , yfv ( pdb 2jqm ) ; japanese encephalitis virus , jev ( pdb 1pjw ) ; tick - borne encephalitis , tbe ( pdb 1svb ) . the rmsd ( all atoms ) among all ed3 structures is between 1 and 4 å . β - strands are colored in yellow , random coils are colored in green , and a highly conserved hydrophobic core found in most flaviviruses is colored in gray . the loops de ( cyan ) , bc ( red ) , n - terminus ( blue ) , and fg ( pink ) form a patch of residues that are exposed to the solvent in the context of the intact viral particle . the figures were rendered using pymol v. 0.97 ( delano scientific llc , san carlos , ca ) . the sequence alignment shows the amino acids of each solvent - exposed loop and the conserved hydrophobic core . interestingly , within the large ed3 - mab interaction surface , a few specific mutations significantly decrease mab binding and reduce neutralization in vivo , depending largely on the type of side chain substitution . in the ed3 of west nile virus ( wnv ) , the mutations t332k and t332m reduced mab binding almost completely ( > 80% reduction compared with wild - type ) . the mutation t332a partially reduced mab binding ( 50% reduction ) , whereas t332v had no effect on antibody binding . this broad spectrum of mutational effects on mab binding is also observed in the ed3s of other flaviviruses , such as mutations at positions s331 and d332 in japanese encephalitis virus ( jev ) , a serologically close relative of wnv , or mutations at positions k305 and k307 in dengue virus type 2 ( denv-2 ) , a distant relative of wnv and jev . moreover , not only do mutations within the interaction surface itself prevent mab binding and neutralization , but mutations can also occur outside the binding site and influence antibody binding via long - range effects . these observations motivated us to address the following questions : what is the nature of epitopes ? how do mutations outside the antibody contact region reduce mab binding ? is there a network of interacting residues , wherein mutations perturb distantly positioned regions of the protein ?
addressing these questions using high - resolution structures and antibody binding data is challenging and time - consuming , particularly for phylogenetically and serologically related viruses whose protein antigens typically share a high degree of structural and sequence similarity . the structures of the ed3 of various flaviviruses , shown in figure 1a , are very similar , making it difficult to identify specific features in each viral protein that may be associated with unique antigenicity . hence , an alternative approach is needed , and the ed3 of flaviviruses was adopted as a model system . based on increasing experimental evidence highlighting the importance of protein conformational fluctuations in biological function , we hypothesized that epitopes are intrinsically encoded in the thermodynamic properties of the conformational fluctuations of the ed3s . thus , even if the protein antigens look alike , their thermodynamic properties might differ significantly . to test our hypothesis , we investigated whether mab - resistant mutations affect the conformational fluctuations in the ed3s of two related but serologically distinct flaviviruses , wnv and denv-2 . in this study , we used an algorithm that allowed us to explore the thermodynamic properties of correlated fluctuations in computer - generated protein ensembles . our results support a model of evasion of antibody - mediated neutralization that involves changes in the protein 's conformational fluctuations and long - range interactions between mutation sites and epitopes . this study also reveals fundamental thermodynamic properties of residues residing in epitopes that may have general implications for other pathogens . to investigate the thermodynamic properties of protein fluctuations in the ed3s from wnv and denv-2 , we used the corex algorithm . briefly , corex generates a native state ensemble from the target protein structure ( pdb 1s6n for wnv and 1tg8 for denv-2 ) through the combined unfolding of adjacent groups of residues defined as folding units . folding units are treated as native - like or as unfolded peptides . within this ensemble , the free energy of each conformational state , gi , is calculated with a surface - area parametrization that has been validated experimentally . by rewriting gi as an equilibrium constant ( ki ) between the conformer i and the folded state ( ki = exp [ - gi/(rt ) ] ) , we define the probability ( pi ) of each conformer as pi = ki / q ( eq 1 ) , where q is the sum of the statistical weights ki over all possible states in the ensemble ( the partition function ) . moreover , gi can be resolved into the energetic contributions from each residue in the protein , gi = σj gij ( the sum running over all amino acids j ) , which allows the investigation of thermodynamic properties at the residue level . within the ed3 ensembles , we estimated the relative stability of each residue , gf , j , from the ratio of the probability that residue j is in the folded state ( pf , j ) to the probability that the same residue is in the unfolded state ( pu , j ) ( eq 2 ) . from eq 2 , residues with high stability are mostly in the folded state in the ensemble , whereas residues with low stability are mostly in the unfolded state . we describe pairwise residue interactions in the ed3s as the thermodynamic coupling between two residues j and k. the first step to obtain a quantitative value of the thermodynamic coupling is to evaluate the effect of an energetic perturbation of magnitude g on residue k over the stability of residue j.
this is accomplished by recasting eq 2 to consider the folding state of a second residue k. after rewriting eq 2 , we evaluated the effect of g over the stability constant of residue j ( eq 3 ) . in eq 3 , the modified residue stability constant ( gf , j ) considers the probabilities of states where residues j and k are both folded ( pf , j|f , k ) , both unfolded ( pu , j|u , k ) , or one in each state ( either pf , j|u , k or pu , j|f , k ) . thus , an energetic perturbation on residue k ( = exp [ - g/(rt ) ] ) can have a stabilizing , a destabilizing , or no effect over residue j. this thermodynamic effect can be quantified by subtracting eq 3 from eq 2 , yielding gf , j ( eq 4 ) . importantly , this analysis does not require residues j and k to be in close proximity in the primary sequence or in the tertiary structure of the protein . thus , the value gf , j from eq 4 characterizes long - range effects of residue k over residue j. given that a long - range effect is not a bidirectional phenomenon per se , in this study we define thermodynamic coupling as a bidirectional long - range effect . in other words , thermodynamic coupling is the sum of the influence of a perturbation on residue j over residue k ( gf , kpert , j ) and the reciprocal effect , namely , the influence of a perturbation on residue k over residue j ( gf , jpert , k ) , that is , gj , k = gf , kpert , j + gf , jpert , k ( eq 5 ) . thermodynamic coupling between two residues can be manifested in three ways : positive ( gj , k > 0 ) , negative ( gj , k < 0 ) , and neutral ( gj , k = 0 ) . positive coupling occurs when stabilization or destabilization of the j residue stabilizes or destabilizes , respectively , the k residue . for negative coupling , namely , stabilization of the j residue results in the destabilization of the k residue , and vice versa . equation 5 provides a quantitative descriptor for long - range interactions between residues in the ed3s . with this analysis , we can investigate the long - range effects of a mutation by calculating the thermodynamic coupling between residues of mutant proteins ( gj , kmut ) . a general boltzmann equilibrium process , f(x ) , is defined by eq 6 . in eq 6 , f(x ) is the boltzmann factor , x corresponds to the thermodynamic coupling ( gj , k ) , xo is the midpoint of the transition between the two states , and c is a cooperativity constant that describes the sharpness of the transition . the response of a boltzmann equilibrium process is mathematically obtained by taking the derivative of eq 6 with respect to x ( eq 7 ) ; this derivative is a peaked function that describes the relationship between gmab and thermodynamic coupling ( gj , k ) shown in figure 6 . we measured antibody binding to the ed3 from wnv wild - type and the single - site mutants e390d , h396y , l312v , v371i , v338i , and l375i against three type - specific mabs , 5h10 , 5c5 , and 3a3 ( bioreliance corp . , rockville , md ) , using published protocols . binding data for other single - site mutant ed3s from wnv or denv-2 used in this study were obtained from the literature ( table 1 ) . binding data for the ed3 from denv-2 against two type - specific mabs , 5h5 and 9f16 , was obtained from hiramatsu et al . , gromowski and barrett , and pitcher et al . for the wnv ed3s k307r , k307e , y329k , y329f , wild - type , a365s , l312a , a369s , k310t , t330i , t332a , and t332k , binding data against type - specific mabs 5h10 , 5c5 , and 3a3 was obtained with permission from beasley and barrett , volk et al .
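to make the bookkeeping behind eqs 1 - 5 concrete , a minimal numerical sketch is given below . this is not the corex implementation itself : the toy state energies , the folded / unfolded masks , the perturbation magnitude delta_g , the helper names , and the sign conventions are assumptions introduced only for illustration .

```python
import numpy as np

RT = 0.593  # kcal/mol near 298 K

def state_probabilities(state_dG):
    """eq 1: p_i = K_i / Q, with K_i = exp(-dG_i / RT) and Q = sum of K_i."""
    K = np.exp(-np.asarray(state_dG, dtype=float) / RT)
    return K / K.sum()

def residue_stability(p, folded_mask):
    """eq 2 (one common convention): dG_f,j = -RT * ln(P_f,j / P_u,j)."""
    p_fold = (p[:, None] * folded_mask).sum(axis=0)
    p_unf = (p[:, None] * ~folded_mask).sum(axis=0)
    return -RT * np.log(p_fold / p_unf)

def perturbed_stability(state_dG, folded_mask, k, delta_g):
    """penalize every state in which residue k is unfolded by delta_g
    (a stand-in for the perturbation factor exp(-delta_g/RT) in the text),
    then recompute all residue stabilities."""
    dG = np.asarray(state_dG, dtype=float).copy()
    dG[~folded_mask[:, k]] += delta_g
    return residue_stability(state_probabilities(dG), folded_mask)

def thermodynamic_coupling(state_dG, folded_mask, j, k, delta_g=0.5):
    """eqs 4-5: bidirectional change in residue stability, ddG(j<-k) + ddG(k<-j)."""
    base = residue_stability(state_probabilities(state_dG), folded_mask)
    ddG_j = base[j] - perturbed_stability(state_dG, folded_mask, k, delta_g)[j]
    ddG_k = base[k] - perturbed_stability(state_dG, folded_mask, j, delta_g)[k]
    return ddG_j + ddG_k

# toy ensemble: 4 states x 3 residues (illustrative numbers, not COREX output)
dG_states = [0.0, 1.2, 1.8, 3.0]          # kcal/mol relative to the folded state
folded = np.array([[True,  True,  True],
                   [True,  False, True],
                   [False, False, True],
                   [False, False, False]])
print(thermodynamic_coupling(dG_states, folded, j=0, k=1))
```

in an actual analysis , the state free energies and folded / unfolded masks would come from the corex enumeration of folding units described above , and looping over all ( j , k ) pairs would yield the n x n coupling matrices discussed in the results .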
for the wnv - ed3s e390d , h396y , l312v , v371i , v338i , and l375i , we measured mab binding against the same three type - specific mabs ( 5h10 , 5c5 , and 3a3 ) . to obtain thermodynamic information for the ed3s , we used a computer algorithm to generate an ensemble of conformational states around the native structure . this algorithm models fluctuations in proteins as simultaneous local unfolding reactions throughout the structure of the target protein ( see methods ) . after generating more than 10 conformational states of the ed3s , we sought to determine whether the ensemble could reveal virus - specific epitopes . their root - mean - square deviation ( rmsd ) between all atoms is 2.5 å ( figure 1b ) , and their sequence similarity is 50% ( figure 1c ) . hence , these ed3s have high structural similarity and share high sequence identity . the differential effect of mutations on resistance to antibody - mediated neutralization and their limited antigenic relatedness indicate that the primary epitopes in wnv and denv-2 are distinct . our previous studies using virus - specific neutralizing antibodies show that mutations at residue t332 generate a resistant phenotype in wnv , whereas mutations at the homologous position in the ed3 from denv-2 , residue s331 , have no significant effect . conversely , mutations in position k310 in the ed3 from wnv do not generate a resistance phenotype in the virus , whereas mutations in the homologous position in denv-2 do ( residue k307 ) ( figure 2a ) . for example , relative to the wild - type proteins , the mutation t332a in the ed3 from wnv decreased antibody binding energy by 10% ( equivalent to a 5-fold reduction in binding affinity ) , whereas the mutation s331a in denv-2 did not have any effect ( figure 2b ) . the mutation k307g in denv-2 showed a similar reduction in binding energy and kd , while the mutation k310t in wnv had no significant effect ( figure 2b ) . ( a ) single - site mutations at homologous positions in the ed3 from denv-2 and wnv . mutation s331a in denv-2 and mutation t332a in wnv are located in the bc loop . mutations k307g and k310t in denv-2 and wnv , respectively , are located in the n - terminal loop . ( b ) effect of the mutations shown in part a on the binding energy to type - specific mabs . the vertical axis is the ratio of binding energies between mutant and wild - type ed3s . ( c ) residue stability plots ( gf , j ) of the ed3s from wnv ( top ) and denv-2 ( bottom ) . the dashed line represents a threshold that separates residues with low and high stability . this threshold was obtained by considering the 20% lowest stability from the residue stability distribution . residues with high stability ( blue ) are mostly located in the core of the protein , whereas residues with low stability ( red ) are solvent exposed . ( d ) linear effect of mutations on the residue stability of the ed3s from wnv and denv-2 . the top panel shows the residue stability plot of the wild - type ed3 from wnv and the resistant mutation t332k . the amino acids perturbed by the mutation correspond to residues in the bc loop ( red , dotted square ) . the bottom panel shows the residue stability plot for the wild - type ed3 and the resistant mutant k388g from denv-2 . the effect of the mutation is concentrated on residues in the fg loop ( red , dotted square ) .
the top panel displays the destabilizing effect of the mutation k307r ( blue arrowhead ) in wnv . the affected residues due to the k307r mutation correspond to the bc loop ( red , dotted box ) . the mutation k305g in denv-2 , which is located in a structurally homologous position to k307 in wnv , had destabilizing effects over residues in the fg loop ( bottom panel ) . to test whether the differential effect of mutations between the ed3 of wnv and denv-2 on antibody binding was due to differences in the conformational ensembles and their sensitivity to mutations , we characterized the stability ( eq 2 in methods ) of each residue in the ensemble of the wild - type ed3s . then , we determined what amino acids were thermodynamically perturbed ( stabilized or destabilized ) by resistant and nonresistant mutations previously described in wnv and denv-2 ( column 1 in table 1 ) . the plot of gf , j vs amino acid position of the wild - type ed3s is shown in figure 2c . the bottom panel shows the values for denv-2 , plotted as - gf , j to resemble a mirror image between the two residue stability profiles . this mirror image suggests that the analysis of the residue stability captures both the structural and the sequence similarities between these two protein antigens . as expected from the ed3 structures , highly stable amino acids are found in the protein cores , while those with low stability are located in solvent - exposed structures or random coils ( figure 2c , structures ) . not expected , however , were the two distinct effects that resistant mutations had on the residue stability constants . the first effect was the perturbation of the stability constant of residues near the mutation sites , or linear effects . this linear effect was observed in residues 328 - 338 for resistant mutations in wnv such as t330i , t332a , and t332k ( figure 2d , top ) . resistant mutations in denv-2 such as e383g or k388g had linear effects over the stability of residues 378 - 388 ( figure 2d , bottom ) . the second effect consisted of conformational effects , wherein the effect of the resistant mutations was not localized to nearby residues in the primary sequence . for example , mutations in the n - terminal loop , such as k307r in wnv and k305g in denv-2 , destabilized residues in the bc loop ( 328 - 338 ) for wnv and in the fg loop ( 378 - 388 ) for denv-2 ( figure 2e , top and bottom , respectively ) . notably , the bc and fg loops have been identified as major mab - neutralizing epitopes in wnv and denv-2 , respectively . to better understand the basis for mutational effects over the bc and fg loops , we sought to identify the thermodynamic properties of these loops that distinguish them from other structural elements in the ed3s . to address this , we determined the correlated thermodynamic fluctuations , or thermodynamic coupling , between residues in the ed3 ensembles . thermodynamic coupling ( defined as gj , k , eq 5 in methods ) applies even if two residues are distantly positioned in the protein structure , allowing the characterization of long - range residue networks in the ed3 . to experimentally validate the analysis of thermodynamic coupling , we compared calculated long - range mutational effects between distantly positioned residues in the ed3 from wnv with previously published experimental data . specifically , we correlated the effect of the single - site mutations k310t , t332a , and t332k on the solvent accessibility and dynamics of the single tryptophan residue w397 , which is located 2 nm away from the mutation sites in the ed3 structure .
the mutations k310 t , t332a , and t332k linearly increased the solvent - accessibility and dynamics of w397 , which correlated very well with the linear increase of our calculated long - range mutational effects ( r = 0.98 ) ( figure 3a ) . the agreement between experimental and calculated data demonstrates that analysis of the thermodynamic coupling captures long - range interactions between residues in ed3 . these published results are consistent with those from molecular dynamics simulations ( gottipati , dodson and lee , in preparation ) . ( a ) long - range mutational effect on the solvent accessibility and protein dynamics near the single tryptophan residue in the ed3 from wnv ( w397 ) . data were obtained with permission from maillard et al .. the values of gw397mutation were calculated using eq 4 in methods . the structure on the right side displays the position of the mutation sites and w397 . ( b ) thermodynamic coupling ( eq 5 in methods ) between each residue pair in the ed3 from wnv . the color scaling represents the thermodynamic coupling between each residue pair , from 3 kcal / mol ( purple / blue color ) to 3 kcal / mol ( orange / red color ) . the blue and black squares highlight distantly positioned residues that share high positive thermodynamic coupling . ( c ) data is rendered identically as in panel b. we determined the entire set of long - range interactions in the ed3 by calculating the pairwise thermodynamic coupling in the wild - type protein ( gj , kwt ) . the plot of gj , kwt forms an n n matrix that describes the thermodynamic coupling at the residue level , where n is the total number of amino acids in the structure . we rendered the values of gj , kwt in a color map ( figure 3b , c ) , wherein amino acids with positive thermodynamic coupling are in red , residues that manifest negative thermodynamic coupling are in blue , and residues that are not thermodynamically coupled are in green . the results of gj , kwt for wnv , shown in figure 3b , reveal several features . first , the algorithm captures the expected local perturbations to nearby residues , as delineated by the positive thermodynamic coupling running in diagonal from the lower left to the upper right of figure 3b . second , there is positive and negative thermodynamic coupling between several groups of residues that are distantly positioned in the tertiary structure , indicating that there is a complex long - range interaction network between residues that is encoded in the native state ensemble of the ed3 . third , there appears to be a grid of high coupling that connects distal regions of the protein . particularly pronounced among these regions are residues 375 through 384 , which are thermodynamically coupled with the entire ed3 . the relatively high coupling between these sites and the rest of the protein indicates a high mutual susceptibility to perturbations that will likely affect other residues . when residues 375384 are mapped onto the ed3 structure , it is clear that they are part of a highly conserved hydrophobic core of this family of viral protein domains ( figure 1 , -strand colored in gray ) . similar analysis of the thermodynamic coupling in ed3 from denv-2 also showed positive thermodynamic coupling between distant residues , as well as a conserved core of hydrophobic residues that are thermodynamically coupled with the entire protein antigen ( residues 370379 ) ( figure 3c ) . 
residues 370379 in ed3 from denv-2 and residues 375384 in ed3 from wnv altogether , these analyses show that the ed3s do not behave as rigid structures but rather as conformational ensembles that are capable of transmitting long - range effects between even distant residues . against this backdrop of global coupling between core residues , we sought to determine the effect of mutations on the residue networks of the ed3s . we investigated how single - site mutations in wnv and denv-2 perturb the pattern of thermodynamic coupling observed in the wild - type ed3s ( figure 3b , c , respectively ) . our goal was to identify the unique thermodynamic features of mutations that confer resistance to antibody - mediated neutralization compared with those that do not . we analyzed the thermodynamic coupling of a panel of resistant and nonresistant mutations of wnv and denv-2 ( table 1 ) . for the mutants analyzed , the general pattern of thermodynamic coupling was qualitatively similar to that of the wild - type ed3s ( gj , kmut gj , kwt ) . the similarity between gj , kwt and gj , kmut for both wnv and denv-2 ( figure 4 ) indicates that the overall energetic hierarchy of states ( i.e. , what states are most stable ) of the ed3s is thermodynamically robust and that single mutations do not disrupt the basic networks of long - range interactions between residues . despite these similarities , however , important differences were revealed by subtracting the values of thermodynamic coupling of the wild - type ed3 , gj , kwt , from those of the mutant , gj , kmut ( figure 5 ) . gj , kwt , quantifies the relative impact of each mutation on the coupling network in each ed3 and reflects the response of the ensemble to a resistant or nonresistant mutation . thermodynamic coupling of single site mutants of the ed3 from wnv and denv-2 ( a ) thermodynamic coupling observed for the wnv wild - type ed3 and for a representative mutant . ( b ) thermodynamic coupling from denv-2 wild - type ed3 and a representative mutant . ( a ) effect of mutations on the thermodynamic coupling of the ed3 from wnv . resistant mutations such as k307r ( middle ) decreased the thermodynamic coupling of residues 328338 . the mutation t332k , another resistant mutation , increased the thermodynamic coupling of residues 328338 . mutations that do not confer resistance against antibody neutralization , such as the k310 t mutation , did not have any large effect on the thermodynamic coupling of residues 328338 . ( b ) effect of mutations on the thermodynamic coupling of the ed3 from denv-2 . nonresistant mutations such as i379v did not significantly affect any other part of the protein . gj , kwt for resistant mutations in the ed3 from wnv revealed three striking features ( figure 5a ) . first , the effect of the mutation on the thermodynamic coupling was localized mainly to residues 328338 , that is , the bc loop , the major neutralizing epitope in the ed3 of wnv . second , depending on the type and position of the mutation , we observed two distinct effects on the thermodynamic coupling of residues 328338 . mutations at position t330 and t332 increased the magnitude of thermodynamic coupling ( positive effects ) , while mutations at position k307 decreased the degree of thermodynamic coupling ( negative effects ) ( figure 5a , center and right ) . third , residues 328338 become thermodynamically coupled with the rest of the amino acids in ed3 . 
this last observation indicates that the effect of a resistant mutation is not limited to nearby residues . thus , the ability of a mab to neutralize a virus can be impaired by a single mutation , even if that mutation is outside of the mab binding site . the analysis of mutations that have no effect on mab neutralization resistance in wnv ( e.g. , k310 t ) revealed minimal changes in the thermodynamic coupling over residues 328338 or other regions in the ed3 ( figure 5a , left ) . gj , kwt for denv-2 also revealed positive or negative effects on the thermodynamic coupling for mutations that confer resistance to neutralization ( figure 5b ) . however , and most interestingly , the effects of the mutations were concentrated on residues in the fg loop of the ed3 structure ( residues 378388 ) , the major epitope in denv-2 ( figure 5b , center and right ) remarkably , resistant mutations such as k305 g and k307 g had a negative effect on the thermodynamic coupling of the fg loop despite being located outside of the affected area . moreover , residues 378388 became thermodynamically coupled to the rest of amino acids in ed3 , underscoring again the long - range effects of the resistant mutations on the thermodynamic properties of the protein antigen . since the calculations in this study are based on the thermodynamics of protein fluctuations , these results indicate that resistant mutations in ed3 change the conformational dynamics of specific loops that correspond to the primary epitopes in the protein antigens . moreover , the positive and negative coupling effects on these loops are observed exclusively in mutations that confer antibody resistance . in the present study , we used an experimentally validated ensemble model to investigate the molecular mechanism by which single - site mutations in wnv and denv-2 lead to evasion of antibody - mediated neutralization . our approach successfully identified the location of the primary epitopes in the ed3s of wnv and denv-2 , despite the high structural and sequence similarity between the two protein antigens . thus , even if two viral proteins share high sequence and structural stability , the response of the ensemble to mutations is unique . equally important is that the ensemble - based thermodynamic analysis used in this study reveals a previously unidentified relationship between mutational susceptibility and epitope location , a relationship that could prove useful for a priori identification of specific primary epitopes in different viruses . we observed that evasion of neutralization and decrease in mab binding was caused by mutations that triggered changes in the conformational fluctuations of the epitopes . based on this observation , we sought to establish a quantitative relationship between changes in the conformational fluctuations and mab binding . to this end , for each ed3 mutant , we plotted the thermodynamic coupling of the residues in the epitope ( gepitope ) and their corresponding binding energies to mabs ( gmab = rt ln[keq ] , where keq is the equilibrium binding constant ) . figure 6a shows the mean thermodynamic coupling of residues in the bc loop for wnv and their respective mab binding energies , gmabwnv ( figure 6a , red squares ) . in the same figure , we included the plot for denv-2 , but with the values of mean thermodynamic coupling of residues in the fg loop and their respective mab binding energies , gmabdenv2 ( figure 6 , blue circles ) . 
qualitatively , the two plots in figure 6a have a similar shape , following a peaked function . moreover , figure 6a shows that for both viruses mutations that significantly decrease the mab binding energy ( i.e. , low binding constants ) also exerted the largest changes in the thermodynamic coupling of epitope residues ( i.e. , gepitopemutant > gepitopewild - type or gepitopemutant < gepitopewild - type ) . the conserved effect of mutations in both ed3s suggests similar thermodynamic principles in the mechanism of evasion of antibody - mediated neutralization . ( a ) the correlation between thermodynamic coupling and binding energy to mabs follows a peaked function . the values for this figure were obtained by averaging the thermodynamic coupling of the residues in the ed3s that reside in the bc loop for wnv ( red symbols ) or in the fg loop for denv-2 ( blue symbols ) . the black line is a fit of the response of a boltzmann equilibrium process ( eq 7 in methods ) . ( b ) changes in the response of the boltzmann fit using decreasing cooperativity values ( c = 6.5 in brown , c = 4.0 in green , c = 2.0 in dark brown , and c = 1 in orange ) . the simplest model that describes the peaked function shown in figure 6 is the response of a boltzmann ( equilibrium ) process between two thermodynamic states ( eq 7 in methods ) . often in biophysics , a boltzmann process is described for folded / unfolded , bound / unbound , or active / inactive transitions in a protein . in this study , however , the boltzmann process represents a transition between low and high thermodynamic coupling states in the epitope of the ed3s . we interpret these thermodynamic coupling states as states of lower or higher fluctuations in the epitope relative to wild - type . by using eq 7 to fit the data in figure 6a , we determined the midpoint of the transition ( xo = 2.3 kcal / mol ) and a cooperativity factor , c , that describes the sharpness of the transition between the two states . we obtained a value of c = 6.5 , which reflects a sharp transition between states . the high degree of cooperativity implies that mutations that generate small changes in the thermodynamic coupling of the epitope result in a large reduction in the binding energy to a mab . this is clearly seen in figure 6a wherein changes in the thermodynamic coupling of the epitope as small as 0.3 kcal / mol result in a 1 - 2 kcal / mol change in binding energy ( equivalent to 10- and 100-fold decrease in mab binding affinity ) . to illustrate the biological relevance of a high degree of cooperativity characterizing the transition between thermodynamic coupling states , we systematically reduced c in eq 7 and recorded the changes in the shape of the peaked function observed in figure 6a . when we reduced the value of c from 6.5 to 1 , the height of the peaked function was significantly reduced ( figure 6b ) . this decrease in height implies that even if a mutation in ed3 generates a large change in the thermodynamic coupling of the epitope , the reduction in the mab binding affinity will be very small . we also observed that the peaked function broadens as the value of c decreases . this broadening effect indicates that a mutation in ed3 loses efficacy in reducing mab binding affinity even for large changes in thermodynamic coupling . in fact , for a change of 0.3 kcal / mol in the thermodynamic coupling and a c = 2 , the reduction in mab binding affinity is only 25% . when c = 1 , the expected change in mab binding affinity is negligible .
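the fit and the cooperativity argument above can be reproduced with a short sketch . because the explicit functional form of eqs 6 and 7 is not given here , the sketch assumes a standard two - state sigmoid , f(x ) = 1 / ( 1 + exp [ - c ( x - xo ) ] ) , whose derivative gives the peaked response ; the synthetic data points , the amplitude parameter , and the noise level are illustrative assumptions , seeded near the reported values of xo and c .

```python
# hedged sketch: fit a peaked "response of a Boltzmann process" (eq 7 analogue)
# to (epitope coupling, loss in mAb binding energy) pairs.
import numpy as np
from scipy.optimize import curve_fit

def boltzmann_response(x, x0, c, amplitude):
    """derivative of a two-state sigmoid: peaked at x0, sharpness set by c."""
    s = np.exp(-c * (x - x0))
    return amplitude * c * s / (1.0 + s) ** 2

# synthetic "data" roughly mimicking the reported trend (not the measured values)
rng = np.random.default_rng(0)
coupling = np.linspace(1.5, 3.1, 12)                  # kcal/mol
ddG_mab = boltzmann_response(coupling, 2.3, 6.5, 1.2) # ideal curve
ddG_mab = ddG_mab + rng.normal(0.0, 0.05, coupling.size)

popt, _ = curve_fit(boltzmann_response, coupling, ddG_mab, p0=[2.3, 5.0, 1.0])
x0_fit, c_fit, amp_fit = popt
print(f"midpoint x0 = {x0_fit:.2f} kcal/mol, cooperativity c = {c_fit:.1f}")
```

lowering c in this sketch flattens and broadens the peak in the same way as described for figure 6b .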
a phenotype that evades neutralization in wnv and denv-2 is clearly observed when the mab binding affinity is reduced 10-fold or more relative to wt . in order to decrease the mab binding affinity 10-fold when c = 2 , the change in thermodynamic coupling in the epitope needs to be as high as 1.52.0 kcal / mol . thermodynamic perturbations of 1.52.0 kcal / mol will likely lead to global protein unfolding and render the viral particle nonviable . our analysis is logical from a biological point of view because mutations that result in severe changes in the stability of the protein antigen will be selected against . instead , mutations that generate small changes in the thermodynamic coupling of residues in the epitope would be evolutionarily advantageous and provide the virus with an advantage over neutralizing antibodies without jeopardizing the structural integrity , fold , global stability of the protein antigen , or viability of the infectious virion . because of the fundamental thermodynamic nature of our analysis , we foresee that the strategy to generate resistance to antibody binding described here may be applicable to other pathogens . previous studies have proposed that epitopes are predominantly located in flexible regions of a protein antigen . our studies , however , provide a thermodynamic foundation for the role of protein flexibility in escape mutants . we found that resistant mutations increase or decrease the flexibility of the epitope by tuning up or down the degree of thermodynamic coupling between the site of mutation and the epitope . the modulation of thermodynamic coupling may occur via linear effects ( mutations in the epitope ) or by conformational effects ( mutations in neighboring structures of the epitope ) . these long - range mutational effects probably originate in the redistribution of conformational states within the ensemble of the ed3s and not through a direct mechanical pathway that evokes a static view of proteins . we envision that long - range interactions between residues residing in the epitopes and other amino acids may provide additional mutation sites in the virus to increase antigenic variations that allow the virus to go undetected by the host immune system . altogether , the application of an ensemble - based description of protein fluctuations quantitatively explains the effect of mutations in evasion of antibody neutralization . ensemble - based descriptions of the equilibrium have been successfully used to describe the mechanism of allosteric regulation in other protein systems ( i.e. , dhfr and camp receptor protein ) . thus , changes in the subpopulations of the native state ensemble ( due to mutation , ligand binding , or protein protein interactions ) seem to be a general strategy that nature uses to regulate biological function , including escape from antibody binding . the parallel observations for wnv and denv-2 allowed us to establish a unified mechanism of antibody - mediated neutralization for these two viruses . this mechanism correlates subtle changes in the conformational fluctuations in the epitope ( described as thermodynamic coupling ) and large defects in antibody binding affinity . the consequence of such correlation is that the virus is efficiently undetected by neutralizing antibodies without global unfolding of the viral protein . viruses have evolved many strategies to survive neutralizing antibodies , including the disruption of interaction surfaces between the epitope and the antibody ( steric hindrance ) . 
here , we describe the role of protein conformational fluctuations and how mutations inside or outside the epitope lead to evasion to antibody neutralization ( figure 7 ) . . a static view of the mechanism of antibody resistance involves steric hindrance of tight antibody antigen interactions due to mutations in the epitope ( i.e. , at position t332 , green arrowhead ) . alternatively , but not exclusively , mutations in the viral epitope ( red arrowhead ) may lead to changes in the conformational dynamics of the protein antigen to prevent antibody binding . moreover , we found that mutations not located in the primary epitope ( the bc loop in wnv ) that are thermodynamically coupled can also lead to antibody neutralization resistance via conformational effects ( black arrowhead ) . in this figure , antibody neutralization is a central component of the immune response against viruses , as antibodies reduce viral infectivity by preventing cell entry and tissue dissemination . a large effort is being invested in identifying epitopes in proteins from these and other flaviviruses . correctly identifying the epitopes is very important , since it will allow the development of new and efficient therapies ( i.e. , therapeutic antibodies ) , or the identification of target sites for small molecules to bind and inactivate viral proteins . without a priori information on the epitopes , our studies revealed a previously unidentified relationship between mutational susceptibility of the ensemble and epitope location . thus , an important and fundamental question arises : what is an epitope ? from a functional perspective , an epitope has been identified by mutations that decrease antibody - binding affinity and generate a neutralization - resistant phenotype in a virus . from a structural point of view , epitopes have been assigned as residues that are part of an interaction surface between the protein antigen and the antibody . in this study , we add a dimension to the identification of an epitope based on the fundamental thermodynamic property of correlated fluctuations in the protein ensemble . we propose that the combination of functional data , high - resolution structures , and thermodynamic information should allow a more accurate identification of the key residues responsible for the generation of evasion to antibody - mediated neutralization of viruses or other pathogens .
mutations in the epitopes of antigenic proteins can confer viral resistance to antibody - mediated neutralization . however , the fundamental properties that characterize epitope residues and how mutations affect antibody binding to alter virus susceptibility to neutralization remain largely unknown . to address these questions , we used an ensemble - based algorithm to characterize the effects of mutations on the thermodynamics of protein conformational fluctuations . we applied this method to the envelope protein domain iii ( ed3 ) of two medically important flaviviruses : west nile and dengue 2 . we determined an intimate relationship between the susceptibility of a residue to thermodynamic perturbations and epitope location . this relationship allows the successful identification of the primary epitopes in each ed3 , despite their high sequence and structural similarity . mutations that allow the ed3 to evade detection by the antibody either increase or decrease conformational fluctuations of the epitopes through local effects or long - range interactions . spatially distant interactions originate in the redistribution of conformations of the ed3 ensembles , not through a mechanically connected array of contiguous amino acids . these results reconcile previous observations of evasion of neutralization by mutations at a distance from the epitopes . finally , we established a quantitative correlation between subtle changes in the conformational fluctuations of the epitope and large defects in antibody binding affinity . this correlation suggests that mutations that allow viral growth , while reducing neutralization , do not generate significant structural changes and underscores the importance of protein fluctuations and long - range interactions in the mechanism of antibody - mediated neutralization resistance .
Introduction Methods Results Discussion Conclusions
PMC4557047
synthesis of new catalysts is critical for modern synthetic chemistry , but catalyst discovery is commonly based on time - consuming and frustrating trial - and - error protocols . to address this issue , many combinatorial approaches to accelerate the process have been developed.[1 , 2 } however , combinatorial catalysis has been hampered by limited access to structurally diverse systems , in particular with bifunctional scaffolds . non - trivial synthetic operations are commonly required for their assembly , which renders the systems unsuitable for automated high - throughput synthesis . furthermore , a significant drawback of most combinatorial catalytic protocols is the requirement for all candidates to be purified , characterized , and evaluated individually , regardless of their activity . therefore , collective catalyst screening is highly desirable , although only a few pioneering reports have been described . in recent years , substantial effort has been invested into the design of modular and responsive catalysts , in which the activity can be controlled through secondary inputs . in particular , highly successful self - assembled supramolecular catalysts with tunable activity have been developed for transition - metal catalysis and organocatalysis , providing quick and facile routes to bifunctional catalyst scaffolds . elegant studies by the reek and breit groups , have also shown the potential for simplified screening of such systems by deconvolution methods . dynamic covalent chemistry ( dcc ) uses reversible covalent bonds to mimic the adaptive nature of supramolecular systems , while retaining the advantages of well - defined , stable covalent compounds . for example , dcc has been successfully used for ligand / receptor identification , molecular - interaction analysis , kinetic processes , biopolymers , and chemical reaction networks . due to the high interest in developing tunable catalysts and catalytic systems , we became interested in the possibility of creating such a dynamic catalyst and investigating its properties . there are furthermore no known bifunctional catalysts , in which the two functional parts are connected by a reversible covalent bond . the application of dcc for catalyst discovery has otherwise been a long - standing goal . early examples relied on adaptive host systems that re - equilibrate in the presence of a transition - state analogue ( tsa ) , leading to amplification of the host that in theory best stabilizes the transition state . however , this leads to a need for design and synthesis of the tsa , and the screening process may result in a host that only binds the tsa without possessing any actual catalytic activity . because dynamic covalent chemistry is equipped with a developed framework for analysis of large mixtures , we imagined a possibility to directly find an optimal dynamic catalyst for a given reaction from a large adaptive system . herein , we have developed a method for the dynamic combinatorial synthesis of systems of bifunctional catalysts , followed by in situ identification of the optimal catalyst . baylis hillman ( mbh ) reaction , and a selective bifunctional catalyst with interesting properties was discovered . this method circumvents previous issues with dcc and catalysis by directly screening towards the actual chemical transformation in a kinetic manner . in bifunctional catalysis , two functional groups capable of activating substrates are mounted on one scaffold . 
it was hypothesized that if such a scaffold incorporated a reversible bond as shown in figure 1 a , a dynamic combinatorial system of potential bifunctional catalysts could be generated . by allowing the system to reach equilibrium , a predictable product distribution dictated only by the relative thermodynamic stability of the catalysts would be obtained . thus , dynamic deconvolution with selective component removal can be used to evaluate the effect of each component ( figure 1 b ) . note that the thermodynamic nature of the key bond connection is essential for the accuracy of the deconvolution approach . performing the same deconvolution on mixtures in which the bifunctional catalysts have been constructed under kinetic control would be far less reliable : such systems are highly vulnerable to kinetic traps , resulting in a risk of active catalysts being unexpressed in the mixture . for a dynamic system , as long as the building blocks utilized for constructing the catalysts are relatively uniform in terms of the dynamic covalent functional group , all possible linear combinations should be expressed in the system in predictable ratios . removal of a single system component therefore gives propagating effects , eliminating all possible linear combinations of that component ( figure 1 b ) . to utilize this dcc methodology for discovery of dynamic bifunctional catalysts , the mbh reaction was chosen , because organocatalysis has proven to be highly successful for this transformation , and the importance of bifunctionality has been well investigated . furthermore , studies have found that optimal catalyst architectures were difficult to predict through rational design , which together with the often very long reaction times highlighted a need for rapid catalyst screening methods.[19b , e ] traditionally , mbh reactions utilizing α,β - unsaturated ketones as donors are also hard to control , with polymerization and side - reactions often diminishing the efficiency . accurate catalyst predictions for such a reaction would indicate that the dynamic screening methodology possessed a high level of generality . thus , a racemic catalyst system that incorporated a nucleophilic lewis base , an h - bond donor and a dynamic imine bond connecting the two components was designed as shown in scheme 1 a. acids and water render the imine bond labile , but removal of either component leads to a structurally robust linkage . this conditional reversibility is essential , because a dynamic catalyst should be able to equilibrate under one set of conditions and stay inert under another . as illustrated in scheme 1 b , the catalyst should activate both the enone and the aldehyde , and preorganize the substrates for conversion towards the mbh adduct . the initial strategy was to first form the imines , and then allow the dynamic system to reach equilibrium in situ using an equilibration catalyst . this approach was tested for the model system shown in scheme 2 , using components a , b , 1 , and 2 to form imines a1 , a2 , b1 , and b2 quantitatively . herein , only component b2 fulfills the criteria for bifunctionality , because it possesses both a nucleophilic tertiary amine moiety and an h - bond donating thiourea group . scheme 2 shows the model - system formation with the indirect re - equilibration route ( top ) and the direct condensation route ( bottom ) .
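the combinatorial bookkeeping behind this deconvolution is simple to express in code ; the sketch below is purely illustrative ( the component labels mirror the a - d amines and 1 - 4 aldehydes of scheme 3 , but nothing here is taken from the authors' workflow ) .

```python
from itertools import product

# hypothetical labels mirroring the article's amine (a-d) and aldehyde (1-4) components
amines = ["a", "b", "c", "d"]
aldehydes = ["1", "2", "3", "4"]

def imine_system(amine_pool, aldehyde_pool):
    """every linear amine x aldehyde combination expressed at equilibrium."""
    return {am + al for am, al in product(amine_pool, aldehyde_pool)}

full = imine_system(amines, aldehydes)                          # 16 imines
without_3 = imine_system(amines, [x for x in aldehydes if x != "3"])

print(sorted(full - without_3))  # ['a3', 'b3', 'c3', 'd3'] -- all combinations containing 3 vanish
```

removing one building block removes every imine that contains it , which is exactly the propagating effect exploited in figure 1 b .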
however , upon attempted re - equilibration by addition of catalytic amounts of water and widely used transimination catalysts , such as benzoic acid or sc(otf)3 , it was noticed that the component distribution in the imine system did not change . control experiments confirmed that the system had in fact already reached equilibrium during condensation ( see the supporting information ) . this result was surprising , because amines and aldehydes in the absence of acid are known to condense irreversibly under kinetic control . it was thus hypothesized that the thiourea n - h protons could act as general acid catalysts and self - catalyze the synthesis of the very system in which they take part . further control experiments indicated that thioureas are indeed able to induce equilibration of dynamic imine systems , as long as water and/or amines are still present in the mixture ( see the supporting information ) . we also confirmed that transimination did not proceed at all in the absence of these species , which supports a hydrolysis / condensation mechanism for the re - equilibration . this effectively led to dynamic systems that were locked at equilibrium under dry conditions , because the water necessary for re - equilibration was continuously removed during the condensation phase . furthermore , it was also confirmed that thiourea structures were capable of catalyzing the exchange even in the absence of primary amines , indicating that aliphatic amine transimination catalysis was not the sole factor at play . to the best of our knowledge , this is the first report of h - bond - catalyzed transimination outside of biological systems . this finding greatly simplified our method , because the re - equilibration step shown in scheme 2 could be entirely omitted . furthermore , it added a further layer of complexity to this potential catalyst class , because these dynamic thiourea - imine catalysts are , in a sense , able to modify and catalyze their own formation . with equilibration conditions in hand , the system was next expanded to four aldehydes and four amines , as shown in scheme 3 , to increase the chances of finding an active catalyst . aldehydes 2 , 3 , and 4 comprise nucleophilic sites in the ortho position to the imine linker , whereas amines b , c , and d incorporate h - bond donors . cyclohexylamine a and benzaldehyde 1 were used as controls . a dynamic catalyst system composed of 16 different imines was formed analogously to the model reaction , and equilibrium was again attained during the condensation phase . next , ethyl vinyl ketone and p - nitrobenzaldehyde were added directly to the system as shown in figure 2 a. the mbh reaction proceeded readily , and a 20 - 25 % yield of the desired adduct 5 was obtained after 24 h , as indicated by nmr analysis . thus , at least one of the 16 potential catalysts in the mixture possessed mbh activity . figure 2 b shows the observed initial rate difference for the mbh reaction upon selective replacement of the investigated components 2 - 4 or b - d by an equivalent amount of the non - functionalized analogues 1 or a in the pre - generated catalyst system . conditions : 0.12 mmol p - nitrobenzaldehyde , 0.24 mmol ethyl vinyl ketone , 4 å ms ( 300 mg ) , anhydrous thf ( 0.5 ml ) , pre - generated imine catalyst system ( 0.075 mmol of each initial component a - d and 1 - 4 , except for the omitted building block and the replacement compound a or 1 , of which 0.15 mmol was added ) .
duplicate experiments ; for further experimental details and kinetic plots , see the supporting information . to minimize the number of experiments required to identify the active components in the mixture , a dynamic deconvolution scheme was devised , the results of which are shown in figure 2 b. equimolar amounts of the amine and aldehyde species were generally required , because the formed imines were inert under mbh conditions even in the presence of thioureas . hence , deconvolution could be efficiently accomplished through selective replacement of the evaluated component by an equivalent amount of a reference compound ( a for amines , 1 for aldehydes ) . initial rates were then measured to fully correlate systemic catalytic activity with changes in system composition upon component replacement . replacement of potentially active components by inactive species would lead to retarded rates of the investigated reaction , compared with the complete system with all functionalities present ( the reference bar in figure 2 b ) . conversely , removal of a component that is detrimental to catalytic activity should give enhanced initial rates . as can be seen from figure 2 b , replacement of the dimethylamino - containing component 2 gave a slight rate increase . a potential explanation for this observation can be the systemic effects of bifunctionality in the catalyst system . assuming one or more optimal combinations of nucleophile and h - bond donor , a scenario , in which pairing of an inactive component with a potentially active species would produce a bifunctional catalyst that exhibits low activity , can be envisaged . if this pairing would be thermodynamically more preferred than pairing of two active components , then removal of the inactive component would lead to re - equilibration in favor of the more active catalyst combination and thus increased rates . this scenario may be well applicable to the case of component 2 . however , removal of diphenylphosphine - containing aldehyde 3 led to complete loss of catalytic activity , implying that the highly nucleophilic phosphine was the only nucleophile in the system capable of catalyzing the reaction . in further support of this observation , imidazole - based aldehyde 4 showed almost no rate change when replaced . removal of the weaker h - bonding thiourea c provided the largest systemic effect , with the product formation rate decreasing by almost 30 % . replacement of the stronger h - bond donor b instead led to a rate increase , suggesting that b had deleterious effects on the catalysis . to evaluate the accuracy of the deconvolution predictions all linear combinations of the catalysts were synthesized in situ by direct condensation of the corresponding amine and aldehyde , and tested in single experiments . only the four reactions involving the imines resulting from aldehyde 3 showed any product formation after 24 h. these four catalysts were then synthesized and purified , giving bench - stable compounds that were subsequently tested in controlled single experiments . the results are summarized in figure 3 and are in accordance with the dynamic deconvolution results . compound c3 turned out to be the most active catalyst , with a 19 % yield of the mbh product 5 , compared to 15 % for b3 and only 3 % for a3 and d3 . yields of compound 5 in parallel catalyst - screening experiments . conditions : 0.1 mmol p - nitrobenzaldehyde , 0.3 mmol ethyl vinyl ketone , 0.02 mmol bifunctional catalyst , 0.5 ml thf , 200 mg 4 ms , 24 h , rt . 
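the deconvolution ranks components by initial rates rather than final yields ; a minimal sketch of how such a rate could be extracted from early - time conversion data is shown below ( the time points and conversions are invented for illustration , not taken from the paper ) .

```python
import numpy as np

# hypothetical early-time data: minutes vs. fraction of adduct 5 formed (invented values)
t_min = np.array([0, 30, 60, 90, 120], dtype=float)
conversion = np.array([0.000, 0.012, 0.023, 0.036, 0.047])

# initial rate from a linear least-squares fit through the early, quasi-linear regime
slope, intercept = np.polyfit(t_min, conversion, 1)
print(f"initial rate ~ {slope:.2e} fraction/min")
```

comparing such slopes for the complete system and for each component - replaced system gives the kind of bar chart sketched in figure 2 b .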
the relatively high catalytic ability of b3 was initially surprising , because the system experiments actually predicted the compound to be detrimental to catalysis . however , subsequent experiments showed that b3 was highly unselective , with formation of large amounts of byproducts . furthermore , product 5 was shown to be unstable in the presence of b3 , and decomposed over time . these effects are an example of why care has to be taken in the collective screening of catalyst mixtures , because simple determination of the yield of 5 upon completed reaction would not lead to accurate predictions of the optimal catalyst activities . however , this study has showcased that kinetic measurements of initial rates are a possible way to measure systemic activities of catalyst mixtures . although c3 is by no means a state - of - the - art catalyst activity - wise , these results provide compelling evidence that the deconvolution methodology has accurately predicted the most active catalyst from a dynamic system . this protocol seems to be highly suited for detecting components crucial for activity , but it can also differentiate between less important functional groups that still contribute to the catalysis in the system . the method is simple and straightforward , and allows one - pot synthesis and subsequent screening of well - defined , covalently linked bifunctional organocatalysts without the need for separation , purification , and characterization of each individual molecule . the small model system investigated in this study is easily amenable to expansion , and the deconvolution protocol would be expected to increase further in efficiency with larger systems . furthermore , considering the range of dynamic covalent linkages developed in recent years , a wide range of potential dynamic catalyst architectures could be envisaged . having shown that the dynamic covalent chemistry enabled accelerated activity screening , we turned to investigating the behavior of the dynamic bifunctional catalyst c3 in more detail . when the mbh reaction was performed with 20 % loading of c3 , a yield of 87 % was obtained . also , c3 could efficiently catalyze an aza - mbh reaction with the highly electrophilic phenyl n - tosyl imine 6 to give aza - mbh adduct 7 in a very good 85 % yield over 72 h ( scheme 4 ) . conditions : 0.2 mmol aldehyde / imine 6 , 0.6 mmol ethyl vinyl ketone , 0.04 mmol c3 , 4 å ms ( 100 mg ) , 1.0 ml thf , n2 . furthermore , we were interested in investigating if the dynamic covalent bond could be utilized to modulate the mbh activity . running the reaction with only amine c predictably only led to imine formation with p - nitrobenzaldehyde , but more surprisingly , utilizing aldehyde 3 as the sole catalyst led to almost no product and quick decomposition ( table 1 ) . when adding c and 3 together , the mbh reaction proceeded with very low selectivity and yield , with decomposition of the aldehyde presumably occurring over mbh adduct formation . however , when c and 3 were pre - stirred with 4 å ms overnight , c3 was formed in quantitative yield , and the corresponding mbh reaction proceeded readily and selectively .
conversely , pre - stirring four equivalents of h2o with c3 followed by reagent addition again produced almost no product formation , because the thiourea seemed to have catalyzed the partial hydrolysis of the imine back to the unfavorable aldehyde . these results indicate that the dynamic bifunctional organocatalysts might be utilized as primitive switches , especially given the discovered self - modifying capabilities of this class of catalysts . tunable catalytic activity for c3 is summarized in table 1 . [ a ] conditions : 0.1 mmol p - nitrobenzaldehyde , 0.3 mmol ethyl vinyl ketone , 0.02 mmol catalyst , 0.5 ml thf , n2 . [ b ] indicated by 1h nmr spectroscopy after 7 h. [ c ] with 0.2 mmol h2o . the inclusion of a dynamic imine bond , as well as a transimination catalyst , into the same structure also opens further interesting possibilities . for the catalyst screening , the dynamic system was locked during the entire catalytic event to maintain accuracy in reaction kinetics measurements . however , it is also straightforward to unlock the dynamic system and allow living dynamic catalyst behavior , in which the catalyst structure is continuously changing during the reaction . in theory , organocatalysts capable of in situ error correction of their own molecular architecture could then be envisaged . a new class of dynamic bifunctional catalysts capable of catalyzing modifications of their own constitution was developed , and it was showcased how this property allows one - pot synthesis and evaluation of large systems of catalysts . the methodology uncovered a relatively effective catalyst for the morita - baylis - hillman reaction , and catalyst effectiveness could be regulated through manipulations of the dynamic covalent bond . dcc is integral for the screening approach , because it enables a deconvolution strategy that rapidly identifies the system components that contribute most to catalytic activity . the dynamic imine linkage allows proofreading of the dynamic system , with the reversibility ensuring a uniform catalyst distribution . the methodology can be utilized for catalyst discovery , and the obtained dynamic bifunctional scaffolds exhibit the potential for use as adaptable organocatalysts . further investigations on the screening methodology and the self - modifying ability of the dynamic catalysts are currently in progress . aldehydes and amines ( 0.075 mmol each ) were dissolved in anhydrous thf ( 0.5 ml ) in an eppendorf vial , and the solution was transferred to a dry reaction vial containing pre - activated 4 å ms ( 300 mg ) under n2 . the mixture was stirred at room temperature for 20 h , after which time the equilibrated system was obtained . tests for thiourea system equilibration were performed ( see the supporting information ) , showing that the systems were at equilibrium after condensation . afterwards , p - nitrobenzaldehyde ( 18.1 mg , 0.12 mmol ) in anhydrous thf ( 0.120 ml ) was added under n2 , followed by addition of ethyl vinyl ketone ( 23.9 µl , 20.8 mg , 0.24 mmol ) . an aliquot of the reaction mixture ( 30.0 µl ) was withdrawn and added to 0.550 ml cdcl3 in an nmr tube , with phsime3 ( 0.020 µl / ml cdcl3 ) as internal standard . nmr measurements were performed within 5 min , although control experiments indicated that the aliquot composition was stable for several hours in anhydrous cdcl3 . product formation was monitored by integrating the characteristic peaks at δ = 5.66 and 6.00 ppm and comparing to the internal standard .
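a worked example of the quantification step against the internal standard ; all numbers below ( integrals and the assumed phsime3 concentration ) are invented for illustration only .

```python
# hypothetical worked example of quantification against an internal standard
# (integral values and standard concentration are invented, not taken from the paper)
std_conc_mM = 5.0              # assumed PhSiMe3 concentration in the NMR sample
std_protons = 9                # Si(CH3)3
product_protons_per_peak = 1   # each vinylic peak (5.66 / 6.00 ppm) integrates for 1H

integral_std = 1.00            # normalized integral of the standard
integral_product = 0.22        # per-peak product integral

# concentration of adduct 5 relative to the standard (per-proton comparison)
product_conc_mM = std_conc_mM * (integral_product / product_protons_per_peak) / (integral_std / std_protons)
print(f"[5] ~ {product_conc_mM:.1f} mM")
```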
the first example of a bifunctional organocatalyst assembled through dynamic covalent chemistry ( dcc ) is described . the catalyst is based on reversible imine chemistry and can catalyze the morita baylis hillman ( mbh ) reaction of enones with aldehydes or n - tosyl imines . furthermore , these dynamic catalysts were shown to be optimizable through a systemic screening approach , in which large mixtures of catalyst structures were generated , and the optimal catalyst could be directly identified by using dynamic deconvolution . this strategy allowed one - pot synthesis and in situ evaluation of several potential catalysts without the need to separate , characterize , and purify each individual structure . the systems were furthermore shown to catalyze and re - equilibrate their own formation through a previously unknown thiourea - catalyzed transimination process .
Introduction Results and Discussion Conclusion Experimental Section Experimental procedure for dynamic system generation Kinetic analysis of Morita-Baylis-Hillman reactions with dynamic systems catalysis Supporting Information
PMC4681456
f10.7 , ap , and dst replicate time series of radiative cooling by nitric oxide ; quantified relative roles of solar irradiance , geomagnetism in radiative cooling ; establish a new index and extend record of thermospheric cooling back 70 years . the climate of the thermosphere is controlled in part by cooling to space driven by infrared radiation from carbon dioxide ( co2 , 15 µm ) , nitric oxide ( no , 5.3 µm ) , and atomic oxygen ( o , 63 µm ) . the sounding of the atmosphere using broadband emission radiometry ( saber ) instrument [ russell et al . , 1999 ] on the nasa thermosphere - ionosphere - mesosphere energetics and dynamics ( timed ) satellite has been measuring infrared cooling from co2 and no in the thermosphere since january 2002 [ mlynczak et al . ] . these data provide integral constraints on the energy budget and climate of the atmosphere above 100 km . in this paper we examine the saber record of no cooling . physically , changes in no emission are due to changes in temperature , atomic oxygen , and the no density . these physical changes , however , are driven by changes in solar irradiance and changes in geomagnetic conditions . we will show that the 13 year time series of no cooling derived from saber can be accurately fit with a multiple linear regression using standard solar and geomagnetic indices . this fit enables several fundamental properties of the no cooling to be determined , including the relative importance of solar ultraviolet irradiance and geomagnetic conditions and their variability with time . in addition , the time series of solar and geomagnetic indices extends back to 1947 , allowing a reconstruction of thermospheric cooling by no back in time nearly 70 years . this reconstruction then provides a long - term time series of an integral radiative constraint on thermospheric climate that can be used to test climate models . the test can be done in two ways : first , validating the overall no radiative cooling time series , and second , validating the relative roles of solar and geomagnetic effects in determining the total cooling over time over seven very different solar cycles . previously , lu et al . showed a high correlation coefficient ( 0.89 ) between f10.7 , the kp index , and the daily global no power for 7 years of saber data . this paper extends that work to short- and long - term climate timescales at much higher accuracy ( 0.985 correlation coefficient ) with simpler mathematical expressions for the linear regression fits . the results presented below show that solar and geomagnetic effects jointly determine the radiative cooling by no . we therefore propose a new index , the thermosphere climate index ( tci ) , based on the results herein . the tci can be used to assess the general state of the thermosphere because it reflects the main processes that control a key radiative cooling element of thermospheric climate . the time series of radiative cooling in the thermosphere as measured by the saber instrument on the nasa timed satellite has previously been described in detail [ mlynczak et al . , 2014 , and references therein ] . we specifically look here at the daily global infrared power ( w ) radiated by the no molecule . this parameter is the total amount of energy radiated on a daily , global basis due to infrared emission from the no molecule at 5.3 µm .
the no power is derived by integrating with respect to altitude each vertical profile of radiative cooling ( w / m3 ) due to no and then integrating this with respect to latitude ( area ) and longitude . approximately 1500 vertical profiles of radiative cooling per day go into the global calculation . at this time over 4750 days of data comprise the time series of no infrared power and radiative cooling . the data are observed to exhibit large day - to - day variability associated with geomagnetic effects and long - term variability associated with changes in the uv output over the 11 year solar cycle . as this paper focuses on thermospheric climate , we use the daily global power radiated by no and construct a time series of the 60 day running mean of the no power . sixty days is chosen as this is the time required for the timed satellite to sample all local times . there is a strong 60 day period in the no power [ mlynczak et al . , 2008 ] , implying a strong dependence of the no power on local time that is very repetitive . the local time variation is due to tidal effects in the no cooling [ oberheide et al . ] . the 60 day running mean gives a consistent average of the no power over all local times for each reported point in the time series . by doing so , we avoid potential biases in the no power time series due to improper sampling of local time variability . for purposes of the multiple linear regression fits , we also compute the 60 day running means of the f10.7 , ap , and dst indices . f10.7 is a commonly used proxy for solar uv and euv irradiance and its variation . figure 1 shows the sixty day running means of no power , ap index , dst index , and f10.7 index from january 2002 to march 2015 . visual correlations between the ap , f10.7 , and dst indices and the no power are evident upon examination of figure 1 . these strongly suggest that the no power time series can be fit with a multiple linear regression involving these three standard solar and geomagnetic indices . the integrated power ( area under each curve ) from january 2002 to january 2015 agrees to better than 2 ppm . the inclusion of dst in addition to ap and f10.7 was found to slightly improve the agreement in regions where there is a marked peak in the no power ; without dst the fit is slightly degraded near these peaks . figure 2 shows the sixty day running mean of the daily global radiated power from nitric oxide observed by saber ( blue curve ) and the multiple linear regression fit using the 60 day running means of f10.7 , ap , and dst . the fit shown in figure 2 is remarkable in the sense that the complex photochemical and geomagnetic energetic processes that ultimately lead to thermospheric infrared cooling can be represented so accurately by three standard solar and geomagnetic indices . this allows extension of the fit back in time with the extant databases of the three standard indices . both f10.7 and ap are available back to 1947 , and dst is available back to 1957 . from these we can construct a time history of no cooling back nearly 70 years and covering almost seven solar cycles , from the peak of solar cycle ( sc ) 18 to the peak of sc 24 today . figure 3 ( top ) shows the reconstruction of the thermospheric no power , which will be referred to as the thermosphere climate index ( tci ) , as discussed in the next section . the blue curve is the reconstruction back to 1957 using ap , f10.7 , and dst . figure 3 ( middle ) and figure 3 ( bottom ) are the 60 day running means of ap and f10.7 ( respectively ) used in the reconstruction back to 1947 .
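a minimal sketch of the two bookkeeping steps described above : integrating volumetric cooling - rate profiles up to a daily global power , then smoothing with a 60 day running mean . the arrays are random placeholders , not saber data , and the equal - area weighting is a deliberate simplification .

```python
import numpy as np

# placeholder profiles: ~1500 per day, volumetric cooling rate in W/m^3 on an altitude grid
alt_m = np.linspace(100e3, 250e3, 151)                 # altitude grid [m]
profiles = np.random.rand(1500, alt_m.size) * 1e-9     # invented cooling rates [W/m^3]
weights = np.full(1500, 5.1e14 / 1500)                 # crude equal split of Earth's area [m^2]

col = np.trapz(profiles, alt_m, axis=1)                # column-integrated cooling [W/m^2]
daily_power_W = np.sum(col * weights)                  # daily global power [W]

def running_mean(x, window=60):
    """simple 60-point (60-day) running mean of a daily series."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="valid")
```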
from this figure , we can see several interesting features about the time evolution of radiative cooling by no . first , the largest radiative cooling occurred back in the late 1950s during sc 19 . the peak no emission briefly exceeded 4 10 w , a level that was reached only one other time in the early 1990s near the peak of sc 22 . the large no power associated with sc 19 was followed by a much weaker sc 20 in which the peak no power was roughly half that of its predecessor . the minimum in no power during the prolonged minimum of sc 23 in 2008 - 2009 is the lowest power value in the time series . in addition , although sc 24 is yet to complete , the no power during the peak of sc 24 is the smallest of any prior peak in the reconstructed time series . given the fundamental role no plays in the energy budget of the thermosphere , the time series shown in figure 3 provides a long - term record of an integral constraint on the energy budget of the atmosphere above 100 km . as such , the time series can be used to test upper atmosphere climate models over a variety of solar and geomagnetic conditions of the past 70 years . figure 3 ( top ) presents the time series of the 60 day running mean of the no power extended back to 1947 , which we now refer to as the thermosphere climate index ( tci ) . the tci is constructed from 1957 to the present day using extant databases of f10.7 , ap , and dst , and from 1947 to 1957 using f10.7 and ap only . figure 3 ( middle ) and figure 3 ( bottom ) show the corresponding 60 day averages of ap and f10.7 used to construct the index . the tci represents a fundamental integral constraint on the climate system of the thermosphere and provides a time series for testing upper atmosphere climate models over nearly seven complete solar cycles . the long - term time series shown in figure 3 can be separated into its solar uv and geomagnetic components by using the coefficients derived for the fit shown in figure 2 . the expression for the fit shown in figure 2 is : no power = a0 + a1 f10.7 + a2 ap + a3 dst ( equation 1 ) , where each term is the 60 day running mean of that parameter . the no power is in units of 10w , ap and dst are in nanotesla , and f10.7 is in solar flux units ( sfu , 1 sfu = 10-22 w m-2 hz-1 ) . the coefficients a0 , a1 , a2 , and a3 are , respectively , 1.0271 , 1.5553 10 , 4.0665 10 , and 8.2360 10 . the fraction of no cooling due to deposition of solar irradiance is ( a1 f10.7)/(a1 f10.7 + a2 ap + a3 dst ) , and the fraction due to geomagnetic effects is ( a2 ap + a3 dst)/(a1 f10.7 + a2 ap + a3 dst ) . this immediately provides an assessment of the relative roles of solar and geomagnetic processes that ultimately lead to radiative cooling by no . figure 4 shows the percentage of the radiative cooling in figure 3 due to solar irradiance ( red curve ) and geomagnetism ( blue curve ) , obtained from these expressions . over the nearly 70 year record , about 70% of the radiative cooling is due to energy deposition of solar uv radiation and about 30% is due to geomagnetic processes . overlaid in figure 4 in grey is the 60 day running mean of the daily sunspot number . from this we can see that during solar maximum conditions ( as indicated by the peak in sunspots ) solar irradiance is responsible for up to 90% of the radiative cooling by no . however , during solar minimum conditions , geomagnetic processes account for more than 40% of the cooling and , briefly on several occasions , are essentially comparable to solar uv . relative contributions of solar irradiance ( red ) and geomagnetic processes ( blue ) to the variability of the no cooling .
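the split into solar and geomagnetic contributions follows directly from equation ( 1 ) ; the sketch below uses round placeholder coefficients ( the published values lost their exponents and signs in this copy , so a0 - a3 here are not the paper's numbers ) .

```python
# placeholder coefficients: illustrative round numbers only, not the published a0-a3
a0, a1, a2, a3 = -1.0, 1.6e-2, 4.1e-2, -8.2e-3

def no_power(f107, ap, dst):
    """equation (1): 60-day-mean NO power from 60-day-mean indices (placeholder units)."""
    return a0 + a1 * f107 + a2 * ap + a3 * dst

def solar_fraction(f107, ap, dst):
    """fraction of the variable drive attributable to solar UV (rest is geomagnetic)."""
    drive = a1 * f107 + a2 * ap + a3 * dst
    return a1 * f107 / drive

print(no_power(150.0, 12.0, -15.0), solar_fraction(150.0, 12.0, -15.0))
```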
the grey curve is the 60 day running mean of the daily sunspot number . solar irradiance is the dominant mechanism for energy deposition resulting in no cooling at solar maximum , while geomagnetic processes are much more important during solar minimum . solar and geomagnetic indices are individually very useful for gauging levels of solar and geomagnetic variability and activity . however , they do not individually provide information on the state of the atmosphere in response to that variability and activity . we therefore propose a new index , the thermosphere climate index ( tci ) that provides a quantitative measure of the state of the thermosphere . the tci would be the 60 day running mean of the no power computed from the fit of the 60 day running means of f10.7 , ap , and dst to the saber - observed no power as given in equation 1 . we suggest that the new index is critical because the thermosphere response in radiative cooling due to no and to carbon dioxide ( co2 ) has been shown to occur well after the maximum in sunspot number during solar cycle 24 , as we will document in a forthcoming publication . the proposed tci combines both solar irradiance and geomagnetism into one index to replicate accurately a key parameter of thermospheric climate . we further suggest that upon further modeling , the tci could be given adjectival descriptors to describe the thermal state of the thermosphere , such as those applied to the kp index and geomagnetic storms . the multiple linear regression fit of f10.7 , ap , and dst to the observed saber no power tacitly assumes that these parameters adequately capture the processes associated with solar uv and geomagnetic effects that ultimately result in infrared cooling to space by no . these drivers ultimately originate with the sun as uv photons and solar wind particles . however , we point out that there is a slow , long - term driver internal to the earth system associated with the continual buildup of co2 . roble and dickinson predicted that the continued buildup of co2 would lead to a long - term cooling of the thermosphere . thus , it is to be expected that over time , there would be a slow decrease in the no emission as the temperature of the lower thermosphere decreases . roble and dickinson predict decreases in lower thermospheric temperature ranging from 5 k near 100 km to 35 k near 200 km for a doubling of the co2 concentration . at 130 km , the peak altitude of no emission , a decrease of 15 k is predicted . for a nominal temperature of 525 k at 130 km , a 15 k cooling would result in a reduction of no emission of about 15% for doubled co2 amounts . we estimate that since 1947 the co2 in the atmosphere has increased by approximately 100 ppmv ( annual rate of 1.5 ppmv ) , which is roughly one third the amount expected for co2 doubling since preindustrial times . thus , the decrease in no emission since 1947 due to co2 increase would be about 5% , assuming all other temperature - dependent processes related to no chemistry ( of which there are several ) are essentially constant . the long - term effects of thermospheric cooling due to carbon dioxide increase on no cooling merit further investigation . a key time series of the global infrared power radiated by no from the thermosphere can be fit quite accurately with a multiple linear regression of three solar and geomagnetic indices . 
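the ~15 % ( doubled co2 ) and ~5 % ( since 1947 ) figures can be reproduced with a simple worked estimate , under the assumption - not stated in the text - that the 5.3 µm emission scales roughly as a boltzmann - type factor exp(-e / kt ) with e / k of about 2700 k for the emitting no vibrational level ( ~1876 cm-1 ) :

```latex
% worked estimate, assuming I_{NO} \propto \exp(-E/kT) with E/k \approx 1.4388 \times 1876 \approx 2699\ \mathrm{K}
\frac{I(510\,\mathrm{K})}{I(525\,\mathrm{K})}
  = \exp\!\left[-2699\left(\frac{1}{510}-\frac{1}{525}\right)\right]
  \approx 0.86 \quad\Rightarrow\quad \sim 15\%\ \text{reduction for a 15 K cooling (doubled CO$_2$),}
\qquad
\frac{I(520\,\mathrm{K})}{I(525\,\mathrm{K})} \approx 0.95 \quad\Rightarrow\quad \sim 5\%\ \text{for the}\ \sim 5\ \mathrm{K}\ \text{cooling since 1947.}
```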
this has enabled reconstruction of the no power time series back to 1947 using extant databases of the f10.7 solar radio flux , the ap index , and the dst index . the reconstructed time series enables tests of upper atmosphere climate models over the last six solar cycles . the multiple regression fit has also enabled the relative roles of solar irradiance and geomagnetic processes in driving the no cooling to be determined . in general , solar uv irradiance is the primary factor that determines the no cooling , particularly at solar maximum . during solar minimum conditions , geomagnetic processes account for a substantially larger share of the cooling . the proposed thermosphere climate index is a new metric that accurately replicates the state of the thermosphere . its main advantage is that it provides a key measure of the state of the thermosphere that is not captured by other individual metrics . this is an important point as individual metrics such as sunspot number do not adequately reflect all of the processes which cause the atmosphere to respond to solar variability .
infrared radiation from nitric oxide ( no ) at 5.3 µm is a primary mechanism by which the thermosphere cools to space . the sounding of the atmosphere using broadband emission radiometry ( saber ) instrument on the nasa thermosphere - ionosphere - mesosphere energetics and dynamics satellite has been measuring thermospheric cooling by no for over 13 years . in this letter we show that the saber time series of globally integrated infrared power ( watts ) radiated by no can be replicated accurately by a multiple linear regression fit using the f10.7 , ap , and dst indices . this allows reconstruction of the no power time series back nearly 70 years with extant databases of these indices . the relative roles of solar ultraviolet and geomagnetic processes in determining the no cooling are derived and shown to vary significantly over the solar cycle . the no power is a fundamental integral constraint on the thermospheric climate , and the time series presented here can be used to test upper atmosphere models over seven different solar cycles . key points : f10.7 , ap , and dst replicate time series of radiative cooling by nitric oxide ; quantified relative roles of solar irradiance , geomagnetism in radiative cooling ; establish a new index and extend record of thermospheric cooling back 70 years
Key Points 1 Introduction 2 Methodology 3 A Proposed TCI 4 The Role of CO2 5 Summary and Conclusion
PMC3353616
laparoscopic liver surgery is now being performed by select groups worldwide . the louisville consensus conference on laparoscopic liver surgery suggested a role for laparoscopic liver resections for lesions in segment 2 to 6 . we herein report a male patient undergoing laparoscopic liver resection for a giant right lobe liver hemangioma . a 45-year - old male patient with no known medical risk factors presented to the outpatient department with complaints of right upper quadrant pain restricting his regular activity . a multidetector computerized tomography ( mdct ) of the abdomen showed a giant hemangioma ( 18 cm in greatest diameter ) arising from segments 5 and 6 of the liver [ figure 1 ] . the feeding vessel to the hemangioma was from the anterior branch of the right hepatic artery ( rha ) . the right anterior portal pedicle ( rapp ) was seen abutting the hemangioma supero - medially . ( a ) computerized tomography showing the giant hemangioma arising from segments 5 and 6 of the liver ( a reconstructed image showing the extent of the hemangioma ) , ( b ) arterial phase showing the main feeding artery from the anterior branch of the right hepatic artery , ( c ) portal venous phase showing the relation of the anterior portal pedicle to the hemangioma . an open entry was achieved with a 10-mm port at the umbilicus for laparoscopic vision and four additional ports were placed [ figure 2 ] . at laparoscopy , a giant hemangioma of 18 cm x 12 cm was seen to arise from segments 5 and 6 of the liver , reaching up to the iliac fossa on the right side with displacement of the gall bladder medially to midline , in line with the falciform ligament [ figure 3a ] . the gall bladder fundal retraction was achieved using a grasper through a port in the right midclavicular line . c , camera port ( 10 mm ) ; rhw , right hand working port ( 12 mm ) ; lhw , left hand working port ( 5 mm ) ; lr , liver retractor port ( 10 mm ) ; gbf , gall bladder fundal retraction port ( 5 mm ) . ( a ) laparoscopic view of the giant hemangioma , ( b ) laparoscopic image showing the bulldog clamp across the anterior branch of the right hepatic artery , ( c ) line of demarcation on the liver after clamping the anterior branch of the right hepatic artery , ( d ) laparoscopic view of the enucleation plane . the calot 's triangle was dissected and the cystic artery was clipped and divided . the right anterior branch was selectively dissected , looped and occluded with a bulldog clamp [ figure 3b ] . the line of demarcation of the anterior segment became clearly evident and was associated with shrinkage of the hemangioma by one - third of its size [ figure 3c ] . resection was initially performed in the plane of enucleation medially [ figure 3d ] . during the course of this dissection , cranially , a transverse line of transection was marked at the summit level of the hemangioma . the transverse transection plane thus chosen was based on pre - operative planning from the reconstruction from mdct evaluation and intraoperative ultrasound ( ious ) . the hepatic venous tributary running from segment 6 crossed this line as confirmed by ious . the 5-mm port in the epigastrium was exchanged for a 12-mm port to facilitate the use of the 4 habib laparoscopic probe . a radiofrequency generator ( california , usa ) at a 60 watts setting was used during liver parenchymal transection with the laparoscopic 4 habib probe , choosing a 2 cm depth of application of the rf prongs along the line of transection , with parenchymal division performed with straight scissors [ figure 4a ] .
to prevent injury to the retroperitoneal structures , the posterior 1 cm depth of the parenchymal division was achieved by two firings of an endo gia stapler with 60 mm , white reloads ( autosuture ) [ figure 4b ] . ( a ) laparoscopic 4 habib probe transection in progress , ( b ) final transection surface , ( c ) morselled specimen . an indigenously prepared endobag ( urobag ) cut to appropriate size with a prolene 2 - 0 suture placed as a pursestring along its open end was then passed into the abdomen through the 12-mm port . the bag was placed in the right upper abdomen and two 2 - 0 prolene , interrupted sutures were placed on the anterior leaf of the open end of the bag and sutured to the anterior wall of the abdomen . one grasping forceps held the posterior leaf of the open end of the bag . this suturing technique facilitated bagging the large specimen comfortably . the cystic duct was clipped , the gallbladder was dissected off from the liver bed , placed in the same endobag and the pursestring suture at the mouth of the bag was tightened . under laparoscopic guidance , the pursestring suture was held and the mouth of the bag was delivered through the umbilical port site . the umbilical port was extended to 3 cm . re - laparoscopy was performed , hemostasis was ensured , a subhepatic 24 f tube drain was placed and ports were withdrawn . the patient was started orally the same evening and discharged from hospital on the third post - operative day .
the role of laparoscopic liver resection for liver tumors is unclear at present.[25 ] the louisville consensus statement suggests laparoscopic liver resection as an option for lesions in the left lateral and inferior segments of the right lobe . the concerns with regard to laparoscopic liver resections are many . thirdly , there is a lack of tactile feedback , which is critical in evaluating the margin of resection , particularly in malignant tumours . fourthly , the ideal technique for parenchymal transection during laparoscopic liver resection is not yet standardized . lastly , retrieval of a large specimen may require a large incision , which defeats the primary objective of keeping the procedure minimally invasive . lesions reaching the hilar structures , in particular , pose technical problems with the laparoscopic approach . we elected to perform the resection laparoscopically in our patient because of a suitable location of the tumour , namely segments 5 and 6 , with a large exophytic component . also , a good triphasic mdct with reconstruction provided excellent anatomical delineation , facilitating appropriate planning of vascular control and line of parenchymal transection . enucleation has been reported for smaller hemangiomas . in giant hemangiomas such as in this report , the cross - sectional area for enucleation is likely to be large and visualization of the entire enucleation plane and achieving blood - less dissection could pose problems . although control of the feeding vessel with subsequent shrinkage of the tumour could facilitate enucleation , choosing a transverse line of transection cranially in our patient kept the transection surface to the minimum .
a blood - less transection was achieved medially in the enucleation plane and cranio - laterally with the laparoscopic 4 habib probe in our patient . others have reported on the use of a laparoscopic habib probe for blood - less liver parenchymal transection . our technique of bagging the specimen is very suitable for solid organs , particularly large specimens . we have used the same technique for laparoscopic retrieval of other solid organs such as spleen or distal pancreas before . in our present report , because there was no concern of studying margins during histopathological examination , the specimen could be morselled and retrieved . in conclusion , laparoscopic resection is feasible in giant liver hemangiomas located in the inferior segments of the right lobe of the liver . laparoscopic 4 habib probe is an important tool in the armamentarium of liver transection methods .
experience with laparoscopic liver resections is limited . laparoscopic resection of a variety of liver lesions has been reported and is considered appropriate for lesions in the left lateral segment and inferior segments of the right lobe . herein , we report a 52-year - old male patient who underwent a laparoscopic resection of giant liver hemangioma with the use of a laparoscopic 4 habib probe .
INTRODUCTION CASE REPORT Step 1: Port placement and retraction Step 2: Taking control of the anterior branch of the right hepatic artery Step 3: Dissection of the medial part of the hemangioma in an enucleation plane Step 4: Transverse line of transection (cranially) using a laparoscopic Habib probe Step 5: Bagging the specimen, morsellation and retrieval DISCUSSION
PMC3880060
crystal engineering , the rational design of crystalline molecular solids , remains an important challenge for chemistry . crystal structure prediction is not yet feasible in all cases , and it is therefore useful to develop motifs which allow families of structures to be generated in a reliable fashion . there is special interest in motifs which lead to microporous ( or nanoporous ) crystals with voids on the scale of 0.5 - 2 nm , sufficient to accommodate molecular guests . such materials offer various functionalities , such as inclusion and storage of gases and other guest molecules , the enhancement of optical properties of included guests , the use of pores as reaction vessels to promote the formation of desired products , and the separation of mixtures including enantiomers from racemates . at the same time , crystallizing species will usually attempt to maximize contact with each other , thus minimizing any void space . to counter this trend is not straightforward and will often require the construction of specially shaped rigid components or units capable of strong and directional interspecies interactions . successful approaches to nanoporous crystal engineering may be divided into two categories . on the one hand are porous coordination polymers ( pcps ) or metal organic frameworks ( mofs ) , formed by combining metal ions with rigid multivalent ligands . on the other are purely organic systems , which rely on noncovalent bonding to regulate crystal packing . the organic systems may be further divided into intrinsically and extrinsically porous molecular crystals . intrinsically porous crystals are based on molecules with predefined open spaces ( macrocycles , cages etc . ) , whereas extrinsic porosity results simply from crystal packing . of these three approaches ( hybrid , intrinsic / extrinsic organic ) , the latter is probably the most challenging as open frameworks must be maintained without the help of powerful directional coordination bonds or pre - existing cavities . some solutions have emerged through serendipity , such as the classic urea inclusion compounds . there are few motifs which generate families of readily accessible nanoporous crystals , allowing tuning of void dimensions and material properties . the work described in this paper is founded on a serendipitous discovery made a few years ago in the course of our program on anion - binding cholapods . these powerful receptors combine a rigid steroidal scaffold , derived from cholic acid 1 ( chart 1 ) , with various combinations of h - bond donor groups . most are reluctant to crystallize but a small subset , represented initially by 2 - 4 , were found to form needles from methyl acetate - water or acetone - water mixtures . all three were subjected to x - ray crystallography , with interesting results . despite the significant differences between 2 - 4 , the external similarity of the crystals was reflected in the internal structures ; the three were isomorphous , with almost identical unit cell dimensions and packing arrangements for the invariant steroidal cores . the packing involved the formation of helices with hexagonal symmetry ( space group = p61 ) , surrounding solvent - filled channels . the arrangement is illustrated in figure 1 , using tris - urea 3 as an example . individual steroid molecules bind to a single molecule of co - crystallized water through 5 h - bonds ( figure 1a ) and stack to form columns ( figure 1b ) . the columns then pack in a hexagonal arrangement to generate the solvent - filled pores .
the orientation of the steroids in the columns is such that the terminal groups ( methoxy and nhph ) face into the pores and largely determine the nature of the channel surface . effectively , the terminal groups can expand into the channel interior without affecting the packing of the columns which maintain the structure . the channels , moreover , are unusually wide . in the case of trifluoroacetamide 2 , the average diameter was found to be 16.4 å . the average diameter for 3 was only slightly less at 15.7 å , although the surfaces are more irregular ( figure 2 ) . there is thus substantial room , in principle , both for guest molecules and for terminal groups . preliminary experiments on 2 implied that guest exchange was possible , at least for certain solvents ( meoac , et2o , toluene ) . evacuation led to partial degradation ( evidenced by crazing ) , but the powder x - ray diffraction ( xrd ) pattern remained largely unchanged . figure 1 : ( a ) the steroid is solvated by a molecule of water which forms hydrogen bonds to all three urea groups . ( b ) molecules of 3 stack to form columns , running along the crystallographic c axis . representation as for ( a ) except that core steroidal atoms are colored blue and green in adjacent molecules , and water molecules are shown with thick bonds . ( c ) one column of steroids is highlighted using the coloring from ( b ) , with the och3 and nhph groups now in spacefilling mode . ( d ) a single channel sliced in half along the c axis , viewed in spacefilling mode . terminal och3 and nhph groups retain their coloring , other atoms near or at the internal surface are shown as light blue . ( e ) 3d schematic representation of a channel , showing helical arrangement of methyl groups and aromatic rings ( spheres and hexagons , respectively ) . figure 2 : interior surfaces for trifluoroacetamide 2 and tris - n - phenylurea 3 viewed along the c - axis . the surfaces were calculated using a 1.4 å probe . given the space available within the channels , it seemed likely that a wide range of analogues with a common bis-(n - phenylureido)steroidal core ( figure 3 ) would form crystals isostructural with 2 - 4 . variation should be feasible not only at the c3 substituent ( r in figure 3 ) but also at the c24 ester group ( r in figure 3 ) . if such analogues ( npsus ) were indeed isostructural , they should be able to form solid solutions ( organic alloys ) , greatly enlarging the range of systems available . since our original publication we have confirmed both of these possibilities . we have described a series of three npsus with aromatic groups in r , and the interesting feature of water wires in the channels , and also a range of npsu - based organic alloys . herein we provide a more complete description of our work surveying the scope and properties of npsus , drawing on 25 examples which have been characterized by x - ray crystallography . we show how the dimensions and shapes of the channels can be tuned , and how their chemical nature can be altered by the introduction of functional groups ( including previously unreported alkene and aldehyde functionality ) . we also report , for the first time , that npsus can be porous in the strictest sense , stable to evacuation and capable of gas adsorption . moreover we show that they can adsorb a remarkable range of guests , including organic dyes with molecular weights up to 300 and even the c30 hydrocarbon squalene ( mw = 410 ) .
the core bis-(n - phenylureido)steroidal unit maintains the p61 nanoporous structure , while groups r and r control the size and nature of the pore . the first are esters of 3,7,12-tris-(n - phenylureido)-5-cholanoic acid 6 and include 3 as well as the 14 variants 7 - 20 represented in chart 2 . ester groups r were chosen for variation in size ( and thus pore diameter ) and surface characteristics ( aliphatic vs aromatic vs fluorocarbon ) and also to showcase the potential for placing chromophores ( e.g. , azobenzenes ) , fluorophores ( e.g. , pyrenes ) , and reactive units ( e.g. , allyl groups ) in the channels . the second group are derivatives of methyl 3-amino-7,12-bis-(n - phenylureido)-5-cholanoate 5 , including trifluoroacetamide 2 , carbamate 4 , tris - ureas 21 - 26 , and amides 27 - 29 ( chart 3 ) . again the variable group ( r ) was used to change steric and surface properties and to introduce chromophores and functional groups . ( footnote a to chart 3 : although 29 is included here for convenience , it does not adopt the npsu crystal packing ; for further details see text . ) amine 5 is accessible from cholic acid 1 via a multistep but well - established route in 40% overall yield . tris - urea 3 may be prepared from 5 by treatment with phenyl isocyanate or more directly from 1 via methyl 3,7,12-triaminocholanoate . esters 7 - 20 ( chart 2 ) are available from 3 via equilibration with lithium alkoxide or hydrolysis to acid 6 followed by o - alkylation or carbodiimide - induced esterification . the derivatives in chart 3 may be prepared from 5 by treatment with an aryl isocyanate ( giving 21 - 26 ) or an acylating agent ( giving 27 - 29 ) . the preparations of 9 , 10 , 12 , 14 - 19 , 23 , 26 , and 29 have been reported previously . procedures for the remaining compounds in charts 2 and 3 are given in the supporting information . the steroids in charts 2 and 3 were crystallized from methyl acetate or acetone , to which small amounts of water had been added , through slow evaporation of the organic solvent . in most cases other polar solvents , such as methanol or ethanol , or nonpolar mixtures , such as chloroform - hexane , yielded oils or amorphous solids . all the compounds could be analyzed by single crystal x - ray diffraction ( scxrd ) , and with the single exception of 29 ( see below ) , all formed crystals with the p61 npsu packing . the structures of 2 - 4 , 14 , 16 , and 18 have been reported in communications , the remainder are described for the first time herein . as expected these show only minor variations , the differences between the steroids being accommodated by changes to the shape , diameter , and surface characteristics of the pores . unsurprisingly , given the open nature of the pore region , disorder in terminal groups r and ( especially ) r was fairly common , being present in 11 of 25 structures . however , in most cases the groups concerned were divided between just two positions , so that a reasonable model of the crystal ( for estimating pore volume etc . ) could be obtained by removing one of the two positions . this applied to 4 , 8 , 10 , 12 , 13 , and 24 ( disorder in r ) , and 22 ( disorder in r ) . in two cases , 15 and 19 , deleting one of the two possible positions did not give a viable structure . however , these crystals could be modeled successfully by assuming equal occupancy of both positions , on an alternating basis .
after editing where relevant , the smoothed solvent accessible surfaces and resulting guest - accessible volumes were calculated using materials studio , employing a probe of radius 1.2 . these values are given in table 1 , while images of the surfaces are available as supporting information . minimum pore diameters were estimated by repeating the calculation using probes of increasing size until the surface was no longer continuous . the resulting value is , effectively , the diameter of the largest sphere which can pass through the channel . images of selected structures viewed down the pores , with terminal groups shown in spacefilling mode , are shown in figure 4 ( compounds from chart 2 , varying ester group r ) and figure 5 ( compounds from chart 3 , varying c3 terminal group r ) . table 1 footnotes : guest - accessible volumes were obtained from the materials studio program employing a spherical probe of 1.2 radius ; calculations of total solvent accessible surfaces give higher values but include small voids outside the channel region . minimum diameters were estimated by calculating the smoothed solvent accessible surface using differing probe radii ( increments / decrements of 0.05 ) ; the value given is the diameter of the largest probe for which the calculation yields a continuous surface . these values are slightly smaller than those reported in ref ( 17 ) , due to a change in the method of calculation . for 15 and 19 , a model could be built using the assumption that r in neighboring molecules occupied alternating positions , and this model was used for the pore volume and diameter calculations . when the probe diameter is reduced to this value , voids are generated outside the channel region while a continuous pore surface has not yet appeared . mes disordered over two positions , one being removed before pore volume and diameter calculations . figure 4 caption : terminal groups r (= nhph ) and or are shown in space - filling mode , with r colored gold . figure 5 caption : terminal groups r and or (= ome ) are shown in space - filling mode , with or colored magenta . table 1 and figures 4 and 5 illustrate the wide variety of structural properties available via the npsu system . for example , starting at nearly 20% ( for 2 ) , the volume available in the pores can be tuned downward in small increments essentially to zero ( for 15 and 19 ) . indeed , by taking advantage of alloy formation , continuous variation should be possible with these compounds . unsurprisingly , pore volumes and diameters are generally determined by the size of the terminal groups , but more subtle effects are also in play . for example , in the series with varying or ( chart 2 , figure 4 ) , a 2-carbon spacer between the oxygen and an aromatic group tends to allow efficient packing of the aromatic surface against the side of the channels . thus , for pyrenyl derivative 16 , space remains down the center for hydrogen - bonded chains of water molecules . in contrast , a 1-carbon methylene spacer directs the aromatic group toward the center of the channel . in the case of pyrenyl derivative 15 , this results effectively in full occupation of the channel ; the calculated guest - accessible volume and minimum diameter are both close to zero . paradoxically , therefore , the larger terminal group ( in 16 ) leaves more space than the smaller group in 15 . the shape of the channel wall ( smooth vs corrugated ) is another feature which can be altered . as mentioned above , compounds for which r = ch2ch2ar ( e.g. , 14 , 16 , 18 ) tend to adopt structures in which the aromatic groups line the surfaces of the channels .
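the minimum pore diameters in table 1 are obtained by the probe - bracketing procedure described above . a minimal sketch of that logic is given below ; the is_continuous callable is a hypothetical placeholder for the materials studio surface calculation , and the starting radius and step mirror the values quoted in the table footnotes .

```python
def min_pore_diameter(is_continuous, r_start=1.2, step=0.05, r_max=10.0):
    """Estimate the minimum pore diameter (angstroms).

    `is_continuous(probe_radius)` stands in for the Materials Studio check of
    whether the smoothed solvent-accessible surface is still continuous for a
    spherical probe of the given radius."""
    r, largest_ok = r_start, None
    while r <= r_max:
        if is_continuous(r):
            largest_ok = r
            r += step          # increments of 0.05, as in the table footnote
        else:
            break
    # diameter of the largest probe that still passes through the channel
    return None if largest_ok is None else 2.0 * largest_ok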
the resulting pores are relatively smooth and cylindrical , as illustrated for 14 in figure 6 ( top ) . in other systems from chart 2 , the channel surface is presumably corrugated , but with random and/or flexible character due to disorder within the crystal . an example is provided by 13 , for which the naphthylmethyl group appears in two orientations , one roughly perpendicular and one more nearly parallel to the channel axis . well - defined corrugated pores may be accessed by placing extended substituents at r ( which is less prone to disorder ) . thus for both 24 ( r = azobenzene ) and 25 ( r = biphenyl ) the c3-substituent reaches well toward the c axis creating strongly asymmetric helical pores ( figure 6 , middle and bottom ) . this work also shows that the chemical nature of the pore walls can be subject to wide variation . the structures collected in figures 4 and 5 feature an alkenyl group c = c ( 8) , a helical strip of fluorocarbon surface ( 10 ) , an aldehyde ( 21 ) , a thioether ( 22 ) , a boc - protected amine ( 23 ) , and an iodobenzene ( 26 ) . as illustrated in figure 7 , all are positioned where they can interact with guest molecules and participate in reactions or noncovalent interactions . finally , crystallography of acetamide 29 showed that not every molecule defined by figure 3 adopts the p61 npsu structure . in this case a monoclinic ( p21 ) form obtained from methyl acetate / water was denoted 29 , and a tetragonal ( p432121 ) form which crystallized from acetone / water was denoted 29. the molecular units in the two forms are almost identical , and qualitatively different to those in the npsus ; in particular , the c3 substituent is positioned so that the nh group points inward , creating a binding site which accommodates two water molecules ( see figures s31 and s32 ) . in both crystals the packing is efficient , leaving no substantial voids ( see figures s57 and s58 ) . crystal structures of 14 , 24 , and 25 viewed perpendicular to the c axis . for the images on the left , the groups which dominate the channel surface ( or for 14 , r for 24 and 25 ) are highlighted in spacefilling mode . for the right - hand images , the smoothed solvent accessible surfaces have been added using materials studio , and the structures have then been sliced along the c - axis . space - filling representations of the channel regions in 8 , 10 , 21 , and 22 . the structures have been sliced along the c - axis to expose the pore interiors and are viewed roughly perpendicular to c and a ( see axes attached to 8) . conventional coloring is used for the distinctive groups in each structure ( or for 8 and 10 , r for 21 and 22 ) , the remaining atoms being shown as silver - blue . implies that the crystal is permeable , allowing exchange of small guest molecules , and that this process does not substantially affect the host framework . ideally , the crystals should also be able to survive the removal of all guest molecules without loss of structure and then show reversible gas adsorption to confirm porosity . as mentioned earlier , we previously demonstrated that trifluoroacetamide 2 satisfies , at least , the guest - exchange criterion . the results from evacuation were less clear - cut ; the powder xrd ( pxrd ) pattern remained essentially unchanged , but the crystals crazed and became opaque . most npsu crystals , especially those with 3-ureido substituents , showed no change in appearance on evacuation . 
nonetheless it was clearly desirable to establish the solvation state of a typical npsu , show that the solvent could be removed , and demonstrate that the resulting crystals were unchanged and capable of gas adsorption . we chose tris - n - phenylurea 3 for this study , as this compound is the most accessible npsu and has proved the most convenient for routine use . crystals of 3 were obtained as needles from acetone / water ( initial ratio 10:1 ) , after washing with acetone and air - drying . samples were then evacuated at room temperature and 100 c for 24 h. the three samples ( air - dried , rt evacuated , 100 c evacuated ) were then analyzed by h nmr in dmso , using a procedure which allowed the amount of background water to be measured and taken into account . the composition of the air - dried crystals was found to be 3:water : acetone = 1:3.8:0.2 . allowing for the single water molecule per steroid embedded in the channel wall , this implies that the pores are filled with 3 molecules of h2o per molecule of 3 , with a small amount of organic solvent also present . after evacuation at rt the composition was 3:water = 1:1 , implying that the channels are empty . after evacuation at 100 c for 24 h the ratio 3:water reduced slightly to 1:0.9 . this suggests some degradation , although microscopy and pxrd again showed no major changes . samples of 3 were also heated to 150 c and above , and in these cases clear signs of decomposition were observed both by microscopy ( loss of transparency ) and pxrd ( loss of diffraction peaks ) . having established that the pores could be evacuated without loss of crystallinity , we proceeded to confirm the permanent porosity of 3 using n2 gas adsorption measurements for a sample that had been heated under vacuum at 75 c for 9 h. surprisingly , the n2 adsorption predominately takes place at high relative pressures ( p / p > 0.7 ) , and there is significant hysteresis between the adsorption and desorption isotherms giving a type iv isotherm ( see e.g. , figure 8) . this hysteresis differs from that observed in mesoporous materials ( pore diameter > 20 ) that generally closes at lower relative pressures ( p / po 0.4 ) and which is related to pore evacuation involving capillary action . furthermore , the desorption isotherm falls below that of the adsorption isotherm at p / po 0.7 . desorption cycle is repeated and presumably reflect slow , nonequilibrium , kinetics of n2 adsorption . such slow kinetics is understandable if access to the pores is restricted to the relatively small number of openings located at the end of the long needle - shaped crystals , which are on average > 2 mm in length . we have previously shown that the pores in 3 are parallel to the long axis of the crystals . similar hysteresis was observed by tosi - pellenq et al . from the n2 isotherms of long ( 150 m ) microporous crystals of alpo4 - 5 , which also contain cylindrical channels ( 0.76 nm in diameter ) along the long axis of the crystals . in this case the fact that the desorption isotherm in figure 8 dips below the adsorption isotherm implies that evaporating nitrogen is lost more rapidly from the channels than gaseous n2 is readsorbed . this may relate to pressure differences between the interior and exterior of the crystals ; it is reasonable to suppose that when the crystals are compressed , inward gas transfer could be relatively slow , while internal pressure could expand the crystals and assist n2 efflux . 
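stepping back briefly to the solvent - content analysis described at the start of this section , the 1h nmr quantification amounts to scaling integrals by proton counts after subtracting the background water measured for the blank dmso . the sketch below is illustrative only ; the integrals are hypothetical values chosen to reproduce the reported 1 : 3.8 : 0.2 composition , and the choice of host reference signal is an assumption .

```python
def solvent_ratio(host_integral, host_protons,
                  water_integral, blank_water_integral,
                  acetone_integral):
    """Convert 1H NMR integrals into a host : water : acetone molar ratio.

    Background water from the blank DMSO-d6 is subtracted before each
    integral is scaled by the number of protons contributing to it."""
    host = host_integral / host_protons                      # e.g. a resolved aromatic multiplet
    water = (water_integral - blank_water_integral) / 2.0    # H2O has 2 protons
    acetone = acetone_integral / 6.0                         # acetone singlet, 6 protons
    return 1.0, round(water / host, 1), round(acetone / host, 1)

# hypothetical integrals chosen to reproduce the air-dried composition ~ 1 : 3.8 : 0.2
print(solvent_ratio(15.0, 15, 8.0, 0.4, 1.2))
```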
the possibility that the effects are due to collapse of the crystal structure during n2 analysis was discounted by confirming that the structure remained unchanged , as shown by scxrd of a crystal extracted from the sample of 3 used for n2 analysis . the bet surface area calculated from the n2 adsorption isotherm is very low ( 29 m / g ) and probably represents only the external surface area of the crystals . however the pore volume of 0.17 ml / g calculated from the total n2 uptake ( 4.9 mmol / g ) is highly consistent with the guest - accessible volume ( 16.1% ) calculated from the crystal structure ( i.e. , 0.17 ml / g equates to 16% of the total volume given a crystal framework density of 0.941 g / ml ) . the pore volume obtained from n2 uptake is also consistent with the values calculated from the adsorption of liquid guests , as discussed in the following section . n2 adsorption ( ) and desorption ( ) isotherms for crystal 3 at 77 k. see text for discussion . air - dried crystals of 3 were placed in each , left for 1224 h , washed briefly with ether , and subjected to h nmr analysis . all of the substrates were adsorbed in significant amounts , as summarized in table 2 . aniline 30 formed a well - defined host guest 1:1 complex which could be characterized by x - ray crystallography . as shown in figure 9 , the aniline molecules form a helix within the channel , apparently stabilized by a close interguest chn contact ( dc hn = 2.68 ) . the anilines are also held in place by specific favorable interactions with the channel wall , including hydrogen bonds between amino nh and host ester carbonyl ( dn ho = 2.46 and n ho = 167.8 ) , nh interactions involving the second amino nh and a host phenyl group ( dn h = 2.84 ) , and ch interactions to the aniline -system . guest ratio , although in this case the guest could not be located crystallographically . a calculation of the volume of liquid absorbed per unit mass of host gave a value of 0.12 ml / g , consistent with the pore volume obtained by gas adsorption ( see above and table 2 ) . this represents a pore - filling efficiency of 70% using the pore volume calculated from the crystal structure , which is consistent with a strong affinity between the crystal and adsorbate . similar calculations based on the uptake of 3336 suggested that these were absorbed less efficiently . however the value for squalene 36 , at 65% of the maximum , is remarkable for such a large ( 30-carbon ) guest . x - ray crystal structure of 3 with adsorbed aniline , viewed along the c - axis ( top ) and a - axis ( bottom ) . the aniline is shown in space - filling mode . samples of air - dried crystalline 3 were place in aniline and then removed , washed with ether , and analyzed by h nmr after periods ranging from 2 to 180 min . the results showed that the crystals are filled to about half capacity very quickly ( within the first 2 min ) , but that subsequent adsorption is much slower . we were also interested to discover whether the aniline could be oxidized to polyaniline within the channels . indeed , treatment of the complex with peroxyammonium sulfate in 0.1 n hcl caused the crystals to turn dark violet ( after 4 h ) then green ( after 12 h ) . the diffuse - reflectance uv vis spectrum of the product showed adsorption maxima at 420 and 795 nm consistent with polyaniline formation ( see figure s75 ) . pxrd analysis showed that the npsu structure was retained , although the crystals were no longer suitable for single - crystal x - ray structure determination . 
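the consistency checks quoted above are simple unit conversions . the sketch below reproduces them ; the molar volume of liquid nitrogen at 77 k ( 34.7 cm3/mol ) is an assumed handbook value , while the other numbers are taken from the text .

```python
# pore volume from the total N2 uptake
uptake_mmol_per_g = 4.9
v_molar_liq_n2 = 34.7                      # cm3/mol, assumed handbook value at 77 K
pore_volume = uptake_mmol_per_g * 1e-3 * v_molar_liq_n2
print(round(pore_volume, 2))               # ~0.17 cm3 (ml) per g of host

# fraction of the crystal volume this represents, given the framework density
framework_density = 0.941                  # g/cm3, from the crystal structure
print(round(pore_volume * framework_density * 100))   # ~16 %, matching the 16.1 % from SCXRD

# pore-filling efficiency for liquid aniline, using the crystallographic pore volume
v_pore_xray = 0.161 / framework_density    # guest-accessible fraction / density, ~0.17 cm3/g
aniline_uptake = 0.12                      # cm3 of liquid absorbed per g of host
print(round(aniline_uptake / v_pore_xray * 100))      # ~70 %
```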
another set of experiments involved the adsorption of larger guest molecules from solutions in diethyl ether . in these cases colored guests were used for ease of analysis and the potential for interesting or useful optical effects . , solutions of dyes 3743 in ether ( 1020 mm ) were added to crystals of 3 , and the mixtures left to stand for 3 days . after isolation and washing with ether , all crystals were visibly colored . in the case of 3741 the colors were strong enough to show clearly under a microscope ( see figure 10 ) . as shown in figure 10 , the colors appeared to permeate the crystals and were not localized at ends or edges . interestingly , the crystals containing nile red 41 were observed to be blue - purple ( figure 10e ) . this dye is strongly solvatochromic , its optical adsorption moving to longer wavelengths with increasing solvent polarity , and a blue or purple color suggests a highly polar environment . soaking the crystals in ether for 24 h resulted in loss of color , showing that the dye adsorption was reversible . a second npsu crystal , trifluoroacetamide 2 , was also investigated as host and was found to absorb azo - dyes 37 and 38 . the combinations of 3 with disperse red 1 ( 38 ) and azulene ( 43 ) were investigated further , to establish how much dye was included and how fast . in the case of azulene , only 1 mol % was absorbed , while equilibrium was reached within the first hour . in the case of 38 , the first 1 mol % was also absorbed quickly , but a further quantity ( nearly 1 mol % ) was taken up in a slower process over 24 h. crystals of 3 after exposure to ethereal solutions of ( a ) 37 , ( b ) 38 , ( c ) 39 , ( d ) 40 , and ( e ) 41 . despite the appearance of the crystals there was room for concern that the dyes might not be entering the channels but somehow associated with cracks or defects in the crystals . to test this possibility , we examined the colored crystals under a microscope using plane polarized light . if the dyes were occupying the channels , it seemed likely that some ( at least ) would show preferential alignments . if the transition dipole moments were to lie roughly along the channel axis ( the long axis of the crystal ) the crystals should be dichroic , i.e. , their colors should be dependent on their orientation with respect to the plane of polarization . figure 11 shows pairs of photomicrographs in which crystals of identical composition , but oriented at roughly 90 to each other , are illuminated with polarized light . each pair of images shows the same crystals , with the plane of polarization differing by 90. the crystals are clearly dichroic , changing from colored to almost colorless as the plane of polarization is rotated . the effect was observed for 237 , 238 , 337 , 338 , and 341 but not for 339 or 340 . the guests which lead to dichroism ( 37 , 38 , and 41 ) possess extended dipoles due to conjugation of an amino group with an electron acceptor . this feature should encourage the molecules to adopt a head - to - tail arrangement parallel to the channel axis . the images in figure 11 provide strong evidence that the dye molecules are indeed in the channels revealed by crystallography . it should be noted that this phenomenon of dye uptake by organic molecular crystals is rare and may be unprecedented . it is well - known that dyes may be adsorbed by inorganic crystals , such as zeolites , or by organic inorganic hybrids ( pcps / mofs ) . 
however , the inclusion of dyes in organic molecular crystals is normally achieved by cocrystallization , not by the interaction of substrates with macroscopically sized preformed crystals . this ability of npsus to adsorb such large guest molecules highlights their unusual combination of robust crystal structures with spacious accessible interiors . crystals of npsus with included dyes illuminated with polarized light . for each pair of images the plane of polarization is rotated through 90 between left and right . ( a ) 237 , ( b ) 238 , ( c ) 337 , ( d ) 341 . in principle , the npsu crystal packing represents a powerful tool for the design of functional materials . first , the structure needs to be generalizable , forming in ( at least ) most of the cases where it might be predicted . second , the crystals need to be truly porous so that the space within may be exploited . we have now examined the crystal structures of 26 molecules with the general structure represented in figure 3 , and of these only one ( acetamide 29 ) fails to adopt the p61 npsu arrangement . the range of npsus now includes examples with vanishingly small pores sizes , strongly corrugated pore surfaces , and several cases with potentially reactive functional groups ( ch2ch = ch2 in 8 , ch = o in 21 , sme in 22 , and nhboc in 23 ) . it is notable that neither the aldehyde nor nhboc groups , both of which are quite polar , disturbed the npsu packing . we have also shown that the pores can be evacuated without loss of integrity and that subsequent gas adsorption is possible ( although given the pressures involved and the low pore volume , applications in gas storage are unrealistic ) . more importantly , organic molecules are also absorbed , including the large rigid nile red 41 ( mw 318 ) , and the even larger but more flexible squalene 36 ( mw 411 ) . the ability to orient dye molecules suggests applications in display technology and nonlinear optics . although not all dyes showed this behavior , the tunability of the pores implies that the phenomenon should be extendable ( e.g. , by tailoring of channel diameter ) . the fact that small molecules can readily access the pores points to further applications in catalysis , sensing , and separations , especially given the chirality of the crystals and the ability to incorporate effector groups through alloy formation . we hope to explore these and other possibilities in future work .
previous work has shown that certain steroidal bis-(n - phenyl)ureas , derived from cholic acid , form crystals in the p61 space group with unusually wide unidimensional pores . a key feature of the nanoporous steroidal urea ( npsu ) structure is that groups at either end of the steroid are directed into the channels and may in principle be altered without disturbing the crystal packing . herein we report an expanded study of this system , which increases the structural variety of npsus and also examines their inclusion properties . nineteen new npsu crystal structures are described , to add to the six which were previously reported . the materials show wide variations in channel size , shape , and chemical nature . minimum pore diameters vary from 0 up to 13.1 , while some of the interior surfaces are markedly corrugated . several variants possess functional groups positioned in the channels with potential to interact with guest molecules . inclusion studies were performed using a relatively accessible tris-(n - phenyl)urea . solvent removal was possible without crystal degradation , and gas adsorption could be demonstrated . organic molecules ranging from simple aromatics ( e.g. , aniline and chlorobenzene ) to the much larger squalene ( mw = 411 ) could be adsorbed from the liquid state , while several dyes were taken up from solutions in ether . some dyes gave dichroic complexes , implying alignment of the chromophores in the npsu channels . notably , these complexes were formed by direct adsorption rather than cocrystallization , emphasizing the unusually robust nature of these organic molecular hosts .
Introduction Results and Discussion Conclusion
PMC4978194
behcet s disease ( bd ) is a chronic autoimmune / inflammatory disorder characterized by recurrent orogenital ulcers , cutaneous inflammation , and uveitis . in addition to its typical mucocutaneous and ocular manifestations , bd targets the musculoskeletal , vascular , nervous , and gastrointestinal systems.13 the prevalence of bd is geographically influenced , and it is more prevalent in countries along the silk route , particularly in the east asia4,5 and the middle east.611 its prevalence is highest in turkey , followed by egypt , morocco , iraq , saudi arabia , japan , iran , korea , and china.12,13 although the specific etiology of bd remains elusive , extensive studies have suggested that autoimmunity , genetic factors , and environmental factors are involved in its pathogenesis.3,14,15 like many autoimmune disorders , bd has significant genetic associations with particular alleles of the class i and ii human major histocompatibility complex ( mhc ) , and studies of these associations have led to significant insights into the molecular underpinnings of these disorders.1618 the human leukocyte antigen ( hla ) region on chromosome 6p21.31 contains multiple genes encoding highly variable antigen - presenting proteins and plays a key role in antigen presentation and activation of t cells.19 hla protein , hla - b*51 , encoded by hla - b is the strongest known genetic risk factor for bd . associations between bd and other factors within the mhc have also been reported , although the strong regional linkage disequilibrium complicates their confident disentanglement from hla - b*51 . single nucleotide polymorphism mapping with logistic regression of the mhc identified the hla - b / mica region and the region between hla - f and hla - a as independently associated with bd.16,2024 genetic association studies on saudi bd patients are scanty.13,25,26 the saudi population being a closed and isolated society with a high rate of consanguinity ( inbreeding ) represents a valuable resource for studying such genetic associations , and the present study was aimed at investigating the association of hla - a and b genetic variants with bd in saudi patients . in the present study , we recruited 60 bd saudi patients ( aged 2064 years ) and an equal number of healthy controls , matched for age ( 2060 years ) , sex , and ethnicity ( saudi ) from prince sultan military medical city , riyadh , saudi arabia , for genetic analysis of hla alleles . the exclusion and inclusion criteria were followed strictly for the selection of patients and controls . a questionnaire was filled for each subject to collect past medical history , drug in use , and relevant life style related questions . this study was approved by the research and ethical committee of prince sultan military medical city , riyadh , saudi arabia , and the written informed consent was obtained from each subject before participation . this research work complied with the principles of the declaration of helsinki . the diagnosis of bd was made based on the criteria of the international study group for bd.27 we evaluated the clinical features such as oral ulcers , genital ulcer , ocular inflammation , musculoskeletal , cutaneous , gastrointestinal lesions , nervous , pulmonary , cardiovascular manifestations , and vascular lesions . the active and nonactive forms of bd were determined at the time of study after the assessment of clinical parameters . 
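for orientation , the diagnostic criteria cited above ( international study group , 1990 ) are commonly summarized as recurrent oral ulceration plus at least two of : recurrent genital ulceration , eye lesions , skin lesions , or a positive pathergy test . the sketch below encodes that summary ; it is drawn from general knowledge of the criteria rather than from this article and is illustrative only .

```python
def meets_isg_criteria(recurrent_oral_ulcers, genital_ulcers,
                       eye_lesions, skin_lesions, positive_pathergy):
    """Rough encoding of the 1990 ISG criteria for Behcet's disease.

    Recurrent oral ulceration is mandatory; at least two of the four
    minor features must also be present."""
    minor_count = sum([genital_ulcers, eye_lesions, skin_lesions, positive_pathergy])
    return bool(recurrent_oral_ulcers) and minor_count >= 2

# example: oral ulcers + genital ulcers + uveitis -> meets criteria
print(meets_isg_criteria(True, True, True, False, False))   # True
```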
detailed information along with demographic characteristics is provided in our recently published article.13 peripheral blood ( 3 ml ) from healthy controls and patients was drawn in edta - containing vials , and genomic dna was extracted using the qiaamp dna mini kit ( qiagen ) according to the manufacturer s protocol . the purity of dna was determined at 260/280 nm using a nano - drop spectrophotometer ( thermo fisher scientific ) . only dna samples having a 260/280 nm absorbance ratio between 1.7 and 2.0 and a final concentration of 2030 ng per microliter were considered appropriate . hla genotyping was performed by the reverse sequence - specific oligonucleotide polymerase chain reaction ( pcr ) technique using lab type sso genotyping kits ( one lambda ) as per the manufacturer s protocol . the regions of dna exons 2 and 3 for the loci a and b were amplified . the allele - specific biotinylated primers accompanying the kits were used for the amplification of dna . the pcr amplification was programmed at 96 c for three minutes followed by five cycles of 96 c for 20 seconds , 60 c for 20 seconds , and 72 c for 20 seconds ; 30 cycles of 96 c for 10 seconds , 60 c for 15 seconds , and 72 c for 20 seconds ; and extension at 72 c for 10 minutes . the amplified product was also run on a 5% agarose gel ( pulsed field certified agarose ; bio - rad laboratories ) to check the amplification of the specific exon of each locus . the remaining pcr product was then hybridized with sequence - specific oligonucleotide probes conjugated to fluorescent microspheres . the hybridized products were analyzed using a flow analyzer ( lab scan 100 ) running xponent software ( one lambda ) , and the fluorescence intensity in each microsphere was identified . frequencies of various alleles of hla polymorphism were compared between bd patients and controls and analyzed by fisher s exact test , and p values of 0.05 or less were considered significant . the significance of the differences in distribution of alleles was calculated after bonferroni correction to minimize error due to multiple comparison tests . binary logistic regression analysis was also performed for homozygous ( two alleles ) and heterozygous ( one allele ) carriage of each hla - a or hla - b allele to assess independent contributions to bd . genetic data were also expressed as an odds ratio interpreted as relative risk ( rr ) according to the method of woolf as outlined by schallreuter et al.28 rr indicates how many times higher the risk of disease is for carriers of a given allele . the rr was calculated for all the subjects using the following formula : rr = ( a d ) / ( b c ) , where a is the number of patients with expression of the allele , b the number of patients without expression of the allele , c the number of controls with expression of the allele , and d the number of controls without expression of the allele . the etiologic fraction ( ef ) was calculated for positive associations only , where rr > 1 , using the following formula29 : ef = ( rr - 1 ) f / rr , where f = a / ( a + c ) . the preventive fraction ( pf ) indicates the hypothetical protective effect of one specific allele / genotype against the disease . pf was calculated for negative associations only , where rr < 1 , as described previously.29 values < 1.0 indicate a protective effect of the allele against the manifestation of disease .
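as a hedged illustration of the association statistics just described , the sketch below computes fisher s exact p value , a bonferroni - corrected p value , the woolf cross - product ratio ( rr ) , an approximate 95% confidence interval , and the etiologic fraction from a 2 x 2 table of allele carriage . the counts used in the example are hypothetical , and the preventive fraction formula is not reproduced in the text , so it is omitted here .

```python
import numpy as np
from scipy.stats import fisher_exact

def allele_association(a, b, c, d, n_tests=1):
    """a, b: patients with / without the allele; c, d: controls with / without."""
    _, p = fisher_exact([[a, b], [c, d]])
    p_bonferroni = min(1.0, p * n_tests)              # simple Bonferroni correction
    rr = (a * d) / (b * c)                            # cross-product ratio, as in the text
    se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)        # Woolf's standard error of ln(OR)
    ci_low, ci_high = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log_or)
    ef = (rr - 1) * (a / (a + c)) / rr if rr > 1 else None   # etiologic fraction
    return {"p": p, "p_bonferroni": p_bonferroni, "rr": rr,
            "ci95": (ci_low, ci_high), "ef": ef}

# hypothetical counts for one allele, corrected for 14 HLA-A alleles tested
print(allele_association(25, 35, 10, 50, n_tests=14))
```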
clinically , all bd patients ( 100% ) had oral ulcers , 80.32% had genital ulcers , 70.49% ocular , 67.21% musculoskeletal , 60.65% cutaneous , 36.06% gastrointestinal , and 22.95% had nervous system involvement . the results of genotyping for hla - a and hla - b in bd patients and controls are summarized in tables 15 . the frequency of hla - a*02 ( 38.33% ) was the highest , followed by that of hla - a*26 , hla - a*31 , hla - a*68 ( 10.83% each ) , hla - a*23 , hla - a*24 ( 5.83% each ) , hla - a*01 , hla - a*30 , hla - a*32 ( 3.33% each ) , hla - a*03 , hla - a*33 ( 2.5% each ) , hla - a*11 , hla - a*29 , and hla - a*69 ( 0.83% each ) . comparison of allele frequencies between the bd patients and controls indicated that the frequencies of alleles hla - a*26 and hla - a*31 were significantly higher in bd patients than in controls ( p = 0.041 , or = 3.523 , 95% ci = 1.1111.139 , ef = 0.546 , and p = 0.005 , or = 7.168 , 95% ci = 1.58132.498 , ef = 0.702 , respectively ) .
however , after applying bonferroni correction , the p values are not significant ( p = 0.08 and p = 0.656 , table 1 ) . an increased frequency of hla - a*02 was also found in bd patients as compared to controls ( 38.33% vs. 29.16% ) , but the difference was not statistically significant ( p = 0.172 , table 1 ) . when the data were grouped on the basis of the active and nonactive forms of bd , the frequency of the hla - a*31 allele was significantly higher in the nonactive form than in the active form of bd ( p = 0.015 ) , while the frequency of hla - a*26 did not differ significantly between the two groups ( table 2 ) . the frequency of hla - b*51 was significantly higher in bd patients than in controls ( p = 0.0001 , or = 3.631 , 95% ci = 2.0866.322 , ef = 0.521 ) . moreover , on applying bonferroni correction , the frequency of hla - b*51 remained significantly higher in bd patients than in controls ( p = 0.0022 ) . increased frequencies of the alleles hla - b*07 and hla - b*08 were also observed in bd patients ; however , the differences were not significant ( p = 0.689 and p = 0.823 , respectively ) . on the other hand , hla - b*15 was significantly lower in bd patients than in controls ( p = 0.03 , or = 0.254 , 95% ci = 0.0690.935 , pf = 0.384 , table 3 ) , though after bonferroni correction , the significance was lost ( p = 0.66 ) . stratification of genotyping results into the active and nonactive forms of bd revealed no significant difference in the allele frequencies between the two groups ( table 4 ) . the frequency distribution of homozygous / heterozygous alleles of hla - a*26 , hla - a*31 , and hla - b*51 in bd and controls is shown in table 5 . the binary logistic regression analysis performed for each of the homozygous and heterozygous hla alleles indicated that the hla - b*51 allele , both in homozygous ( two alleles ) and heterozygous ( one allele ) conditions , is significantly associated with susceptibility to bd in saudi patients ( p = 0.0001 and p = 0.010 , respectively ) . on the other hand , hla - a*26 is associated with bd in heterozygous ( one allele ) conditions , while upon stratification of hla - a*31 into heterozygous and homozygous conditions , the association lost significance ( p = 0.089 ) . the significantly higher frequency of hla - a*26 in bd cases than in controls suggested that hla - a*26 is associated with susceptibility to bd in saudi patients . the binary logistic regression analysis ( table 5 ) also indicated that hla - a*26 is associated with bd in saudi patients in heterozygous ( one allele ) conditions . the hla - a gene has been genotyped in bd patients of different ethnicities , and hla - a*26 was reported to be associated with bd in taiwan , greece , and japan.3032 hla - a*26 has been associated with the ocular manifestation of bd , indicating its contribution to the risk of bd.31,32 itoh et al.33 found a weak association of hla - a*26 with bd and suggested some secondary influence on the onset of bd . in addition , an association between the hla - a*26:01 subtype and bd has been reported in japanese and korean populations.16,22 hla - a*26:01 not only has been reported to be a primary susceptibility allele of bd in japan,22 but a recent study also found that the frequency of hla - a*26:01 was significantly increased in bd patients with uveitis , particularly in the hla - b*51 negative subset.32 our results also suggested that the allele hla - a*31 is associated ( or = 7.168 , ef = 0.702 ) with the risk of bd .
after applying bonferroni correction , the p value is not significant ( p = 0.08 ) possibly due to the small sample size . as this is the first study where hla - a*31 is found to be associated with bd susceptibility risk , these results remain to be replicated in other cohorts . however , when genotypic data were stratified on the basis of active and nonactive forms of bd , we found that the frequency of hla - a*31 was significantly ( p = 0.015 ) higher in the inactive form of bd than in the active form . in general , several earlier reports are consistent with the present study and hla - a gene has been suggested to constitute a second independent susceptibility locus.2022 kang et al.16 showed that certain hla - a alleles are responsible for the unique clinical features of bd . due to the small sample size , we could not assess any relationship between particular hla - a type and clinical feature of the patient . nevertheless , we believe that the results of our study are unlikely to be affected by systematic errors such as population stratification , because the source of our controls and cases represents the same saudi population . on the contrary , some reports indicated that hla - a alleles are not associated with increased risk of bd in palestine , jordan , iran , ireland , italy , and turkey.3438 the significantly higher frequency of hla - b*51 in saudi bd patients than in controls with p = 0.0001 , or = 3.631 , and ef = 0.521 together with bonferroni corrected p = 0.0022 indicated that hla - b*51 is strongly associated with bd susceptibility . earlier , yabuki et al.26 studied 13 saudi bd patients and reported significantly increased frequency of hla - b51 in bd patients as compared to controls . several studies across the globe in different ethnicities have shown strong evidences for hla - b*51 susceptibility to increased risk of bd.17,18,3740 hla - b*51 alone increases the risk of bd up to 40%80% in different ethnicities and known as universal risk factor for bd.18,31,41 the present finding of hla - b*51 and increased risk of bd in saudis ( rr = 3.93 ) are corroborated with earlier reports from different ethnic populations : rr = 3.51 and p = 0.065 for the japanese population,37 p = 1.35 10 and or = 5.15 for the chinese han population,40 p = 0.0003 and or = 2.39 for the korean population , rr = 3.51 for the iranian population,18 or = 6.24 for the turkish population,38 p = 4.11 10 and or = 4.63 for the sardinia population,39 or = 5.15 , p = 1.35 10 for the spanish population,17 and many more . however , hla - b*51 alone is neither necessary nor sufficient to determine bd , and several hla - a and -b alleles may independently contribute to the risk of bd.17,24,42 on the other hand , the frequency of hla - b*15 was significantly lower in saudi bd patients than in controls , suggesting that hla - b*15 may be protective against bd in saudis . contrary to our result , hla - b*15 has been associated with bd in some populations.24,42,43 ombrello et al.24 indicated that hla - b*51 , -a*03 , -b*15 , -b*27 , -b*49 , -b*57 , and -a*26 each contributed independently to bd risk in turkish population . 
piga and mathieu42 in a meta - analysis reported that besides hla - b*51 being primarily associated , hla - a*26 , hla - b*15 , and hla - b*5701 are also independently associated with bd and suggested for further studies to clarify the functional relevance of the different genes found to be associated with disease susceptibility and the potential interactions between genes located within and outside the mhc region . our study supports that besides hla - b*51 being primarily associated , hla - a alleles are also independently associated with susceptibility to bd . it is concluded that hla - a*26 , -a*31 , and -b*51 are associated with bd in saudi patients , while hla - b*15 may be protective . however , further studies on population genetics with larger sample size are required to strengthen these findings .
backgroundhla - b*51 has been universally associated with behcet s disease ( bd ) susceptibility , while different alleles of hla - a have also been identified as independent bd susceptibility loci in various ethnic populations . the objective of this study was to investigate associations of hla - a and -b alleles with bd in saudi patients.materials and methodsgenotyping for hla - a and hla - b was performed using hla genotyping kit ( lab type(r ) sso ) in 120 saudi subjects , including 60 bd patients and 60 matched healthy controls.resultsour results revealed that frequencies of hla - a*26 , -a*31 , and -b*51 were significantly higher in bd patients than in controls , suggesting that hla - a*26 , -a*31 , and -b*51 are associated with bd . the frequency of hla - b*15 was significantly lower in bd patients than in controls . stratification of genotyping results into active and nonactive forms of bd revealed that the frequency of hla - a*31 was significantly higher in the nonactive form than in the active form of bd , while there was no significant difference in the distribution of other alleles between the two forms of bd.conclusionthis study suggests that hla - a*26 , -a*31 , and -b*51 are associated with susceptibility risk to bd , while hla - b*15 may be protective in saudi patients . however , larger scale studies are needed to confirm these findings .
Introduction Materials and Methods HLA genotyping Statistical analysis Results Discussion Conclusion
PMC4189520
despite considerable public health efforts to curtail obesity , the epidemic has progressed over the past three decades in the united states ( us ) . epidemiological data from the most recent national health and examination survey ( nhanes 2009 - 2010 ) indicated that 68.8% of american adults are overweight or obese ( body mass index , bmi 25 kg / m ) . nearly half of this population has a bmi 30 kg / m , suggesting that about 36% or one - third of the us adult population is currently obese . while obesity has been linked to a number of chronic diseases , overwhelming epidemiological data indicate a positive correlation between bmi and bmd [ 16 ] . although the exact mechanisms underlying this observation are not fully understood , this protective effect could be attributed to increased mechanical load , imposed by a greater weight on weight - bearing bones as well as hormonal changes associated with obesity , for example , increased synthesis of estrogen and leptin by adipocytes . higher levels of estrogen , known to suppress osteoclastic bone resorption and stimulate osteoblastic bone formation , have been found in serum of obese postmenopausal women when compared to their normal weight counterparts . this fact is attributed to an increased peripheral production of estrone through increased aromatization of androstenedione by aromatase in the white adipose tissue [ 8 , 9 ] . additionally , plasma leptin levels have been directly associated with the amount of fat mass . though leptin is primarily known to influence energy intake and expenditure , it has also been shown to be an important factor in the regulation of bone remodeling . in contrast to the putative protective effects of excess body weight on bmd , evidence challenging this relationship has recently emerged . . showed that in healthy adults greater fat mass negatively correlates with bmd after correction for the mechanical loading effect of body weight . in addition , blum et al . found that serum leptin is negatively associated with bmd in premenopausal women . further , ducy et al . observed that intracerebroventricular infusion of leptin reduced bone mass in ob / ob and wild - type mice , respectively . due to the increasing incidence and prevalence of obesity and the controversial findings in the literature about the relationship between bmi and bmd , the main purpose of this study was to investigate the effects of obesity on bone mass and quality in female zucker rats , the most commonly used rat model of genetic obesity . because postmenopausal women experience rapid bone loss and are prone to gaining excess body weight , we also investigated the effect of obesity on bmd in a rat model of postmenopausal bone loss . the animal protocol for this study was approved by the institutional animal care and use committee of the university of arkansas for medical sciences . six - week old leptin receptor - deficient female ( lepr ) zucker rats and their heterozygous lean controls ( lepr ) were sham - operated ( lean - lepr , n = 6 ; obese - lepr , n = 6 ) or ovx ( lean - lepr , n = 6 ; obese - lepr , n = 6 ) by harlan industries ( indianapolis , in , usa ) and housed in the animal facilities at the arkansas children 's hospital research institute . rats were housed in polycarbonate cages ( 2/cage ) and had free access to water and a semipurified ain-93 g diet ( harlan - teklad , madison , wi , usa ) for 21 weeks . at 27 weeks , rats were sacrificed , and l4 vertebrae and tibiae were removed and cleaned of adhering tissue . 
l4 vertebrae were then scanned to determine bone mineral area ( bma ) , bone mineral content ( bmc ) , and bmd using dual energy x - ray absorptiometry ( dxa ; qdr-4500a elite ; hologic , waltham , ma , usa ) while tibiae were frozen at 20c for microstructural analysis . the microarchitectural trabecular bone structure of tibiae was evaluated using microcomputed tomography ( ct40 , scanco medical , switzerland ) . the tibia was scanned from the proximal growth plate in the distal direction ( 16 m / slice ) . this region included 350 images obtained from each tibia using 1024 1024 matrix resulting in an isotropic voxel resolution of 22 m . an integration time of 70 ms per projection was used , with a rotational step of 0.36 resulting in a total acquisition time of 150 min / sample . the volume of interest ( voi ) was selected as a region 25 slices away from the growth plate at the proximal end of the tibia to 125 slices . the three - dimensional ( 3d ) images were also obtained for visualization and display . the trabecular bone morphometric parameters assessed with ct included the bone volume expressed as a percentage of total volume ( bv / tv ) , trabecular number ( tb.n ) , thickness ( tb.th ) , and separation ( tb.sp ) . nonmetric parameters included structure model index ( smi ) which is an indicator of plate - rod arrangement of the bone structure and connectivedensity ( conn.d ) . a 2 2 ( group by time ) repeated measures analysis of variance ( anova ) was used to determine differences in body weight between groups and over time while one - way anova was used to determine group differences in bone parameters after treatment . in the event of a significant main effect or interaction , tukey - kramer post hoc data analyses were generated using ssps version 20.0 for windows ( spss inc . , chicago , il ) . initial and final body weights are shown in table 1 . as expected , despite the similar initial body weights , ovx rats gained significantly more weight than sham rats . additionally , mean final body weight for obese - ovx zucker rats was significantly ( p < 0.001 ) higher than in lean - ovx zucker rats ( 340 and 562 g , resp . ) . notably , the mean l4 vertebral bmd was approximately 11.5% lower in ovx rats than in the sham rats , irrespective of body weight ( table 2 ) , whereas the mean bmc of l4 vertebrae was only significantly higher in the obese - sham zucker rats compared to lean - ovx zucker rats ( table 2 ) . the 3d image analyses of the proximal tibia indicated that ovariectomy unfavorably altered trabecular microstructural parameters in both lean and obese groups ( figure 1 ) with the exception of tb.th ( figure 1(c ) ) . when compared to lean - sham zucker rats , tb.th tended to be lower in the lean - ovx zucker rats ( p = 0.06 ) and in the obese - sham zucker rats ( p = 0.09 ) ; however , this parameter was only significantly ( p = 0.005 ) lower in the obese - ovx group . although obesity and osteoporosis are two chronic conditions that have long been considered to be mutually exclusive , there is evidence that a complex relationship between the two exists . for decades , an array of epidemiological studies available have reported positive correlations between bmi and/or fat mass and bmd which led to the belief that excess body weight stimulates greater bone mass , strength , and quality primarily due to mechanical loading [ 12 , 1517 ] . 
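as a brief aside , the group comparisons described in the methods above ( one - way anova with tukey - kramer post hoc tests , run in spss ) can be reproduced with open - source tools ; the sketch below is a minimal python equivalent with an assumed data layout , not the authors code .

```python
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def compare_groups(df, value_col="bmd", group_col="group", alpha=0.05):
    """One-way ANOVA across groups, followed by Tukey's HSD (used here in
    place of SPSS's Tukey-Kramer procedure).

    `df` is assumed to be a pandas DataFrame with one row per rat, e.g.
    group in {lean-sham, lean-ovx, obese-sham, obese-ovx} and a numeric
    outcome column such as L4 vertebral BMD."""
    samples = [g[value_col].to_numpy() for _, g in df.groupby(group_col)]
    f_stat, p_value = f_oneway(*samples)
    tukey = pairwise_tukeyhsd(df[value_col], df[group_col], alpha=alpha)
    return f_stat, p_value, tukey.summary()

# the 2 x 2 group-by-time body-weight analysis would additionally need a mixed
# (repeated measures) ANOVA, e.g. pingouin.mixed_anova(data=long_df, dv="weight",
# within="time", subject="rat", between="group").
```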
at the same time , several animal studies [ 1820 ] have indicated the detrimental effects of excess body weight on bone ; however , to our knowledge , only one of these studies has taken into consideration menopausal status while examining this relationship . therefore , the purpose of our study was to investigate the effects of obesity on bone considering the menopausal status of the rat . the results of the current investigation indicate that mean bmd of l4 vertebrae was not altered in lean and obese intact zucker rats whereas it was found to be significantly lower in lean and obese - ovx zucker rats . these observations are partially corroborated by the findings of picherit at al . who observed that 6-month lean - ovx zucker rats have lower femoral bmd than lean - sham zucker rats ; however , the effect of ovariectomy on femoral bmd of obese zucker rats was not assessed . they also found the femoral bmd of obese zucker rats to be distinctly lower than that of lean zucker rats but similar to that of lean - ovx zucker rats , a discrepant result from our observations . although unexpected , in the current study , obesity did not exacerbate or attenuate the effects of ovariectomy on bmd . though the number of studies addressing the relationship between obesity and bone mass in ovx - zucker rats is limited , this relationship has been explored to a greater extent in c57bl/6j mice . for instance , nez et al . reported that overweight - ovx female c57bl/6j mice had higher whole body bmd when compared to lean - ovx mice . in contrast , they found that very obese - ovx mice ( 55% body fat ) had markedly lower bmd than the overweight - ovx mice . their findings suggest that a threshold may exist in order for body weight to exert protective effects on bone . hence , we postulate that there may be a u - shaped relationship between body weight and bone , though this needs further investigation . in addition , other investigators have reported that obesity induced by a high - fat diet significantly decreased femur and lumbar bmd in male c57bl/6j mice . in contrast , there are reports indicating that obesity induced by a high - fat diet increased spine bmd but had no effect on whole body bmd in male mice [ 22 , 23 ] . in terms of human studies , independent reports have indicated that high bmi was positively associated with greater whole - body and spine bmd in postmenopausal women [ 24 , 25 ] . case - control and retrospective studies have also demonstrated that obese postmenopausal women have greater femoral and lumbar spine bmd than nonobese postmenopausal women . the findings of a cross - sectional study by castro et al . reported that high bmi correlates with decreases and increases , by units , in the odds of low bmd in white and african - american women , respectively , suggesting a possible racial discrepancy . for instance , a study by zhao et al . reported that greater fat mass negatively correlates with bmd after correction for the mechanical loading effects of body weight in healthy adults . altogether , much of the data presented from animal as well as human studies are inconclusive with respect to the effect of obesity on bmd . with respect to bone trabecular properties , our results show that obesity did not affect any of the bone microstructural parameters in intact rats . 
our findings also indicate that bone microstructural parameters were unfavorably affected by ovariectomy in lean and obese zucker rats in comparison to those intact , with the exception of tb.th , which was preserved in lean - ovx zucker rats . although there is a paucity of studies reporting the effect of obesity on bone microstructural parameters in ovx zucker rats , observations similar to ours for instance , the preservation of tb.th observed in the lean - ovx rats is in agreement with the findings of yoshida et al . who reported that bone loss in ovx - wistar rats interestingly , tb.th was found to be markedly lower in obese - ovx zucker rats when compared to the other groups suggesting that obesity has a negative effect on bone . in contrast to our findings , in male c57bl/6j mice , obesity induced by a high - fat diet did not affect tb.th but negatively altered all other microstructural parameters [ 19 , 23 ] . these mouse studies examined the bone microarchitecture of femurs and lumbar vertebrae , respectively ; yet in our study the tibiae were used . thus , the difference in bone specimens may explain the discrepancy in our findings as it has been demonstrated that rates of trabecular bone loss are higher in the lumbar vertebrae than in the tibiae . nonetheless , a preservation of tb.th has been also observed in women [ 29 , 30 ] . particularly , the os des femmes de lyon ( ofely ) study reported that obese postmenopausal women had significantly greater tb.n , lower tb.sp , and similar tb.th at the distal tibia when compared to normal - weight women . therefore , it is imperative that future studies investigate the effect of obesity on specific bones in order to make relevant comparisons . the effects of ovariectomy on bone mass and microstructure in lean and obese zucker rats were anticipated as the 6-month - old ovx - rat is considered a rat model suitable to study postmenopausal bone loss . nonetheless , the validity of using obese - ovx zucker rats as a model of postmenopausal obesity remains unclear as these animals exhibit altered ovarian morphology , with decreased production of estrogen , and are not fertile . thus , one could argue that the obese intact zucker rat is not the ideal model of premenopause in comparisons involving lean zucker rats that exhibit 4-fold higher levels of estrogen than obese zucker rats . indeed , estrogen is a necessary factor for bone growth and development as well as a regulator of bone turnover in mature bones . the fact that bone mass did not differ between obese - ovx and intact zucker rats and their lean controls implicates adipose tissue as the source for the estrogen necessary to maintain bone mass in the absence of ovarian estrogen . it has been shown that most estrogen in postmenopausal women is produced in peripheral adipose tissue by aromatization of androstenedione to estrone [ 7 , 35 ] , especially in obese and overweight women . [ 3538 ] demonstrated that the rate of conversion of androstenedione to estrone is increased by obesity . another factor associated with obesity is low levels of sex hormone binding globulin which results in increased bioavailable estradiol . circulating leptin increases as body weight and fat mass increase [ 40 , 41 ] which may be another contributing factor for the maintenance of bone mass in obese - ovx zucker rats observed in this study . previous data suggest that serum leptin may play an important role in bone remodeling [ 11 , 42 ] . 
levels of leptin , which is a protein coded by the ob / ob gene and secreted by adipocytes , are increased in response to elevated estrogen levels [ 43 , 44 ] . aromatase activity contributes to increased estrogen synthesis in adipose tissue and may be an important factor in increasing leptin in obese zucker rats . this may partially explain why bone mass was not significantly different between obese - ovx and intact zucker rats and their respective lean controls . in fact , the literature supports this notion as leptin deficient mice have been shown to have impaired bone growth . in addition , administration of leptin to male mice has been shown to reduce bone fragility as denoted by increased work to failure and displacement in comparison to their controls . cortical bone microstructural parameters as well as dynamic histomorphometry data and biomechanical properties were not evaluated . future studies should assess these parameters as they may lead to a better understanding of the effects of obesity on bone in ovx zucker rats . even though lepr rats are known to exhibit the obesity phenotype around 4 - 5 weeks of age and in the present study these rats were obese for approximately 22 weeks , we did not longitudinally examine weight and bone parameters which should be done in the future . body composition ( e.g. , lean and fat mass ) could provide additional insight into the relationship between obesity and bone in ovx zucker rats . lastly , a larger sample size may have helped to detect significant differences among many of the parameters of interest . however , a sample size of six rats per group was used per other rat models as there were no similar studies available at the time of conducting the present study . future studies should be conducted in this rat model using a larger sample size to enable the detection of significant differences for the parameters of interest . in summary , the findings of animal and human studies regarding the effects of obesity on bone health are inconsistent and may be attributed to several factors . first , the majority of human studies report findings that are primarily derived from correlational analyses as opposed to controlled trials , thereby precluding a cause - and - effect relationship from being established . second , existing animal studies vary in the use of species and strains as well as age , gender , and estrogenic environment . third , various means of inducing obesity , for example , through genetic manipulation or high - fat diet , may influence the bone outcomes assessed . therefore , the relationship between obesity and bone health can not be established at this time . collectively , aside from the adverse effects on tb.th , the findings of the present study do not show a directional relationship between obesity and bone health in zucker rats .
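relating to the sample - size limitation acknowledged above , a prospective power calculation can suggest how many animals per group would be needed in follow - up work . the sketch below uses statsmodels ; the effect size is an arbitrary illustrative value , not one estimated from this study .

```python
from statsmodels.stats.power import TTestIndPower

# hypothetical planning example: detect a large standardized difference
# (Cohen's d = 1.5) between two groups with 80 % power at alpha = 0.05
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.5, alpha=0.05, power=0.8, ratio=1.0)
print(round(n_per_group))   # animals needed in each group under these assumptions
```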
obesity and osteoporosis are two chronic conditions that have been increasing in prevalence . despite prior data supporting the positive relationship between body weight and bone mineral density ( bmd ) , recent findings show excess body weight to be detrimental to bone mass , strength , and quality . to evaluate whether obesity would further exacerbate the effects of ovariectomy on bone , we examined the tibiae and fourth lumbar ( l4 ) vertebrae from leptin receptor - deficient female ( leprfa / fa ) zucker rats and their heterozygous lean controls ( leprfa/+ ) that were either sham - operated or ovariectomized ( ovx ) . bmd of l4 vertebra was measured using dual - energy x - ray absorptiometry , and microcomputed tomography was used to assess the microstructural properties of the tibiae . ovariectomy significantly ( p < 0.001 ) decreased the bmd of l4 vertebrae in lean and obese zucker rats . lower trabecular number and greater trabecular separation ( p < 0.001 ) were also observed in the tibiae of lean- and obese - ovx rats when compared to sham rats . however , only the obese - ovx rats had lower trabecular thickness ( tb.th ) ( p < 0.005 ) than the other groups . these findings demonstrated that ovarian hormone deficiency adversely affected bone mass and quality in lean and obese rats while obesity only affected tb.th in ovx - female zucker rats .
1. Introduction 2. Methods 3. Results 4. Discussion
PMC5027040
significant evidence ties gestational weight gain ( gwg ) to short- and long - term maternal and infant outcomes . to optimize maternal and child health , the institute of medicine ( iom ) provides guidelines for gwg based on prepregnancy body mass index ( bmi ) . greater gwg is recommended for women with prepregnancy bmis in the underweight ( 28 - 40 pounds ( lbs ) , 12.7 - 18.1 kg ) or healthy weight ( 25 - 35 lbs , 11.3 - 15.9 kg ) range , with less gwg recommended for prepregnancy overweight ( 15 - 25 lbs , 6.8 - 11.3 kg ) and obese ( 11 - 20 lbs , 5.0 - 9.1 kg ) women . only 22 to 40% of women attain gwg within the recommended ranges [ 28 ] , and women of lower socioeconomic status and racial / ethnic minority women have lower adherence to gwg guidelines [ 5 , 9 - 11 ] . among latina women and depending on national origin , estimates of excessive gwg range from 36 to 51% , whereas estimates of insufficient gwg range from 17 to 30% [ 7 , 9 , 10 , 12 , 13 ] . socioeconomic and racial / ethnic disparities in achieving recommended gwg are further compounded by higher pregnancy rates and greater odds of adverse birth - related outcomes among socioeconomically disadvantaged and racial / ethnic minority populations than their more affluent and white counterparts . the pregnancy rate of latina women in the us is estimated to be two - thirds higher than that of non - latino whites . within the latina population , nearly half of caribbean latina women experience gwg above iom guidelines , and puerto rican latinas are among women with the highest rates of low birth weight neonates and preterm births , both predictors of infant mortality . however , little is known about why adherence to guidelines is low among this population . identifying and understanding factors driving racial / ethnic differences in gwg are a priority to target maternal and child health disparities in this growing and at - risk population . given the numerous adverse health consequences of excessive and insufficient gwg for the mother and the offspring [ 1 , 4 , 18 - 21 ] , understanding the risk factors for low adherence to iom - recommended gwg and intervening in at - risk groups are of utmost importance . however , little is known about the influence of early gwg ( e.g. , first trimester ) on overall gwg and other maternal and infant outcomes . a prospective study of a predominantly white female sample indicated that maternal weight change in the first trimester was a stronger predictor of birth weight than weight change in the second or third trimester . the timing and extent of gwg may also be an important determinant of birth weight as well as other maternal and prenatal outcomes ; thus , early identification of women who are at risk of excessive or inadequate gwg may be critical to guide the timing and content of intervention delivery to maximize maternal and prenatal health and reduce health disparities . to address gaps in the literature , this study aimed to examine differences in predictors of gwg , assess the association of first - trimester gwg with overall gwg among non - latina white and latina women , and examine the association of gwg status with birth outcomes . we hypothesized that women who were overweight or obese before pregnancy would have higher odds of gwg outside of iom recommendations and that first - trimester gwg status ( below , within , or above guideline ) would positively correlate with overall gwg .
the study 's targeted population included non - latina white and latina women who received prenatal care from private providers and hospital clinics ( i.e. , a resident clinic and a midwifery clinic ) . the study was conducted at baystate medical center , a large tertiary care facility in western massachusetts with an average of 4,300 deliveries each year , approximately 57% of them to latina women ( primarily of puerto rican origin ) . first , electronic medical record database searches were performed for a retrospective cohort of women who had live deliveries ( preterm or full - term ) at the medical center from september 1 , 2005 , to august 31 , 2006 . women with multifetal pregnancies , unknown ethnicity , or a primary language other than english or spanish were excluded . a total of 3,966 ( of 4,300 ) patient records met these criteria . based on estimates of adherence to iom guidelines in other samples , a sample size of at least 400 women was required . thus , the second screening step consisted of randomly selecting one quarter ( n = 1,016 ) of eligible patient records , stratified by ethnicity ( non - latina white and latina ) and site of prenatal care ( hospital clinics and private providers ) , for additional participant eligibility screening via paper medical chart review . reasons for exclusion included missing data on prepregnancy weight ( n = 226 ) or height ( n = 4 ) , missing dates of prenatal measurements ( n = 138 ) , no documentation of prenatal visits in the first trimester of pregnancy ( n = 296 ) , maternal history of gastric bypass ( n = 2 ) , or maternal diagnosis of pregestational diabetes ( n = 31 ) . of excluded records , 60% were excluded for one criterion and 40% were excluded for two or more criteria . data were abstracted from eligible charts using a standardized abstraction form . the form included fields for recording participant demographics ( date of birth , race / ethnicity , primary language , marital status , insurance type , parity , and employment status ) , psychiatric history ( i.e. , documented psychiatric diagnosis or use of psychiatric medication ) , height , and dates and measured weights at each prenatal visit . three research assistants were trained in the process of data abstraction from paper medical records until 100% interrater reliability was achieved . data from completed and cross - checked abstraction forms were scanned and were uploaded into a sas database . data abstraction was performed from 2007 to 2008 . during this time frame , revisions of iom 's gwg guidelines were anticipated and became available after data cleaning procedures and by the time of analysis . thus , the investigative team decided a priori to utilize the 2009 guidelines in categorizing gwg measures ( described below ) with the goal of providing an estimate of likely nonadherence to the new recommendations and associated outcomes . additionally , the 2009 guidelines did not differ greatly from the former guidelines yet offered the benefit of a recommended range of gain for obese women , in contrast to the previously stated " at least 15 pounds " ( 6.8 kg ) without an upper bound . all study protocols and procedures were approved by the baystate medical center institutional review board and the university of massachusetts medical school institutional review board . height and prepregnancy weight were obtained from prenatal forms in participants ' medical records . customarily , height is measured by obstetric provider office staff and prepregnancy weight is self - reported by pregnant women at their first prenatal appointment .
prepregnancy bmi was calculated as weight ( kg ) / height ( m ) squared and categorized as follows : underweight ( bmi < 18.5 kg / m2 ) ; healthy weight ( 18.5 kg / m2  bmi < 25 kg / m2 ) ; overweight ( 25 kg / m2  bmi < 30 kg / m2 ) ; and obese ( bmi  30 kg / m2 ) [ 17 , 23 ] . gestational weight measures were routinely obtained by clinical staff as part of standard obstetric care appointments . at each visit , women are weighed and their weight is recorded in prenatal health records , along with gestational age . each participant 's gwg status was determined based on prepregnancy bmi , gestational age , and weight gain at the time of the weight measure . for each prepregnancy weight status category , iom - recommended trajectories of weight gain were defined ( 1 ) in terms of minimum and maximum total weight gain at week 13 ( end of first trimester ) and ( 2 ) for subsequent weeks in terms of minimum and maximum weight gain per week . thus , for each week of gestational age , a minimum and maximum recommended weight gain were calculated . first - trimester gwg status was determined using the last weight measure recorded during the first trimester . gwg status in the first trimester was assessed by comparing first - trimester gwg ( calculated by subtracting pregravid weight from weight at the last first - trimester prenatal visit ) to the iom - recommended gwg range for gestational age at the last first - trimester prenatal visit . similarly , gwg status at delivery was determined using weight measured at the last recorded prenatal appointment and was assessed by comparing total gwg ( calculated by subtracting pregravid weight from weight at the last prenatal visit prior to delivery ) to the iom - recommended gwg range for gestational age at the last prenatal visit ( the average period between the last prenatal visit and delivery is estimated at 6.6 days ) . gwg status was categorized as below if weight gain for gestational age was below the lowest value of the recommended range ; appropriate or within if weight gain for gestational age was between the recommended range lowest and highest values ; and excessive or above if weight gain for gestational age was above the highest value of the recommended range . gestational age at delivery was calculated based on best dates for estimated date of confinement ( edc ) . edc is determined as per clinician evaluation considering concordance of the last menstrual period and first - trimester ultrasound and documented on the medical record based on clinical care standards . pregnancies delivered at < 37 weeks were categorized as preterm and those delivered at  37 weeks were full term . neonate birth weight recorded by nursing staff at the time of delivery was abstracted from the inpatient record . neonates were categorized as small for gestational age ( sga ) and large for gestational age ( lga ) if birth weight was < 10th and > 90th percentile , respectively , of 1999 - 2000 us national reference data for singleton gestations , accounting for gestational age and gender [ 26 , 27 ] . regardless of gestational age , low birth weight ( lbw ) was defined as < 2,500 grams and high birth weight ( hbw ) or macrosomia as  4,000 grams . descriptive statistics of the study sample stratified by ethnicity were conducted using chi - square tests or fisher exact tests for categorical variables and t - tests for continuous variables .
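to make the classification logic described above concrete , the following short python sketch ( added for illustration ; it is not the study 's code , and the function and variable names are hypothetical ) assigns a prepregnancy bmi category and labels a total weight gain as below , within , or above an iom - style recommended range at term . the week - by - week trajectory interpolation used in the study is omitted for brevity .

def bmi_category(weight_kg, height_m):
    """classify prepregnancy bmi using the cut-offs quoted in the text."""
    bmi = weight_kg / height_m ** 2
    if bmi < 18.5:
        return "underweight"
    if bmi < 25:
        return "healthy weight"
    if bmi < 30:
        return "overweight"
    return "obese"

# recommended total gain at term (lbs) per prepregnancy bmi category,
# as quoted from the 2009 iom guidelines in the text above
IOM_TOTAL_GAIN_LBS = {
    "underweight": (28, 40),
    "healthy weight": (25, 35),
    "overweight": (15, 25),
    "obese": (11, 20),
}

def gwg_status(gain_lbs, recommended_range):
    """label a weight gain as below, within, or above a (min, max) range."""
    lo, hi = recommended_range
    if gain_lbs < lo:
        return "below"
    if gain_lbs > hi:
        return "above"
    return "within"

# example (hypothetical values): an obese woman who gained 24 lbs by the last visit
category = bmi_category(95.0, 1.62)                              # -> "obese"
print(category, gwg_status(24.0, IOM_TOTAL_GAIN_LBS[category]))  # -> obese above

in the study itself , the recommended minimum and maximum were interpolated for the exact gestational age of each weight measure ( a first - trimester total plus a per - week rate thereafter ) rather than applied only at term , so the sketch shows the categorization step only .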
estimated means and standard errors for total gwg were computed for each ethnic group and by prepregnancy weight status category within ethnic group , adjusting for gestational age at the last prenatal visit . unadjusted associations of gwg status ( below , within , or above iom - recommended range ) with participant characteristics were estimated using contingency tables and chi - square tests . adjusted associations of gwg status with participant characteristics were estimated using multinomial logistic regression models ( within gwg guidelines as the outcome reference category ) to allow for the possibility of associations that violated the proportional odds assumption ( e.g. , a positive association with both above and below gwg guidelines ) . potential effect modification by ethnicity was examined by stratifying contingency tables of gwg status with participant characteristics by ethnicity and by including interaction terms of ethnicity with other predictors in logistic regression models . model fit was assessed using the hosmer - lemeshow goodness - of - fit chi - square statistic . infant outcomes were compared by gwg status for the entire group and by ethnicity using contingency tables , chi - square tests , and logistic regression . supplemental analyses included conducting backward elimination in the logistic regression analyses to assess whether results were similar after omitting irrelevant or redundant predictors and performing sensitivity analysis comparing results based on the 1990 iom gwg guidelines versus the 2009 iom gwg guidelines . the majority of participants were single ( 64% ) and unemployed ( 53% ) and had public health insurance ( 64% ) ( table 1 ) . less than half ( 46% ) of women had prepregnancy bmis within the healthy weight range , a quarter were obese , and more than half ( 58% ) exceeded gwg recommendations at the time of delivery . compared to white women , latina women were younger and more likely to be single and unemployed , have public insurance , and have higher parity ( p values < 0.05 ) . white women had higher prevalence of documented tobacco and alcohol use , were more likely to have a documented psychiatric history , and were more likely to deliver lga neonates than latina women ( p values < 0.05 ) . no other differences by ethnicity were observed . a comparison by prenatal care site revealed that women receiving care in hospital clinics were more likely to be younger , unmarried , unemployed , and nulliparous , have public insurance , have a psychiatric history , and have lower levels of education than those receiving care in private clinics ( p values < 0.01 ) . average gwg adjusted for gestational age at delivery was 36.3 lbs ( se = 0.92 ) ( 16.5 kg ( se = 0.42 ) ) for white women and 32.4 lbs ( se = 0.88 ) ( 14.7 kg ( se = 0.36 ) ) for latina women ( p < 0.0001 ) . average gwg by prepregnancy weight status category was as follows : 37.9 lbs ( se = 2.3 ) ( 17.2 kg ( se = 1.0 ) ) for underweight participants ; 36.7 lbs ( se = 0.9 ) ( 16.6 kg ( se = 0.4 ) ) for healthy weight participants ; 35.3 lbs ( se = 1.2 ) ( 16.0 kg ( se = 0.5 ) ) for overweight participants ; and 28.0 lbs ( se = 1.2 ) ( 12.7 kg ( se = 0.5 ) ) for obese participants . across prepregnancy weight status categories , adherence to iom gwg recommendations was poor among both ethnic groups , with only 27% gaining within recommended ranges .
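as a rough illustration of the modelling strategy described above , the sketch below ( mine , not the authors ' ; the file name and column names are assumptions ) fits a multinomial logistic regression with within - guideline gwg as the reference outcome using statsmodels . the chi - square tests , interaction terms , and hosmer - lemeshow diagnostics mentioned in the text are omitted .

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# hypothetical analysis file: one row per woman, columns named for illustration only
df = pd.read_csv("gwg_cohort.csv")

# make "within" the first level so the model contrasts below/above against it
df["gwg_status"] = pd.Categorical(df["gwg_status"],
                                  categories=["within", "below", "above"])

# multinomial logistic regression of gwg status on predictors of the kind discussed above
fit = smf.mnlogit("gwg_status ~ ethnicity + prepreg_bmi_cat + first_trim_status"
                  " + care_site + smoking", data=df).fit()

print(fit.summary())
print(np.exp(fit.params))        # coefficients expressed as odds ratios
print(np.exp(fit.conf_int()))    # 95% confidence intervals on the odds-ratio scale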
ethnic differences in gwg status at time of delivery for the overall sample were observed , with latina women less likely to gain in excess than white women ( p = 0.016 ) ( figure 1 ) . latina women were more likely to gain within the iom - recommended range than white women across all prepregnancy weight status categories , with the exception of the underweight category ( among underweight participants , white women were more likely to have gwg within recommended ranges than latinas ) ( figure 2 ) . table 2 presents unadjusted associations between demographics , behavioral factors and psychiatric history , and gwg status . gwg status was significantly associated with ethnicity , employment status at pregnancy onset , prepregnancy bmi , and first - trimester gwg ( p values < 0.05 ) . in logistic regression models , no effect modification by ethnicity was indicated ( p values for interaction terms > 0.05 ) ; thus , results are presented for the entire sample . multivariable logistic regression models estimating participant characteristics associated with gwg status at time of delivery ( table 3 ) indicated that odds of above - guideline gwg at time of delivery were greater among prepregnancy overweight and obese women compared to healthy weight women ( or = 3.4 , ci = 1.8 - 6.5 ; or = 4.5 , ci = 2.3 - 9.0 , resp . ) and among those with first - trimester gwg above guidelines compared to those with gwg within guidelines ( or = 4.9 , ci = 2.8 - 8.8 ) . odds of below - guideline gwg at time of delivery were greater among prepregnancy underweight and obese women compared to healthy weight ( or = 5.3 , ci = 1.4 - 20.2 ; or = 3.5 , ci = 1.4 - 8.7 , resp . ) and among women with first - trimester gwg below guidelines compared to within - guideline gwg ( or = 3.0 , ci = 1.3 - 6.8 ) . odds of below - guideline gwg were lower among women receiving care at hospital clinics compared to those receiving care from a private provider and among past smokers compared to never smokers ( or = 0.3 , ci = 0.1 - 0.9 ; or = 0.3 , ci = 0.1 - 1.0 , resp . ) . table 4 presents estimates of associations between gwg status and length of pregnancy ( preterm versus full - term ) and birth weight parameters for the overall sample . gwg status was unrelated to pregnancy length but was associated with birth size ( a higher percentage of sga in pregnancies with below - guideline gwg and a higher percentage of lga in pregnancies with above - guideline gwg ; p values < 0.05 ) . observed ethnic differences in birth size ( table 1 ) by which white women were more likely to have lga neonates and latina women were more likely to have sga neonates were not impacted when adjusted for gwg status ( data not shown ) . supplemental analyses from running more parsimonious models and from sensitivity tests did not yield results that were substantially different from those presented ( data not shown ) . findings from this retrospective cohort study provide insights for identifying women at risk for nonadherence to iom - recommended gwg and for developing targeted interventions . above - guideline gwg was greater in this cohort ( 58% ) than in previous studies of multiethnic samples ( 35% - 57% in prior studies ) [ 24 , 7 ] , suggesting that rates of above - guideline gwg may continue to increase , especially among white women . as noted in other populations [ 2 , 7 , 30 ] , prepregnancy weight status predicted gwg in this study .
targeting weight prior to pregnancy is desirable but may be unfeasible for numerous reasons , such as lack of pregnancy intentionality . targeting weight change during pregnancy may be a more feasible window , as a majority of women seek prenatal care during the first trimester and are motivated to modify health behaviors . to our knowledge , this is the first study to examine first - trimester gwg status as a predictor of gwg status at time of delivery in a multiethnic sample of women , with first - trimester gwg status predicting overall gwg status among non - latina white and latina women . along with other research , study findings indicate that the first trimester of pregnancy may be a critical and feasible window to promote healthy gwg and associated maternal and neonatal outcomes ; thus , the identification of women who are at elevated risk for below or above gwg guidelines ( e.g. , prepregnancy underweight and overweight / obese women ) and subsequent delivery of targeted interventions for these subgroups during early prenatal care should be emphasized . for both non - latina white and latina women in our study sample , maternal smoking status ( previous smoker prior to pregnancy ) was associated with lower odds of below - recommended gwg , which is consistent with previous research indicating that smoking during pregnancy is related to lower gwg and that smoking cessation is associated with greater gwg [ 2 , 3 , 32 , 33 ] . between 29% and 70% of women reportedly quit smoking upon becoming pregnant ; thus , health care provider attention to smoking history and smoking patterns during pregnancy , with particular focus given to previous or current smokers during early prenatal care , is important to optimize gwg throughout pregnancy . a larger proportion of sga infants were born to latina women than non - latina white women , with the prevalence of sga ( 12.2% ) and preterm delivery ( 13.0% ) among latina women in our sample slightly higher than national estimates for latina women ( 9%-10% ) . unlike previous reports , we did not find an association between gwg status at time of delivery and pregnancy length . in addition , we did not find ethnic differences in low or high birth weight , which is in contrast to prior data indicating that puerto rican latinas have some of the highest rates of low birth weight neonates and preterm births in the us . multiple factors not assessed in this retrospective cohort study ( e.g. , prior preterm births , gestational diabetes mellitus ) may contribute to and account for differences in birth outcomes observed in this study compared to previous studies . in addition , conventional measures of gwg may introduce bias when studying gwg - preterm birth associations . additional studies with larger , ethnically diverse samples are needed to elucidate predictors driving racial / ethnic disparities in birth weight outcomes . study strengths include the sample 's ethnic and socioeconomic diversity ( i.e. , white / latina women , public / commercial insurance , and hospital clinics / private provider ) and inclusion of women who delivered pre- and full - term ( previous studies have been limited to women who delivered full - term ) [ 2 , 7 ] . although no data were available on place of birth , most latinos in the region where the study was conducted are of puerto rican descent , a largely understudied population with considerable health disparities , including infant mortality .
study limitations include the retrospective study design and the use of existing medical record data ( with data gathered within the context of clinical activities rather than by trained research staff ) . however , all providers completed similar maternal and prenatal medical forms , which were routinely filed in the hospital medical record database prior to delivery . participants ' self - reported prepregnancy weight ( as opposed to prepregnancy weight measured in a clinical or research setting ) was used to determine gwg status . however , the iom guidelines are based on studies that similarly use self - reported prepregnancy weight , and self - reported prepregnancy weight has been found to be highly correlated with clinically measured weight [ 40 - 42 ] . information available on smoking patterns during pregnancy ( i.e. , number of cigarettes , quit date ) was restricted . furthermore , smoking status data were collected in the context of the first prenatal appointment and may be subject to social desirability bias and may only reflect smoking status at the first prenatal visit . however , the prevalence of smoking in our sample ( 19% ) is consistent with smoking rates among white [ 2 , 3 , 32 , 33 ] and latina pregnant women in previous studies . presence of gestational diabetes , shown to be associated with birth weight [ 44 , 45 ] , was not controlled for . women without a first - trimester prenatal visit or with missing prepregnancy bmi data were excluded from analysis ; as systematic biases might exist between women who were or were not missing these data , findings may not be representative of the larger population from which the study sample was drawn . study findings may not be generalizable to other ( non - puerto rican ) latino subgroups . lastly , the study was not adequately powered to examine ethnic differences in pregnancy outcomes by gwg status ; thus , results of gwg associated with outcomes of interest by ethnicity are exploratory . understanding factors that contribute to inadequate and excessive gwg is critical to the development of interventions that seek to optimize recommended gwg . additional research on racial / ethnic differences in the influence of early gwg on overall gwg and other maternal and neonatal outcomes is needed to guide the development of interventions tailored for socioeconomically and ethnically diverse populations .
this study examined racial / ethnic differences in gestational weight gain ( gwg ) predictors and the association of first - trimester gwg with overall gwg among 271 white women and 300 latina women . rates of within - guideline gwg were higher among latinas than among whites ( 28.7% versus 24.4% , p = 0.016 ) . adjusted odds of above - guideline gwg were higher among prepregnancy overweight ( or = 3.4 , ci = 1.8 - 6.5 ) and obese ( or = 4.5 , ci = 2.3 - 9.0 ) women than among healthy weight women and among women with above - guideline first - trimester gwg than among those with within - guideline first - trimester gwg ( or = 4.9 , ci = 2.8 - 8.8 ) . gwg was positively associated with neonate birth size ( p < 0.001 ) . interventions targeting prepregnancy overweight or obese women and those with excessive first - trimester gwg are needed .
1. Introduction 2. Methods 3. Results 4. Discussion
PMC3698591
the lack of practical criteria to manage the optimal degree of wetness on the dentin substrate complicates wet bonding for clinicians . as a result , it is applied differently among operators and across manufacturers ' instructions , so that over - wetting or over - drying may occur instead of ideal moisture on the dentin . excess water can not be easily evaporated due to its low vapor pressure and the presence of hema in many adhesives , which retains water at the bonding site ; phase separation of resin components may then take place . this residual water is capable of diluting the concentration of adhesive monomers , preventing monomer diffusion to the full depth of the demineralized dentin and adequate polymerization of the monomers inside the collagen network . subsequently , the formed porous hybrid layer is more susceptible to water degradation over time . the adverse effects of water on the adhesive interface can be minimized when the etched dentin is kept in a dry state . dry bonding may be obtained during air drying of the cavity after rinsing by the clinician to ensure the frosted etched appearance of the enamel . however , air drying the demineralized dentin leads to a collapsed collagen network . as a result , the formed sub - optimal and porous hybrid layer may account for immediate low bond strength and long - term degradation of the resin - dentin bond . although it seems that dried etched dentin is easily obtained , in clinical situations , air blowing of dentin that is free of smear layer and smear plugs for water evaporation leads to increased outward fluid flow from the pulp . different studies showed the effectiveness of oxalate solution in blocking the orifices of dentinal tubules . therefore , it can reduce the outward flow of dentinal fluid during the bonding procedure . in this way , simultaneous tubular occlusion and possible maintenance of collagen matrix stability in the absence of water can be a beneficial approach to provide high and stable dentin bonding . ethylene - diamine tetra acetic acid ( edta ) is a molecule containing four carboxylic acid groups that allow it to function as a chelating agent at neutral ph . some studies reported a favorable effect of edta - conditioning to provide sufficient dentin bond strength . it is capable of selectively removing hydroxyapatite , preserving the structural stability of the collagen matrix . this stability is attributed to the lack of alteration of the native fibrillar structure of the collagen while the mineral phase of the dentin is dissolved . hence , habelitz et al . suggested that the edta - conditioned dentin may be less affected by air drying due to the presence of unaltered collagen fibrils containing most of their intrafibrillar mineral . based on the above - mentioned points , the interfering effect of water on the bonding performance of the simplified etch - and - rinse ( one - bottle ) adhesives may be prevented by the combination of edta - conditioning and the occluding effect of oxalate desensitizers during dry bonding , without compromising bonding efficacy . therefore , the aim of this study was to evaluate whether this combination produces adhesive bond strength similar to that obtained using conventional wet bonding on acid - etched dentin . one - hundred and twenty extracted sound human third molars were used in the current study . the teeth were stored in 1% chloramine t solution for 2 weeks , and then in distilled water at 4 c before use .
after removing the roots , the midcoronal dentin surfaces were exposed by removing the occlusal enamel with a diamond saw ( letiz , 1600 , germany ) under running water . the flat dentin surfaces were polished with silicon carbide paper to standardize the smear layer . the specimens were randomly divided into 12 groups of 10 teeth each . in the first four groups , one - step plus ( os ) was used , and optibond solo plus ( op ) was applied in the other four groups . in the remaining four groups , adper single bond ( sb ) was applied . the bonding procedures were performed as follows : in the control groups 1 , 5 , and 9 ( wet / acid ) , after phosphoric acid etching for 15 s and rinsing , the dentin surfaces were gently air dried for 5 s while leaving the dentin moist . then , os , op , and sb were applied according to the manufacturer 's instructions , respectively [ table 1 : materials used in the current study ] . in the experimental groups 2 , 6 , and 10 ( wet / edta ) , the dentin surfaces were conditioned with 0.1 m edta solution ( ph 7.4 , merck co. , germany ) for 60 s instead of using the phosphoric acid etching . the remaining bonding procedures were performed as in the respective control groups . in the experimental groups 3 , 7 , and 11 ( dry / edta ) , edta conditioning and bonding procedures were performed similarly to the previous respective groups , with the exception of dry bonding . after rinsing , the conditioned surfaces were extensively air dried for 30 s with oil - free compressed air . in the control , wet / edta , and dry / edta groups , we had removed the pulp tissue prior to preparing the specimens for the bonding procedure . in the experimental groups 4 , 8 , and 12 ( dry / edta + ox ) , the bonding procedures were performed similarly to the previous respective groups , except that an ox ( bisblock , bisco ) was added to the bonding procedure . after edta conditioning and rinsing , ox was applied and left on the dentin surfaces for 30 s ; then , the surfaces were rinsed for 60 s and dry bonding was performed . after curing the adhesives for 20 s at 600 mw / cm2 with a light curing unit ( vip junior , bisco , schaumburg , il , usa ) , a resin composite ( z250 ) was placed on the cured adhesive using a cylindrical split mold with a height of 2.5 mm and surface diameter of 2 mm . two increments of 1 mm and 1.5 mm were applied and separately cured for 40 s. after 24 h water storage and thermocycling ( 1000 times ) , the bond strength test was performed . shear bond strength ( sbs ) was measured with a universal testing machine ( instron z020 , zwick , roell , germany ) . a knife - edge shearing rod at a cross head speed of 1 mm / min was applied to load the specimens until fracture , and bond strength in mpa was recorded . the data were analyzed using two - way analysis of variance ( anova ) and tukey 's honestly significant difference ( hsd ) post - hoc tests for pair - wise comparisons at a significance level of 0.05 . after testing , the fracture modes were evaluated under a stereomicroscope ( ziess ) at 10 x magnification and classified according to the predominant mode of fracture as adhesive , cohesive in dentin , cohesive in composite , and mixed ( a combination of adhesive and cohesive ) [ table 2 ] . the mean bond strength and standard deviations of the 12 groups are presented in table 2 , and the results of two - way anova are shown in table 3 . the use of edta instead of phosphoric acid did not alter sbs of the three used adhesives .
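for illustration , the sketch below ( not part of the original study ; the failure load , file name , and column names are hypothetical ) shows how a shear bond strength in mpa follows from the failure load and the 2 mm diameter bond area described above , and how a two - way anova with tukey hsd post - hoc comparisons of the kind described in the text could be run in python with statsmodels .

import math
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# bond area of the 2 mm diameter composite cylinder; sbs (MPa) = load (N) / area (mm^2)
area_mm2 = math.pi * (2.0 / 2) ** 2          # ~3.14 mm^2
print(45.0 / area_mm2)                       # e.g. a 45 N failure load -> ~14.3 MPa

# hypothetical table: one row per specimen with columns "adhesive" (os/op/sb),
# "condition" (the four bonding conditions) and "sbs" (MPa)
df = pd.read_csv("sbs_results.csv")

# two-way anova of adhesive x bonding condition
model = smf.ols("sbs ~ C(adhesive) * C(condition)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# tukey hsd pair-wise comparison of the four bonding conditions within one adhesive
sub = df[df["adhesive"] == "os"]
print(pairwise_tukeyhsd(sub["sbs"], sub["condition"], alpha=0.05))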
when dry bonding by edta , os showed significantly lower sbs than those of wet bonding by phosphoric acid or edta ( p < 0.0001 ) , while both op and sb revealed similar sbs in the three bonding conditions ( p = 0.91 ) . by adding the ox treatment to the dry bonding by edta , a significant reduction in sbs was observed for op ( p < 0.0001 ) , but not for os and sb ( p > 0.05 ) . sbs of os in dry bonding with or without ox was similar , being significantly lower than wet bonding with phosphoric acid etching ( p < 0.05 ) . only sb had comparable bond strength in the four bonding conditions ( p > 0.05 ) . the pair - wise comparisons of the four bonding conditions for each adhesive ( os and op ) , performed by tukey hsd tests , are summarized in table 4 . fracture analysis revealed that most of the fractures of groups 3 and 4 ( os in dry bonding with edta conditioning with or without ox ) and group 8 ( op with ox ) were adhesive mode . the fracture modes of the other groups are given in table 2 . in the current study , three one - bottle adhesives with different solvent content ( acetone and water / ethanol ) and ph were used under four bonding conditions . edta - conditioning and acid etching revealed similar bond strengths under wet bonding for these adhesives . the thin edta - demineralized collagen matrix contains intrafibrillar mineral , preserving its spongy and stable state , and hence may improve resin infiltration . the resultant more homogenous hybrid layer may be strong and could produce a high bond strength due to possible chemical interaction between acidic / functional monomers and calcium in the residual mineral within the collagen fibrils . based on the mentioned properties of edta - conditioned dentin , it was speculated that this dentin is less influenced by dehydration . in the current study , this hypothesis was supported when op and sb were bonded to dried edta - conditioned dentin , but not for os . acetone - based adhesives are more sensitive to an accurate wet bonding technique than ethanol - based ones and require greater surface wetness due to the high water - chasing effect of acetone . ethanol / water - based adhesives possess the capability of promoting re - expansion of the dried collapsed matrix during the infiltration of solvated resin monomers . however , a substantial decrease of bond strength to acid - etched dentin was reported for two ethanol / water based adhesives ( excite , op ) following air drying for 10 s. reis et al . suggested that fluid flowing from the retained moist pulp tissue during dry bonding may account for the insignificant reduction of bond strength of sb compared to wet bonding . in another study , a more severe dry bonding condition was applied and bond strength was significantly decreased . in the current study , the pulp tissue was removed prior to bonding procedures ( except for groups with ox ) , similar to the latter study , to eliminate the interfering effect of water permeation within the tubules ; however , the bond strength of sb and op was not altered under dry bonding on edta - conditioned dentin . this finding might indicate more stability of the dried dentin following edta - conditioning compared to acid etching . in clinical situations , controlling the level of moisture may practically be more difficult , and non - uniform wetness may exist on different regions of the dentin surface .
furthermore , the re - wetting capability of naturally moist dentin may mitigate the effects of dry and wet bonding . hence , the application of ox may be beneficial to occlude dentinal tubules , optimizing different wetness conditions ; in the current study , its compatibility with the adhesives used in dry bonding with edta was evaluated . the reduced permeability of acid - etched dentin was the result of precipitating calcium oxalate crystals below the demineralized dentin . thus , ox did not compromise the bond strength of relatively neutral etch - and - rinse adhesives ( such as sb and one - step ) . however , in two recent studies , ox application decreased the bond strength of sb , one - step and scotchbond multi - purpose , but it had no effect on prime and bond nt . according to the results of the current study , ox had an adverse effect on the bonding efficacy of op that may be due to the low ph of this adhesive . the higher number of adhesive fractures together with a decrease in bond strength in groups 3 , 4 and 8 may have been caused by the lower bonding effectiveness of these bonding conditions . therefore , only sb , with a relatively high ph and an ethanol / water content , exhibited compatibility with the combination of ox treatment on the edta - conditioned dentin under dry bonding . dry bonding associated with ox pre - treatment may enhance the removal of solvents and residual water after the application of the adhesive , and the formation of resin tags . however , dry bonding may lead to a collapsed collagen matrix in the etched dentin . based on our results , edta - conditioning possibly improves resin penetration in the dry bonding condition for the ethanol / water adhesive , resulting in less fibril exposure . nevertheless , scanning electron microscopic evaluations are needed to explain these results and the actual interaction of different adhesive systems with ox - treated dentin in dry and wet conditions . moreover , edta treatment may extract and inactivate matrix metalloproteinases involved in degradation of the exposed collagen . these long - term effects can be studied in laboratory tests that better simulate in vivo situations , such as the use of positive pulpal pressure and the presence of fluid flow . nevertheless , the need for separate enamel etching , the difficulty of confining the edta solution to the dentin surfaces , and the relatively long time ( 60 s ) needed for its application might be disadvantages in clinical practice . based on the results of this in vitro study , among the three adhesives used , only an ethanol / water based adhesive with a relatively low acidity could benefit from the association of ox pretreatment and edta - conditioning , in a relatively severe dry bonding technique .
background : elimination of water entrapment in the hybrid layer during the bonding procedure would increase bonding durability . aims : this study evaluated the effect of oxalate desensitizer ( ox ) pretreatment on the bond strength of three one - bottle adhesives to ethylene - diamine tetra acetic acid ( edta ) - conditioned dentin under dry bonding . materials and methods : three adhesive systems , one - step plus ( os ) , optibond solo plus ( op ) and adper single bond ( sb ) , were bonded on dentin surfaces under four bonding conditions : ( 1 ) wet bonding on acid - etched dentin , ( 2 ) wet bonding on edta - conditioned dentin , ( 3 ) dry bonding on edta - conditioned dentin , ( 4 ) dry bonding associated with ox on the edta - conditioned dentin . after storage and thermocycling , the shear bond strength test was performed . data were analyzed using two - way analysis of variance and tukey tests . results : wet bonding with edta or acid etching showed similar bond strength for the test adhesives . dry bonding with edta significantly decreased the bond strength of os , but it had no effect on the bonding of op and sb . ox application in the fourth bonding condition , in comparison with the third condition , had a negative effect on the bond strength of op , but no influence on os and sb . conclusions : the use of an ox on edta - conditioned dentin compromised the bonding efficacy of os and op under dry bonding but was compatible with sb .
INTRODUCTION MATERIALS AND METHODS RESULTS DISCUSSION CONCLUSIONS
PMC4629271
perhaps because of ingrained cultural beliefs about the infallibility of computation , people show a level of trust in computed outputs that is completely at odds with the reality that nearly zero provably error - free computer programs have ever been written 2 , 3 . it has been estimated that the industry average rate of programming errors is about 15 - 50 errors per 1000 lines of delivered code . that estimate describes the work of professional software engineers , not of the graduate students who write most scientific data analysis programs , usually without the benefit of training in software engineering and testing 5 , 6 . the recent increase in attention to such training is a welcome and essential development 7 - 11 . nonetheless , even the most careful software engineering practices in industry rarely achieve an error rate better than 1 per 1000 lines . since software programs commonly have many thousands of lines of code ( table 1 ) , it follows that many defects remain in delivered code even after all testing and debugging is complete . defects occur not only in the top - level program being run but also in compilers , system libraries , and even firmware and hardware , and errors in such underlying components are extremely difficult to detect . of course , not every error in a program will affect the outcome of a specific analysis . for a simple single - purpose program , it is entirely possible that every line executes on every run . in general , however , the code path taken for a given run of a program executes only a subset of the lines in it , because there may be command - line options that enable or disable certain features , blocks of code that execute conditionally depending on the input data , etc . furthermore , even if an erroneous line executes , it may not in fact manifest the error ( i.e. , it may give the correct output for some inputs but not others ) . finally : many errors may cause a program to simply crash or to report an obviously implausible result , but we are really only concerned with errors that propagate downstream and are reported . in combination , then , we can estimate the number of errors that actually affect the result of a single run of a program , as follows : number of errors per program execution = total lines of code ( loc ) * proportion executed * probability of error per line * probability that the error meaningfully affects the result * probability that an erroneous result appears plausible to the scientist . for these purposes , using a formula to compute a value in excel counts as a line of code , and a spreadsheet as a whole counts as a program , so many scientists who may not consider themselves coders may still suffer from bugs . all of these values may vary widely depending on the field and the source of the software . for a typical analysis in bioinformatics , i 'll speculate at some plausible values : 100,000 total loc ( neglecting trusted components such as the linux kernel ) ; 20% executed ; 10 errors per 1000 lines ; 10% chance that a given error meaningfully changes the outcome ; and 10% chance that a consequent erroneous result is plausible . so , we expect that two errors changed the output of this program run , so the probability of a wrong output is effectively 100% .
let 's imagine a more optimistic scenario , in which we write a simple , short program , and we go to great lengths to test and debug it . in such a case , any output that is produced is in fact more likely to be plausible , because bugs producing implausible outputs are more likely to have been eliminated in testing . here i 'll speculate : 1 error per 1000 lines ; 10% chance that a given error meaningfully changes the outcome ; and 50% chance that a consequent erroneous result is plausible . here the probability of a wrong output is 5% . the factors going into the above estimates are rank speculation , and the conclusion varies widely depending on the guessed values . measuring such values rigorously in different contexts would be valuable but also tremendously difficult . regardless , it is sobering that some plausible values indicate total wrongness all the time , and that even conservative values suggest that an appreciable proportion of results are erroneous due to software defects above and beyond those that are erroneous for more widely appreciated reasons . a response to concerns about software quality that i have heard frequently , particularly from wet - lab biologists , is that errors may occur but have little impact on the outcome . this may be because only a few data points are affected , or because values are altered by a small amount ( so the error is in the noise ) . the above estimates account for this by including terms for whether an error meaningfully changes the result and whether the erroneous outcome is plausible . nonetheless , in the context of physical experiments , it is tempting to believe that small errors tend to reduce precision but have less effect on accuracy , i.e. , if the concentration of some reagent is a bit off then the results will also be just a bit off , but not completely unrelated to the correct result .
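to make the arithmetic of the two scenarios explicit , the short python sketch below ( mine , not the author 's ) multiplies out the five factors ; the poisson step that converts an expected error count into a probability is an added assumption , since the text simply treats two expected errors as near - certain wrongness .

import math

# the five factors from the estimate above, with the speculative
# "typical bioinformatics analysis" values quoted in the text
total_loc = 100_000
prop_executed = 0.20
errors_per_line = 10 / 1000
p_meaningful = 0.10
p_plausible = 0.10

expected_errors = (total_loc * prop_executed * errors_per_line
                   * p_meaningful * p_plausible)
print(expected_errors)                            # -> 2.0 errors expected to affect the output

# treating errors as independent, chance that at least one slips through (poisson approximation)
print(round(1 - math.exp(-expected_errors), 3))   # -> ~0.865, i.e. the output is almost surely wrong

# the optimistic small-program scenario: assume a fully executed 1000-line program with
# 1 error per 1000 lines, 10% meaningful, 50% plausible
expected_small = 1000 * 1.0 * (1 / 1000) * 0.10 * 0.50
print(expected_small)                             # -> 0.05, i.e. roughly a 5% chance of a wrong output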
we can not apply our physical intuitions , because software is profoundly brittle : small bugs commonly have unbounded error propagation : a sign error , a missing semicolon , an off - by - one error in matching up two columns of data , etc . it is rare that a software bug would alter a small proportion of the data by a small amount . more likely , it systematically alters every data point , or occurs in some downstream aggregate step with effectively global consequences . in general , software errors produce outcomes that are inaccurate , not merely imprecise . bugs that produce program crashes or completely implausible results are more likely to be discovered during development , before a program becomes delivered code ( the state of code on which the above errors - per - line estimates are based ) . consequently , published scientific code often has the property that nearly every possible output is plausible . when the code is a black box , situations such as these may easily produce outputs that are simply accepted at face value : an indexing off - by - one error associates the wrong pairs of x 's and y 's ; a correlation is found between two variables where in fact none exists , or vice versa ; a sequence aligner reports the best match to a sequence in a genome , but actually provides a lower - scoring match ; a protein structure produced from x - ray crystallography is wrong , but it still looks like a protein ; a classifier reports that only 60% of the data points are classifiable , when in fact 90% of the points should have been classified ( and worse , there is a bias in which points were classified , so those 60% are not representative ) ; or all measured values are multiplied by a constant factor , but remain within a reasonable range . a software error may produce a spurious result that appears significant , or may mask a significant result . if the error occurs early in an analysis pipeline , then it may be considered a form of measurement error ( i.e. , if it systematically or randomly alters the values of individual measurements ) , and so may be taken into account by common statistical methods . however : typically the computed portion of a study comes after data collection , so its contribution to wrongness may easily be independent of sample size , replication of earlier steps , and other techniques for improving significance . for instance , a software error may occur near the end of the pipeline , e.g. in the computation of a significance value or of other statistics , or in the preparation of summary tables and plots . the diversity of the types and magnitudes of errors that may occur 16 - 19 makes it difficult to make a general statement about the effects of such errors on apparent significance .
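the brittleness described above is easy to demonstrate : the toy python example below ( not from the article ) shows how a single off - by - one indexing bug silently replaces a genuine correlation with a plausible - looking null result .

import numpy as np

rng = np.random.default_rng(0)

# two genuinely correlated measurements, e.g. paired x and y readings
x = rng.normal(size=200)
y = 0.8 * x + rng.normal(scale=0.6, size=200)

# correct pairing: sample correlation close to the true value of ~0.8
print(round(np.corrcoef(x, y)[0, 1], 2))

# off-by-one bug: each y is matched with the previous sample's x
# the reported correlation is near zero, yet still a perfectly plausible number
print(round(np.corrcoef(x[:-1], y[1:])[0, 1], 2))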
however , it seems clear that , a substantial proportion of the time ( based on the above scenarios , anywhere from 5% to 100% ) , a result is simply wrong , rendering moot any claims about its significance . all hope is not lost ; we must simply take the opportunity to use technology to bring about a new era of collaborative , reproducible science 20 - 22 . open availability of all data and source code used to produce scientific results is an incontestable foundation 23 - 27 . beyond that , we must redouble our commitment to replicating and reproducing results , and in particular we must insist that a result can be trusted only when it has been observed on multiple occasions using completely different software packages and methods . this in turn requires a flexible and open system for describing and sharing computational workflows . projects such as galaxy , kepler , and taverna have made inroads towards this goal , but much more work is needed to provide widespread access to comprehensive provenance of computational results . perhaps ironically , a shared workflow system must itself qualify as a trusted component , like the linux kernel , in order to provide a neutral platform for comparisons , and so must be held to the very highest standards of software quality . crucially , any shared workflow system must be widely used to be effective , and gaining adoption is more a sociological and economic problem than a technical one .
errors in scientific results due to software bugs are not limited to a few high - profile cases that lead to retractions and are widely reported . here i estimate that in fact most scientific results are probably wrong if data have passed through a computer , and that these errors may remain largely undetected . the opportunities for both subtle and profound errors in software and data management are boundless , yet they remain surprisingly underappreciated .
Computational results are particularly prone to misplaced trust How frequently are published results wrong due to software bugs? Scenario 1: A typical medium-scale bioinformatics analysis Scenario 2: A small focused analysis, rigorously executed Software is exceptionally brittle Many erroneous results are plausible Software errors and statistical significance are orthogonal issues What can be done?
PMC3461729
each year , 13 million or nearly one quarter of all deaths worldwide result from preventable environmental causes relating mainly to water , sanitation and hygiene ; indoor and outdoor pollution ; harmful use of chemicals such as pesticides ; and climate change 1 - 6 . these risk factors , which are both avoidable and preventable , play a role in more than 80 per cent of diseases that are routinely reported to the world health organization . children , especially from poor families , are most vulnerable to illness and death due to these diseases . however , simple and cost - effective interventions are available , which , if implemented early and effectively , can prevent most of these deaths . the earthquake followed by a tsunami at japan 's fukushima daiichi on march 11 , 2011 is considered one of the greatest nuclear and environmental disasters in human history . the 2010 floods , the worst in the history of pakistan , killed more than 1500 people . during 2010 - 2011 , unprecedented floods also took a heavy toll of life in ladakh , northern india , as well as in the cities of melbourne , australia and rio de janeiro , brazil . the long - standing health problems associated with ground water contamination with arsenic and fluoride in parts of india are examples of the health consequences associated with environmental risk factors 7 . factors including globalization , rapid industrialization , urbanization , unplanned and unregulated development activities , increase in transport , over - dependence on pesticides in agriculture , and climate change indicate that the negative health consequences associated with environmental causes are likely to worsen in the future , unless action is taken urgently . recent studies and systematic reviews indicate that environmental factors are responsible for an estimated 24 per cent of the global burden of disease in terms of healthy life years lost and 23 per cent of all deaths 2 . while 25 per cent of all deaths in developing countries are attributable to environmental factors , only 17 per cent of deaths in the developed countries are due to such factors . children are the worst sufferers of the adverse impact of environmental risks , as an estimated 24 per cent of all deaths in children under 15 are due to diarrhoeal diseases , malaria and respiratory diseases , all of which are environmentally related 2 . it is also evident that much of the disease burden is attributable to a few critical risk factors ( table : environmental risk factors and the diseases contributed ; fig . : diseases and the fraction attributable to environmental risk factors ) . these include unsafe water and sanitation , exposure to indoor smoke from cooking fuel , outdoor air pollution , exposure to chemicals such as arsenic , and climate change . unsafe water , sanitation and poor hygiene contribute to a large number of deaths , estimated at about 0.45 million in india alone . while good progress has been made with respect to drinking water availability , the situation in many countries of asia relating to sanitation continues to remain bad . currently , 2.5 billion people lack sanitation facilities , with coverage being poorest in south asia ; as many as 629 million people in india are without sanitary facilities . according to unicef , 67 per cent of the rural population in india still practice open defecation 8 , 9 .
among some countries of the south - east asia region , namely bangladesh , bhutan , sri lanka , and india , the proportions of population without access to sanitation during 2010 were 44 , 56 , 8 and 66 per cent , respectively 3 . the progress in the region as a whole has been slow - from 34 per cent with sanitary facilities in 2000 to 43 per cent in 2010 . given this situation , it is clear that mdg 7 relating to water and sanitation is unlikely to be met by 2015 . the link with disease is clear , as unsafe water and sanitation contribute to 94 per cent of the diarrhoeal disease burden . unfortunately , drinking water and sanitation have not received the kind of political commitment these deserve , although the benefits can go beyond health and economic development and enhance personal and national dignity . assigning the highest priority , including allocation of appropriate resources , and fully integrating water , sanitation , and hygiene in disease reduction strategies is therefore an important priority and an essential prerequisite for national development . among the south - east asia region countries , only nepal has a higher proportion ( 69% ) of population without access to improved sanitation than india 3 . each year , an estimated 42 per cent of lower respiratory tract infections or pneumonia are associated with indoor and outdoor pollution , including second - hand smoke 10 . long - term exposure to suspended particulate matter from indoor burning of solid fuel such as wood is a major cause of respiratory diseases such as pneumonia , asthma , and chronic obstructive lung disease ( copd ) , especially among children 11 - 13 . according to who , outdoor air pollution contributes to 800,000 deaths each year globally , and about 60 per cent of them are in asia , caused by domestic consumption of fuel , motor vehicles especially those running on diesel , industries and burning of all kinds of waste 2 . these factors together with second - hand smoking are leading to ischaemic heart disease , acute respiratory infections , asthma , and lung cancer . water resource development can also alter disease patterns : for example , construction of the aswan dam in egypt led to an increase in malaria and schistosomiasis . it is estimated that about 42 per cent of malaria occurring in asia and africa is attributed to environmental factors such as land use , deforestation and water resource management 16 . similarly , growing rice crops , pig rearing , vector breeding and exposure to unsafe water all play an important role in transmission of acute encephalitis syndrome , which in 2011 alone accounted for 6800 cases and 820 deaths , mostly children below 15 yr , with the epicenter in the state of uttar pradesh , india ( dr a.k . environmental factors also contribute greatly to the impending pandemic of dengue as well as to transmission of schistosomiasis in many countries including china and indonesia . cancer , too , has environmental determinants : of the 12.7 million cases each year , 19 per cent are estimated to be attributable to the environment 17 . the second most common cancer - stomach cancer due to helicobacter pylori infection - is associated with poor sanitation and the overcrowded conditions where the poor live . recently , many reports indicate an increasing incidence of cancer in the agricultural heartland of punjab 18 , 19 , and suggestions have been made of a possible link with environmental causes , such as use of pesticides , which have been found both in water and soil . exposure to indoor smoke from solid fuel or tobacco smoke or smog can trigger an asthma attack , especially during winter 20 .
in many cities in india and elsewhere this has resulted in a major increase in the number of children consulting health care workers for asthma . in one study carried out in new delhi , india , hospital emergency room visits for asthma and chronic obstructive lung disease increased by 21 and 24 per cent , respectively , due to high levels of ambient air pollution21 . with regard to the prevalence of copd , which is clearly linked with environmental factors , the rates seem to vary between countries according to the level of environmental risk factors . more than one - third of deaths due to copd are attributable to environmental causes . reports on the hazard of ground water pollution in india date back to 1945 ( 22 , 23 ) . today , 30 per cent of urban and 90 per cent of rural households in india depend on untreated surface or ground water , and this causes an enormous adverse health impact in many areas24 . two examples of a serious health situation due to contamination of ground water used for drinking purposes are of particular concern24 . more than 60 million people living across 20 states of india are exposed to fluoride contamination ( more than 1.5 mg / l ) and are at risk of serious health effects , ranging from dental fluorosis to crippling skeletal fluorosis , both conditions being irreversible . bone deformity results from an excess fluoride content in water , which prevents absorption of calcium , essential for bone development . a high concentration of arsenic in ground water is a major public health problem in west bengal , affecting nearly 50 million people . in bangladesh , the arsenic problem is considered as a public health emergency - the largest poisoning of a population in history25 . arsenic contamination has been detected in 59 of the 64 districts and 249 of the 463 sub - districts in bangladesh . estimates suggest that a quarter of the 6 - 8 million tube wells in bangladesh may contain arsenic levels of more than 50 ppb ( 0.05 mg / l ) , and that between 30 - 40 million people are at risk through exposure to arsenic in drinking water . arsenic can cause severe and irreversible health effects , even at low levels of exposure and over a prolonged period of time . the symptoms can start in childhood and , with continued exposure , get increasingly worse . besides skin diseases such as hyperkeratosis , prolonged arsenic exposure can also lead to death due to cancer . in addition , environmental conditions make south asia prone to disasters and public health emergencies such as floods and earthquakes , which cause much suffering and economic loss . the situation is likely to get worse due to climate change , and the health impact is likely to be serious for poor people living in the developing world , especially in asia and africa2830 . climate change will lead to an increase in vector - borne and water - borne diseases , heat stroke , asthma and cardiovascular diseases , and will threaten food security by causing more floods and droughts . while reducing greenhouse gas emissions is an individual responsibility , urgent action on adaptation is necessary , by strengthening surveillance and response capacities in the countries to enable them to be resilient in coping with the adverse impact of climate change . many factors determine the health of a population ; besides the environment , these include socio - economic factors , prevailing customs and traditions , as well as programmes and policies , and the access to and use of health services by the affected communities . the paucity of such information at the national level remains a major constraint for advocacy .
to fill this information gap , environment and health impact assessments can help in systematically identifying the policies , programmes or developmental activities that are likely to have a major impact on the health of the local population . such information can make a critical input for deciding on the right policies and projects . assessment is a multi - disciplinary approach in which a combination of methods is used to obtain qualitative and quantitative data preferably using a check list . such an assessment can also identify the risk factors that can lead to health problems in relation to such activities as construction of buildings , transport systems , housing , energy , industry , urbanization , water , nutrition , etc . the data so obtained can help guide decision makers while planning and implementing such policies , programmes and development projects . such information can also help all relevant sectors and local bodies to ( i ) understand health consequences of various projects , ( ii ) keep health in mind while planning and implementing new projects and agree to the concept of health in all policies as a guiding principle , and ( iii ) finally ensure that the health of the local population is safeguarded while engaging in a new project or development activity . it is clear that environmental factors will continue to have an impact well into the future and , in fact , the situation is likely to get worse . a few strategic approaches are highlighted below that can help mitigate the health problems arising from environmental causes and to meet the challenge of health and environment : ( i ) developing an evidence - base for action : there is , at present a paucity of information on the environmental health impact in the countries , the transmission pathways , and on the populations at risk . more detailed and precise data on the health impact relating to water , air , food , and climate which could help in setting priorities and developing appropriate national policies are needed . more focused research is needed to understand the environmental factors , and their impact on economic development and on the daily lives of the people . a national database on health and environment can help establish and monitor the relationship between the distribution and trends of various diseases associated with environmental risk factors , the areas which are vulnerable and where risks are high , and the populations having the greatest need for environmental and health interventions . a mechanism for collecting and sharing information on environment and health and on country experiences could be useful . best practices in india such as use of plastics for road construction work , and levying green tax on vehicles entering manali and using it for environmental protection in himachal pradesh , provision of gas cylinders to populations in uttarakhand so that they do not have to go to the forest for firewood and thereby protecting the forest cover , solar energy expansion in gujarat , total sanitation programme in states such as haryana , sulabh experience in technical innovation in low cost toilets , ban on gutka and pan masala by madhya pradesh and seven other states in india , constructing ecological latrines in nepal and many such examples could be shared through an information clearing house . 
( ii ) strengthening national environmental health policy , strategy and infrastructure : addressing issues relating to health and the environment requires a comprehensive and inter - sectoral approach through preparation and implementation of a national environment and health action plan ( nehap ) . supported with data from the environment and health impact assessment , a working group with representatives from the environment , health and other sectors can identify priorities which can then be adopted by the ministries of health and environment . the plan , along with allocation of adequate human and financial resources , if implemented seriously and in a sustained manner , can go a long way towards mitigating the problems emanating from the interaction between the environment and health . a national advisory board on environment and health can help advise on and periodically monitor implementation of the plan . strengthening physical infrastructure , such as provision of a safe drinking water supply , functioning sewage treatment systems , availability of non - polluting fuel for motor vehicles , clean cookstoves , biogas and solid waste management , is a responsibility that national and local governments must take seriously and urgently . proper allocation of resources for such services , if demanded by the general population , can become a priority for decision makers . ( iii ) sustaining inter - sectoral co - ordination and partnerships : most environmental risk factors lie outside the health sector ; action to protect human health therefore cuts across various sectors , such as the government ministries of environment , agriculture , transport , energy , urban development , water resources and rural development , as well as the private sector . in many countries , a broad - based , high - level national steering committee , with representatives from the relevant government ministries , civil society including non - governmental organizations and the private sector , chaired at the highest level of government and meeting at regular intervals , could help mobilize all sectors and ensure a co - ordinated implementation of the nehap . many programmes are presently underway that deal directly or indirectly with health and environment . there is a need to bring synergy among these programmes , such as diarrhoeal disease control , water and sanitation , non - communicable diseases , etc . an overarching mechanism for functional collaboration among various programmes could assist in joint planning , decision making on priorities , and deciding which activities to monitor . ( iv ) augmenting public participation and social mobilization : protecting the environment is every citizen 's responsibility . to keep the environment clean now and for future generations , it is necessary to enlist support from the public to safeguard fresh water sources , observe good sanitary practices and personal hygiene , and discourage all actions that harm the environment . the media and community - based organizations have an important role to play in creating public awareness in both urban and rural areas . while the former can reach a large section of the population with health messages using electronic or print media , community - based organizations can use an interpersonal approach and facilitate behaviour change .
a social movement is needed to discourage traditional practices such as open defecation , throwing garbage including plastic bottles and bags on the road , and burning of all kinds of waste , and to promote practices such as hand washing and personal hygiene , reducing , reusing and recycling items such as paper , using only eco - friendly and biodegradable materials , using public transport , planting more trees , and avoiding second - hand smoke . ( v ) the stewardship role of health and capacity building : the health sector has a critical role in advocacy and in mobilizing and supporting other sectors to contribute in the area of health and environment . in order to do so , the leadership skills of health professionals must be built up so that they can negotiate with other relevant sectors to play their role in protecting the environment and health . the health sector could also take the lead in carrying out health impact assessments and in advising other sectors in developing policies that protect human health . in addition , health professionals , civil society and other stakeholders need to be periodically re - oriented on environmental health issues and priorities . the environment has a major impact on health , and investing in environmental health is certainly a good investment . rapid urbanization , industrialization , globalization and an increasing population are putting further stress on the environment . if strategic actions are not taken urgently by all sectors , the problem is likely to worsen , thereby impacting human health directly . given that the environment is closely linked with each of the eight mdgs , without priority being assigned to the interaction between environment and health , it will be a challenge to achieve the mdgs . the future of the planet now rests solely on what we decide and do today .
a substantial burden of communicable and non - communicable diseases in the developing countries is attributable to environmental risk factors . who estimates that the environmental factors are responsible for an estimated 24 per cent of the global burden of disease in terms of healthy life years lost and 23 per cent of all deaths ; children being the worst sufferers . given that the environment is linked with most of the millennium development goals ( mdgs ) , without proper attention to the environmental risk factors and their management , it will be difficult to achieve many mdgs by 2015 . the impact of environmental degradation on health may continue well into the future and the situation in fact , is likely to get worse . in order to address this challenge , two facts are worth noting . first , that much of the environmental disease burden is attributable to a few critical risk factors which include unsafe water and sanitation , exposure to indoor smoke from cooking fuel , outdoor air pollution , exposure to chemicals such as arsenic , and climate change . second , that environment and health aspects must become , as a matter of urgency , a national priority , both in terms of policy and resources allocation . to meet the challenge of health and environment now and in the future , the following strategic approaches must be considered which include conducting environmental and health impact assessments ; strengthening national environmental health policy and infrastructure ; fostering inter - sectoral co - ordination and partnerships ; mobilizing public participation ; and enhancing the leadership role of health in advocacy , stewardship and capacity building .
Environment as a major determinant of health Burden of communicable and non-communicable diseases attributable to environmental risks Specific examples of environment-related health crises Environment and health impact assessment: a key for policy and programme development Protecting health and preventing disease through healthy environments Conclusions
PMC4832082
congenital anomalies of the urachal remnant range from a diverticulum to a sinus or a patent urachal remnant extending from the bladder up to the umbilical fossa . the latter anomaly often induces inflammation along the tract and/or urine leakage from the umbilicus . the principal treatment of an urachal remnant is the complete resection of the whole tract . this requires a long midline skin incision in the lower abdomen , which inevitably causes the major cosmetic disadvantage of conspicuous scar formation . to alleviate this drawback , laparoscopic excision of the urachal remnant was first demonstrated in 1992 by neufang et al . , and since then several trials of laparoscopic surgery to correct the urachal anomaly have been reported . however , the techniques , including port placement arrangements and division and suture of the bladder , have not yet been standardized . furthermore , the method of closing the bladder opening after resection of the remnant tract is still controversial . some authors maintain the usefulness of a stapler under laparoscopic view to close the bladder apex , a technique which contradicts the opinion of urologists who argue that absorbable sutures should be used to minimize the risk of urolithiasis . here , we review our experience of excising the urachal remnant using abdominal wall - lift laparoscopy . this method provides for the relatively free use of conventional instruments and techniques and eliminates pneumoperitoneum - or co2 - related complications . for optimal incision and closure of the bladder apex , an additional suprapubic incision was employed . a 21-year - old previously healthy woman was referred to our hospital with complaints of periumbilical discharge and pain on urination . she had a lean physique with a body mass index of 17 [ 42.6 kg / ( 1.58 m )² ] . a physical examination found purulent discharge from the bottom of the umbilicus and a reddish tinge to the surrounding skin . abdominal computed tomography ( ct ) and magnetic resonance imaging ( mri ) revealed an abscess in the umbilical region that was connected to the bladder via a long band , in part via a long tube - like structure ( fig . ) . after treatment with antibiotics and anti - inflammatory drugs , an elective laparoscopic surgery was performed . under general anesthesia , the patient was placed in a leg - open position and a transurethral foley catheter was inserted . the surgeon and camera surgeon stood on the left side of the patient with the monitor in a caudal position relative to the patient . through a 15 mm infra - umbilical incision , the subcutaneous tissue and the anterior layer of the rectus fascia were dissected in a t - shape . the urachal remnant was then continuously dissected distally as far as possible in the preperitoneal space . a small - sized lap protector ( hakko , nagano , japan ) was inserted through the umbilical incision into the preperitoneal space . for the abdominal wall - lift , two wires were placed subcutaneously and pulled upward , 2 cm below the umbilicus and halfway between the umbilicus and the pubis , respectively ( fig . 2 ) . a rigid , straight - viewing laparoscope was inserted via the lap protector . the urachal remnant was dissected toward the bladder mainly with a bipolar sealing device ( ligasure ; covidien , mn ) . the dissection was mostly performed in the preperitoneal space and sometimes in the abdominal cavity . a 6 cm pfannenstiel incision was added 2 cm above the pubis to gain access to the junction of the urachal remnant with the bladder .
after the bladder was filled with 300 ml of saline through the foley catheter , a bladder cuff including the urachal insertion was excised along with the whole urachal sinus . the opening at the bladder apex was closed with absorbable sutures ( 4 - 0 vicryl ; johnson & johnson , nj ) under direct vision . the peritoneal defects were closed with several running sutures under a laparoscopic view from the umbilical incision ( fig . 3 ) . seprafilm ( kaken pharmaceutical , tokyo , japan ) was placed under the sutured peritoneum to prevent adhesion . the patient had no complaints of symptoms 18 months postoperatively and was satisfied with the cosmetic results . the umbilical - incision scar was hardly visible and the pfannenstiel incision was concealed by regrowth of pubic hair . the lumen near the umbilicus was covered with stratified squamous cells . inflammatory cell infiltration was mild and no abscess formation was found . the urachal remnant is a rare congenital anomaly with an incidence of 1:300,000 in infants and 1:5000 in adults . infection can occur as a common complication of the urachal remnant , and urachal carcinomas have also been reported .
until recently , excision of the urachal remnant was performed by a laparotomy with a long skin incision from the umbilicus to the suprapubic area . after the description of a laparoscopic resection of the urachal remnant by neufang et al . in 1992 , several more reports on laparoscopic techniques have been published . a laparoscopic resection of the urachal remnant has been suggested to be technically feasible and minimally invasive . it has also been suggested that a laparoscopic procedure provides better cosmesis , thus contributing to the quality of life of young female patients in particular . according to earlier reports , laparoscopic management of the urachal remnant seems to be safe and relatively easy , but attention should be paid to the following points . first , various port placement arrangements have been proposed . before the advent of single - port laparoscopic surgery , a camera port and other trocars were placed away from the umbilicus , and a 30° oblique laparoscope was used to observe the lesion in the anterior abdominal wall . in the current case , we used abdominal wall - lift laparoscopy to obtain a good view of the preperitoneal space with a camera port at the umbilicus . in this way we could insert a straight - viewing laparoscope and some additional manipulating instruments without the need for additional trocars . second , an incomplete resection of the remnant may lead to recurrent infections . in this sense , complete excision is essential , although the edge of the bladder has an ill - defined border . furthermore , the technique chosen to close the bladder opening after resection of the remnant tract ( stapler or absorbable sutures ) is also important and has consequences for possible urolithiasis . for precise detection and division of the bladder junction followed by meticulous suturing of the bladder opening , we suggest that surgeons make an additional incision above the pubis , as described in the present case . the pfannenstiel incision that we used provides a good view of the operative field as well as an excellent cosmetic outcome . third , peritoneal defects can occur during excision of the urachal remnant even with a preperitoneal approach , but direct sutures from the umbilical incision are easily placed under a laparoscopic view with an abdominal wall - lift . the combined use of a bio - absorptive adhesion - preventive film may be of some help to prevent intestinal obstruction after surgery . in conclusion , we propose that the abdominal wall - lift technique is a promising surgical option for patients with a symptomatic urachal remnant , in terms of optimal procedures and satisfactory cosmetic results . written informed consent was obtained from the patient for publication of this case and any accompanying images .
highlights : the abdominal wall - lift technique is a promising surgical option for patients with a symptomatic urachal remnant . the pfannenstiel incision provides a good view of the operative field as well as an excellent cosmetic outcome . urachal sinus excision using abdominal wall - lift laparoscopy seems to surpass the previously reported methods in terms of safety , cosmetics , and adequacy of surgical procedures .
Introduction Case report Materials and surgical technique Pathological findings Postoperative treatment Discussion Conclusion Conflict of interest Source of funding Ethical approval Consent Author contribution Supplementary data
PMC4649788
this research work was supported by the office of the national research council of thailand and the faculty of pharmacy , silpakorn university , nakhon pathom , thailand .
various products of moringa oleifera were analyzed to determine eleven heavy metals ( al , as , cd , cr , cu , fe , pb , mn , hg , ni , and zn ) using inductively coupled plasma - mass spectrometry . the products of m. oleifera were purchased in nakhon pathom , thailand . all products were digested with nitric acid solution before determining the concentrations of heavy metals . the recoveries of all heavy metals were found to be in the range of 99.89 - 103.05% . several criteria , such as linearity , limits of detection , limits of quantification , specificity , precision under repeatability conditions and intermediate precision ( reproducibility ) , were evaluated . results indicate that this method could be used in the laboratory for determination of eleven heavy metals in m. oleifera products with acceptable analytical performance . the results of analysis showed that the highest concentrations of as , cr , hg , and mn were found in tea leaves , while the highest concentrations of al , cd , cu , fe , ni , pb , and zn were found in leaf capsules . continuous monitoring of heavy metals in m. oleifera products is crucial for consumer health .
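the validation figures quoted above ( per cent recovery from spiked samples and detection / quantification limits derived from calibration data ) follow simple arithmetic ; the short python sketch below illustrates how such numbers are typically computed . the spike levels , the calibration signals and the 3.3 s / slope and 10 s / slope conventions for lod and loq are illustrative assumptions ( common ich - style formulas ) , not the exact procedure reported in this study .

import numpy as np

def percent_recovery(measured, native, spiked):
    """Per cent recovery of a spiked sample: 100 * (measured - native) / amount spiked."""
    return 100.0 * (measured - native) / spiked

def lod_loq_from_calibration(conc, signal):
    """Estimate LOD and LOQ from a linear calibration curve.

    Uses the common convention LOD = 3.3 * s / slope and LOQ = 10 * s / slope,
    where s is the standard deviation of the regression residuals.
    """
    slope, intercept = np.polyfit(conc, signal, 1)
    residuals = signal - (slope * conc + intercept)
    s = residuals.std(ddof=2)            # residual standard deviation (n - 2 dof)
    return 3.3 * s / slope, 10.0 * s / slope

# illustrative numbers only (not taken from the paper)
print(percent_recovery(measured=10.2, native=0.1, spiked=10.0))   # ~101 per cent

conc = np.array([0.0, 1.0, 2.0, 5.0, 10.0])                  # standards, e.g. microgram per litre
signal = np.array([20.0, 1030.0, 2010.0, 5060.0, 10100.0])   # detector counts
print(lod_loq_from_calibration(conc, signal))

in practice such calculations would be repeated element by element for each of the eleven metals and checked against the acceptance criteria chosen for the method .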
Financial support and sponsorship: Conflicts of interest:
PMC4630363
neurilemmoma is defined as " a neoplasm that arises from schwann cells of the cranial , peripheral , and autonomic nerves " . clinically , these tumors may present as cranial neuropathy , abdominal or soft tissue mass , intracranial lesion , or spinal cord compression . the terminology regarding neurilemmoma can be confusing , as the terms neurinoma , neurocytoma , peripheral glioma , perineurial fibroblastoma and schwannoma have all been used . today , they are the most common type of peripheral nerve sheath tumor ( pnst ) , but they are rarely found in the sensory branch of the deep peroneal nerve9121617 ) . we report an interesting case of neurilemmoma of the deep peroneal nerve sensory branch that triggered sensory change with a compression test on the lower leg and induced changes on pre - and post - operative infrared ( ir ) thermographic images . ir thermography was taken at 23°c . this is the first report to document a case of thermal change arising from a neurilemmoma with a compression test . a 52-year - old woman had complained of pain in the lower back and right leg for 9 months . a local clinician made a diagnosis of herniated lumbar disc ( hld ) and performed a pain block on the patient 's spine . the symptoms persisted , however , and a mass sized approximately 2 cm by 2 cm was palpated on the lateral side of the leg , between the right tibia and fibula , about 3.5 cm proximal to the lateral malleolus . she was subsequently referred to our hospital . upon visiting our hospital , she initially presented with sensory change on the dorsal side of the right foot , particularly the big toe . percussion of the mass caused severe pain at the fibular area , the dorsal side of the right foot , and the big toe . no neurologic deficits were found except for sensory change with the compression test and mild numbness at the fibula and the dorsal area of the foot . ultrasonography of the lower extremity discovered an oval - shaped mass sized 1.42 cm by 0.77 cm by 1.12 cm , considered to be a probably benign pnst arising from the right deep peroneal nerve sensory branch ( fig . ) . after 1 month , the low back pain was relieved by medication , but the right leg pain remained , and an excision of the mass was planned . a contrast - enhanced computed tomography ( ct ) scan of both lower extremities was taken , which revealed a soft tissue mass within the sensory branch of the right deep peroneal nerve ( fig . ) . when the palpable mass was compressed , the patient complained of sensory change and severe pain over the fibula ( upward from the mass to below the knee ) and the dorsal area of the foot , especially the fibular area and big toe . at that time , a thermal change was found on ir thermography : 31.83°c at the fibular area and 31.45°c at the dorsal area ; specifically , the temperature of the fibular area was elevated by 0.75°c and that of the dorsal area was decreased by 0.15°c ( fig . ) . a vertical skin incision was made under spinal anesthesia , and sharp and blunt dissection was used to expose the mass . gross identification demonstrated a mass sized 1.4 cm by 1.2 cm and shaped like a pigeon egg ( fig . 4 ) . the tumor capsule was incised parallel to the running direction of the nerve , and the tumor was totally enucleated , with special attention paid to the deep peroneal nerve sensory branch in order to avoid damaging the nerve . the patient complained of mild sensory change on the dorsum of the foot , which was relieved partially over 1 month and completely over a 2-month period . when the excision site was compressed , the temperature was 31.49°c at the fibular area and 31.32°c at the dorsal area ( table 1 ) .
there was no severe sensory change except mild operative wound pain , and no thermal elevation like that of the pre - operative status . the temperatures at the dorsal foot area did not show any significant difference between the pre - and post - operative status . neurilemmoma , also known as schwannoma , is a common benign tumor of the peripheral nerves1119 ) . neurilemmomas usually arise in the intracranial cavity but may be found at other sites of the body . odom et al.14 ) reviewed the previous literature and reported that schwannomas of the leg comprise about 7.09% of all cases . the initial impression of our case was herniated lumbar disc ( hld ) , which was ultimately found to be incorrect . there has been a previous case where a patient with schwannoma of the peroneal nerve presented with sciatica , which was initially misdiagnosed as hld13 ) . this patient 's perceived sciatic pain or l5 dermatomal pain could have led to the misdiagnosis . in our case , the patient underwent several sessions of pain block in a local hospital , and there was no clear sign of improvement of the low back pain . causes of nerve irritation symptoms include hld , ischialgia , piriformis muscle syndrome , polyneuropathy , peroneal nerve trauma , pressure upon the peroneal nerve due to immobilization devices , fracture or an expansive process in the region of the tibial head , lipoma , ganglioma , synovial cysts from the popliteal region , anatomic variability , and others14 ) , and the differential diagnosis should be made after considering the innervation of the deep peroneal nerve . the diagnostic modalities available for the differential diagnosis include ct scan , ultrasonography , magnetic resonance imaging , and electromyography , among others31420 ) . simple radiographs are not of much value when a patient presents with neurologic symptoms . ultrasonography has been considered as a useful screening tool3 ) , but it is difficult to perform such costly examinations from the onset . infrared ( ir ) thermography measures the change in temperature radiating from the body resulting from alteration of subcutaneous capillary blood flow . thus , for a precise examination , it is important to maintain a constant temperature and environment in the thermography laboratory . thermography was first used by lawson in 1956 for the diagnosis of breast cancer8 ) and is now being used as a novel method for objectively quantifying the subjective sense of pain2 ) . in order to minimize errors , our thermography laboratory maintains the temperature at 23°c , and we use the difference between the temperatures before and after compression of the lesion . ir thermography demonstrated thermographic changes when the neurilemmoma induced sensory change with the compression test on the fibular area , dorsal side of the foot , and the big toe . the area of thermographic change was found to be related to the path of the deep peroneal nerve sensory branch . in hld accompanied by leg pain , hyperthermic regions resulting from local vasodilation arise on the posterior lumbar skin , correlating with the anatomic site of the compressed nerve , due to antidromic stimulation . the information is transmitted to the recurrent meningeal nerve , or sinuvertebral nerve , located at the spinal cord , and consequently , the autonomic output caused by the reflex arc leads to local vasoconstriction and hypothermia28 ) . in our case , however , there was greater blood flow at the pain site , which further increased upon inducing sensory change with the compression test , resulting in hyperthermia ( fig . ) .
while the degree of pain intensified with the compression test , vasodilation and increased skin temperature were observed , rather than the hypothermia and vasoconstriction caused by sympathetic nerves as seen in hld . this may be explained by autonomic dysfunction following nerve injury ( in this case , tumor growth on the peroneal nerve sensory branch ) , including changes in sympathetic tone and norepinephrine synthesis1 ) . the sympathetic nervous system usually induces vasoconstriction by secretion of adrenaline , but sometimes the cholinergic fibers within the sympathetic nerves release acetylcholine , which may cause vasodilation and diaphoresis1015 ) . the former may be more significant in hld , whereas in our case , the latter may have been the stronger factor . vasodilation resulting from stimulation of the dorsal root , which is thought to be a separate entity from the sympathetic nervous system , has also been reported . this arises from the spinal gray matter , and the preganglionic fibers pass through the dorsal root and dorsal root ganglia , changing to postganglionic fibers and ultimately affecting peripheral ganglia and skin temperature21 ) . furthermore , parasympathetic nerves are also involved in the regulation of skin temperature , which may be a clue explaining the rise in temperature and cutaneous blood flow at the fibular area following sensory change with the compression test . interestingly , the temperatures at the dorsal foot area did not show any significant changes and were higher than those at the fibular area . the study performed by zhang et al.21 ) presents thermal data of the upper body measured indoors at 23°c ; in that study , the mean temperatures of the upper body sectors were distributed from 29°c to 32°c . in comparison , the lower extremity temperatures in our case were maintained at 31.08°c to 31.83°c regardless of compression . in other words , the ir thermography data of our case follow the temperature distribution pattern obtained at 23°c indoors . the magnitude of the temperature difference between the left and right sides of the same anatomic location was 0.1°c , 0.2°c , 0.3°c , or 0.4°c regardless of indoor temperature ( 20°c or 23°c ) , in both the 1995 and 1999 studies21 ) . the elevated temperature of the foot dorsum compared to the fibula before the surgery ( table 1 ) may be attributed to a tumor effect . the situation in which the temperature of the dorsum was 0.58°c higher , instead of within 0.3°c , was determined to be abnormal6 ) . this is also well demonstrated by uematsu , who correlated carpal tunnel syndrome with electromyography ( emg ) findings18 ) . the degree of median nerve lesion was classified as mild , moderate , or severe according to the emg findings ; the mild and moderate compression groups demonstrated thermal elevation , while in the severe compression group both thermal elevation and depression were observed . in other words , palmar temperature rises as the severity of median nerve compression increases and then drops as thenar muscle atrophy develops as the result of chronic severe carpal tunnel syndrome . since peripheral nerves include sympathetic fibers involved in vasoconstriction of the corresponding anatomic area , the tumor located at the sensory branch of the deep peroneal nerve may have suppressed vasoconstriction of the surrounding area , resulting in vasodilation and the consequent hyperthermia of the right dorsum and toe compared to the left side .
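as a purely illustrative aid to the side - to - side comparison discussed above , the short python sketch below flags a left - right temperature asymmetry that exceeds a chosen reference value ; the 0.3°c cut - off and the example readings are assumptions based on the values quoted in this discussion , not a validated diagnostic rule .

# Toy side-to-side comparison of IR thermography readings.
# Threshold and readings are illustrative; they mirror the ranges quoted in the
# discussion (normal left-right differences of roughly 0.1-0.4 degrees C).

NORMAL_ASYMMETRY_C = 0.3  # assumed upper bound of normal side-to-side difference

def asymmetry_flag(right_c: float, left_c: float, threshold_c: float = NORMAL_ASYMMETRY_C) -> str:
    """Return a simple label for the left-right temperature difference."""
    delta = right_c - left_c
    status = "possibly abnormal" if abs(delta) > threshold_c else "within normal variation"
    return f"delta = {delta:+.2f} C -> {status}"

# hypothetical readings: dorsum of the foot, right (affected) vs left side
print(asymmetry_flag(right_c=31.60, left_c=31.02))   # difference of +0.58 C
print(asymmetry_flag(right_c=31.10, left_c=31.02))   # difference of +0.08 C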
there are numerous other factors influencing skin temperature , including emotional stress and depression , which can affect autonomic function4 ) , as well as the circadian rhythm , which results in diurnal variation of body temperature7 ) . ir thermography shows temperature change according to autonomic nervous system tone or changes in circulation caused by various etiologies ; therefore , although it shows temperature changes , it does not clarify the nature of the causative lesion . considering these variables , future studies and analyses controlling for these factors may be needed . when the mass ( pnst ) was compressed , there was a thermal elevation on ir thermography proximal to the pnst , together with subjective pain . mass excision relieved the patient 's symptoms , and there was no definite thermal change on post - operative ir thermography . however , tumors of the peroneal nerve can produce symptoms similar to those of hld , or symptoms caused by nerve irritation or compression . although ir thermographic findings are not specific , as discussed previously , there were clear differences on ir thermography , and it was useful and supportive in our case . therefore , ir thermography may be useful in evaluating symptoms caused by nerve irritation or compression , although it may be limited by the various possible etiologies . further study will be needed to clarify which range of temperature change can be interpreted as a significant result and to obtain specific results while controlling for the numerous other factors influencing skin temperature .
we report a case of neurilemmoma of the deep peroneal nerve sensory branch that triggered sensory change with a compression test on the lower extremity . compression of the tumor evoked thermal changes on pre - operative infrared ( ir ) thermographic images , which resolved on post - operative images after resection of the tumor . a 52-year - old female presented with low back pain , sciatica , and sensory change on the dorsal side of the right foot and big toe that had lasted for 9 months . she also presented with a right tibial mass sized 1.2 cm by 1.4 cm . ultrasonographic imaging revealed a peripheral nerve sheath tumor arising from the peroneal nerve . the ir thermographic image showed hyperthermia when the neurilemmoma induced sensory change with the compression test on the fibular area , dorsum of the foot , and big toe . after surgery , the symptoms and thermographic changes were relieved and disappeared . the clinical , surgical , radiographic , and thermographic perspectives regarding this case are discussed .
INTRODUCTION CASE REPORT DISCUSSION CONCLUSION
PMC4919765
crystal nucleation in liquids has countless practical consequences in science and technology , and it also affects our everyday experience . one obvious example is the formation of ice , which influences global phenomena such as climate change , as well as processes happening at the nanoscale , such as intracellular freezing . on the other hand , controlling nucleation of molecular crystals from solutions is of great importance to pharmaceuticals , particularly in the context of drug design and production , as the early stages of crystallization impact the crystal polymorph obtained . even the multibillion - dollar oil industry is affected by the nucleation of hydrocarbon clathrates , which can form inside pipelines , endangering extraction . finally , crystal nucleation is involved in many processes spontaneously occurring in living beings , from the growth of the beautiful nautilus shells to the dreadful formation in our own brains of amyloid fibrils , which are thought to be responsible for many neurodegenerative disorders such as alzheimer s disease . each of the above scenarios starts from a liquid below its melting temperature . this supercooled liquid(12 ) is doomed , according to thermodynamics , to face a first - order phase transition , leading to a crystal . before this can happen , however , a sufficiently large cluster of crystalline atoms ( or molecules or particles ) must form within the liquid , such that the free energy cost of creating an interface between the liquid and the crystalline phase will be overcome by the free energy gain of having a certain volume of crystal . this event stands at the heart of crystal nucleation , and how this process has been , is , and will be modeled by means of computer simulations is the subject of this review . the past few decades have witnessed an impressive body of experimental work devoted to crystal nucleation . for instance , thanks to novel techniques such as transmission electron microscopy at very low temperatures ( cryo - tem ) , we are now able to peek in real time into the early stages of crystallization . a substantial effort has also been made to understand which materials , in the form of impurities within the liquid phase , can either promote or inhibit nucleation events , a common scenario known as heterogeneous nucleation . however , our understanding of crystal nucleation is far from being complete . this is because the molecular ( or atomistic ) details of the process are largely unknown because of the very small length scale involved ( nanometers ) , which is exceptionally challenging to probe in real time even with state - of - the - art measurements . hence , there is a need for computer simulations , and particularly molecular dynamics ( md ) simulations , where the temporal evolution of the liquid into the crystal is more or less faithfully reproduced . unfortunately , crystal nucleation is a rare event that can occur on time scales of seconds , far beyond the reach of any conventional md framework . in addition , a number of approximations within the computational models , algorithms , and theoretical framework used have been severely questioned for several decades . although the rush for computational methods able to overcome this time - scale problem is now more competitive than ever , we are almost always forced to base our conclusions on the ancient grounds of classical nucleation theory ( cnt ) , a powerful theoretical tool that nonetheless dates back 90 years to volmer and weber . 
in fact , these are exciting times for the crystal nucleation community , as demonstrated by the many reviews covering several aspects of this diverse field . this particular review is focused almost exclusively on md simulations of crystal nucleation of supercooled liquids and supersaturated solutions . we take into account several systems , from colloidal liquids to natural gas hydrates , highlighting long - standing issues as well as recent advances . although we review a substantial fraction of the theoretical efforts in the field , mainly from the past decade , our goal is not to discuss in detail every contribution . instead , we try to pinpoint the most pressing issues that still prevent us from furthering our understanding of nucleation . we introduce the theoretical framework of cnt ( section 1.1 ) , the state - of - the - art experimental techniques ( section 1.2 ) , and the md - based simulation methods ( section 1.3 ) that in the past few decades have provided insight into nucleation . in section 2 , we put such computational approaches into context , describing both achievements and open questions concerning the molecular details of nucleation for different types of systems , namely , colloids ( section 2.1 ) , lennard - jones ( lj ) liquids ( section 2.2 ) , atomic liquids ( section 2.3 ) , water ( section 2.4 ) , nucleation from solution ( section 2.5 ) , and natural gas hydrates ( section 2.6 ) . in the third and last part of the article ( section 3 ) , we highlight future perspectives and open challenges in the field . almost every computer simulation of crystal nucleation in liquids invokes some elements of classical nucleation theory ( cnt ) . this theory has been discussed in great detail elsewhere , and we describe it here for the sake of completeness and also to introduce various terms used throughout the review . cnt was formulated 90 years ago through the contributions of volmer and weber , farkas , becker and dring , and zeldovich , on the basis of the pioneering ideas of none other than gibbs himself . cnt was created to describe the condensation of supersaturated vapors into the liquid phase , but most of the concepts can also be applied to the crystallization of supercooled liquids and supersaturated solutions . according to cnt , clusters of crystalline atoms ( or particles or molecules ) of any size are treated as macroscopic objects , that is , homogeneous chunks of crystalline phase separated from the surrounding liquid by a vanishingly thin interface . this apparently trivial assumption is known as the capillarity approximation , which encompasses most of the strengths and weaknesses of the theory . according to the capillarity approximation , the interplay between the interfacial free energy , , and the difference in free energy between the liquid and the crystal , , fully describes the thermodynamics of crystal nucleation . in three dimensions , the free energy of formation , , for a spherical crystalline nucleus of radius r can thus be written as the sum of a surface term and a volume term1this function , sketched in figure 1 , displays a maximum corresponding to the so - called critical nucleus size n*2where is the number density of the crystalline phase . the critical nucleus size represents the number of atoms that must be included in the crystalline cluster for the free energy difference , , to match the free energy cost due to the formation of the solid liquid interface . 
clusters of crystalline atoms occur within the supercooled liquid by spontaneous , infrequent fluctuations , which eventually lead the system to overcome the free energy barrier for nucleation : $\Delta G^* = \frac{16\pi}{3}\,\frac{\gamma^3}{\rho_x^2 |\Delta\mu|^2}$ ( 3 ) , triggering the actual crystal growth ( see figure 1 ) . ( caption of figure 1 ) sketch of the free energy difference , ΔG_n , as a function of the crystalline nucleus size n . a free energy barrier for nucleation , ΔG* , must be overcome to proceed from the ( metastable ) supercooled liquid state to the thermodynamically stable crystalline phase through homogeneous nucleation ( purple ) . heterogeneous nucleation ( green ) can be characterized by a lower free energy barrier , ΔG*_het , and a smaller critical nucleus size , n*_het , whereas in the case of spinodal decomposition ( orange ) , the supercooled liquid is unstable with respect to the crystalline phase , and the transformation to the crystal proceeds in a barrierless fashion . the three snapshots depict a crystalline cluster nucleating within the supercooled liquid phase ( homogeneous nucleation ) or as a result of the presence of a foreign impurity ( heterogeneous nucleation ) , as well as the simultaneous occurrence of multiple crystalline clusters in the unstable liquid . this scenario is often labeled as spinodal decomposition , although the existence of a genuine spinodal decomposition from the supercooled liquid to the crystalline phase has been debated ( see text ) . the kinetics of crystal nucleation is typically addressed by assuming that no correlation exists between successive events increasing or decreasing the number of constituents of the crystalline nucleus . in other words , the time evolution of the nucleus size is presumed to be a markov process , in which atoms in the liquid either order themselves one by one in a crystalline fashion or dissolve one by one into the liquid phase . in addition , we state that every crystalline nucleus lucky enough to overcome the critical size n* quickly grows to macroscopic dimensions on a time scale much smaller than the long time required for that fortunate fluctuation to come about . if these conditions are met , the nucleation rate , that is , the probability per unit time per unit volume of forming a critical nucleus , does not depend on time , leading to the following formulation of the so - called steady - state nucleation rate : $J = \kappa \exp\!\left( -\frac{\Delta G^*}{k_B T} \right)$ ( 4 ) , where k_B is the boltzmann constant and κ is a prefactor that we discuss later . the steady - state nucleation rate is the central quantity in the description of crystallization kinetics , as much as the notion of critical nucleus size captures most of the thermodynamics of nucleation . all quantities specified up to now depend on pressure and most notably temperature . in most cases , the interfacial free energy , γ , is assumed to be linearly dependent on temperature , whereas the free energy difference between the liquid and solid phases , Δμ , is proportional to the supercooling , ΔT ( or the supersaturation ) . several approximations exist to treat the temperature dependence of γ ( 36 ) and Δμ , which can vary substantially for different supercooled liquids . in any case , it follows from eq 3 that the free energy barrier for nucleation , ΔG* , decreases with supercooling . in other words , the farther one is from the melting temperature , the larger the thermodynamic driving force for nucleation is . interestingly , in the case of supercooled liquids , kinetics goes the other way , as the dynamics of the liquid slow down with supercooling , thus hindering the occurrence of nucleation events .
in fact , although a conclusive expression for the prefactor κ is still lacking , the latter is usually written within cnt as $\kappa = n_s \, Z \, j$ ( 5 ) , where n_s is the number of possible nucleation sites per unit volume , Z is the zeldovich factor ( accounting for the fact that several postcritical clusters might still shrink without growing into the crystalline phase ) , and j is a kinetic prefactor . the latter should represent the attachment rate , that is , the frequency with which the particles in the liquid phase reach the cluster and rearrange themselves in a crystalline fashion . however , in a dense supercooled liquid , j also quantifies the ease with which the system explores configurational space , effectively regulating the amplitude of the fluctuations possibly leading to the formation of a crystalline nucleus . in short , we can safely say that j involves the atomic or molecular mobility of the liquid phase , more often than not quantified in terms of the self - diffusion coefficient , which obviously decreases with supercooling . thus , for a supercooled liquid , the competing trends of ΔG* and j lead , in the case of diffusion - limited nucleation , to a maximum in the nucleation rate , as depicted in figure 2 . the same arguments apply when dealing with processes such as the solidification of metallic alloys . in the case of nucleation from solutions , both Δμ and j depend on the supersaturation ; however , the dependence of the kinetic prefactor on supersaturation is much weaker than the temperature dependence of j characteristic of supercooled liquids . as a result , there is usually no maximum in the nucleation rate as a function of supersaturation for nucleation from solutions . ( caption of figure 2 ) illustration of how certain quantities from cnt vary as a function of supercooling , ΔT , for supercooled liquids . the free energy difference between the liquid and the solid phase , Δμ , the interfacial free energy , γ , and the kinetic prefactor , j , are reported as functions of ΔT in a generic case of diffusion - limited nucleation , characterized by a maximum in the steady - state nucleation rate . Δμ is zero at the melting temperature , and j is vanishingly small at the glass transition temperature . although κ is supposed to play a minor role compared to the exponential term in eq 4 , the kinetic prefactor has been repeatedly blamed for the quantitative disagreement between experimental measurements and computed crystal nucleation rates . atomistic simulations could , in principle , help to clarify the temperature dependence as well as the microscopic origin of κ and also of the thermodynamic ingredients involved in the formulation of cnt . however , quantities such as γ are not only infamously difficult to converge within decent levels of accuracy but can even be ill - defined in many situations . for instance , it remains to be seen whether γ , which , in principle , refers to a planar interface under equilibrium conditions , can be safely defined when dealing with small crystalline clusters of irregular shapes . in fact , the early stages of the nucleation process often involve crystalline nuclei whose size and morphology fluctuate on a time scale shorter than the structural relaxation time of the surrounding liquid . moreover , the dimensions of such nuclei can be of the same order as the diffuse interface between the liquid and the solid phases , thus rendering the notion of a well - defined γ value quite dangerous . as an example , joswiak et al . recently showed that , for liquid water droplets , γ can strongly depend on the curvature of the droplet .
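to make the competing trends of ΔG* and j behind figure 2 concrete , the toy python sketch below evaluates eq 4 with an assumed linear growth of |Δμ| with supercooling and a vogel - fulcher - tammann - like slowdown of the mobility ; the functional forms and every parameter are illustrative assumptions chosen only to reproduce the qualitative maximum in the steady - state rate , not to model any real liquid .

import numpy as np

# Toy temperature dependences in reduced units (illustrative only):
#   |dmu| grows linearly with supercooling, gamma is taken as constant,
#   and the kinetic prefactor follows a VFT-like slowdown upon cooling.
T_M, T_0 = 1.0, 0.4            # melting temperature and VFT divergence temperature
GAMMA, RHO_X = 0.3, 1.0        # constant interfacial free energy and crystal density

def barrier(T):
    dmu = 0.5 * (T_M - T)                                        # |dmu| ~ supercooling
    return 16.0 * np.pi * GAMMA**3 / (3.0 * RHO_X**2 * dmu**2)   # eq 3

def kinetic_prefactor(T):
    return np.exp(-1.0 / (T - T_0))                              # VFT-like mobility

def steady_state_rate(T):
    return kinetic_prefactor(T) * np.exp(-barrier(T) / T)        # eq 4 with k_B = 1

T = np.linspace(0.45, 0.99, 500)
J = steady_state_rate(T)
T_max = T[np.argmax(J)]
print(f"rate is maximal at T = {T_max:.2f}, i.e. a supercooling dT = {T_M - T_max:.2f}")

close to the melting temperature the barrier term suppresses the rate , while close to the glass transition the mobility does , which is why a maximum emerges in between .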
the mismatch between the macroscopic interfacial free energy and its curvature - dependent value can spectacularly affect water - droplet nucleation , as reported by atomistic simulations of droplets characterized by radii on the order of 0.5 - 1.5 nm . some other quantities , such as the size of the critical cluster , depend in many cases rather strongly on the degree of supercooling . this is the case , for example , for the critical nucleus size n* , which can easily span 2 orders of magnitude in just 10 °c of supercooling . given the old age of cnt , it is no surprise that substantial efforts have been devoted to extend and/or improve its original theoretical framework . the most relevant modifications possibly concern the issue of two - step nucleation . many excellent works have reviewed this subject extensively ( see , e.g. , refs ( 18 , 24 , 52 , and 53 ) ) , so that we provide only the essential concepts here . in the original formulation of cnt , the system has to overcome a single free energy barrier , corresponding to a crystalline nucleus of a certain critical size , as depicted in figure 3 . when dealing with crystal nucleation from the melt , it is rather common to consider the number of crystalline particles within the largest connected cluster , n , as the natural reaction coordinate describing the whole nucleation process . in many cases , the melt is dense enough that local density fluctuations are indeed not particularly relevant and the slow degree of freedom is in fact the crystalline ordering of the particles within the liquid network . however , one can easily imagine that , in the case of crystal nucleation of molecules in solutions , for example , the situation can be quite different . specifically , in a realistically supersaturated solution , a consistent fluctuation of the solute density ( concentration ) could be required just to bring a number n_t of solute molecules close enough to form a connected cluster . assuming that the molecules involved in such a density fluctuation will also order themselves in a crystalline fashion on exactly the same time scale is rather counterintuitive . ( caption of figure 3 ) schematic comparison of one - step versus two - step nucleation for a generic supersaturated solution . ( a ) sketch of the free energy difference ΔG_{n_t , n} as a function of the number of solute molecules in the largest connected cluster , ordered in a crystalline fashion or not ( n_t ) , and of the number of crystalline molecules within the largest connected cluster ( n ) .
the one - step mechanism predicted by cnt ( purple ) is characterized by a single free energy barrier for nucleation , ΔG*_{n_t , n , one - step} . in contrast , two - step nucleation requires a free energy barrier , ΔG*_{n_t , two - step} , to be overcome through a local density fluctuation of the solution , leading to a dense , but not crystalline - like , precursor . the latter can be unstable ( green ) or stable ( orange ) with respect to the liquid phase , being characterized by a higher ( green ) or lower ( orange ) free energy basin . once this dense precursor has been obtained , the second step consists of climbing a second free energy barrier , ΔG*_{n , two - step} , corresponding to the ordering of the solute molecules within the precursor from a disordered state to the crystalline phase . ( b ) one - step ( purple ) and two - step ( green and orange ) nucleation mechanisms visualized in the density ( n_t ) - ordering ( n ) plane . the one - step mechanism proceeds along the diagonal , as both n_t and n increase at the same time , in such a way that a single free energy barrier has to be overcome . in this scenario , the supersaturated solution transforms continuously into the crystalline phase . on the other hand , within a two - step nucleation scenario , the system has to experience a favorable density fluctuation along n_t first , forming a disordered precursor that , in a second step , orders itself in a crystalline fashion , moving along the n coordinate and ultimately leading to the crystal . in fact , the formation of crystals from molecules in solution often occurs according to a two - step nucleation mechanism that has no place in the original formulation of cnt . in the prototypical scenario depicted in figure 3 , a first free energy barrier , ΔG*_{n_t , two - step} , has to be overcome by means of a density fluctuation of the solute , such that a cluster of connected molecules of size n_t* is formed . this object does not yet have any sort of crystalline order , and depending on the system under consideration , it can be either unstable or stable with respect to the supersaturated solution ( see figure 3 ) . subsequently , the system has to climb a second free energy barrier , ΔG*_{n , two - step} , to order the molecules within the dense cluster in a crystalline - like fashion . a variety of different nucleation scenarios have been loosely labeled as two - step , from crystal nucleation in colloids ( see section 2.1 ) or lennard - jones liquids ( see section 2.2 ) to the formation of crystals of urea or nacl ( see section 2.5 ) , not to mention biomineralization ( see , e.g. , refs ( 18 and 53 ) ) and protein crystallization ( see , e.g. , refs ( 54 and 55 ) ) . in all of these cases , cnt as it is formulated is simply not capable of dealing with two - step nucleation . this is why , in the past few decades , a number of extensions and/or modifications of cnt have been proposed and indeed successfully applied to account for the existence of a two - step mechanism . here , we mention the phenomenological theory of pan et al . , who wrote an expression for the nucleation rate assuming a free energy profile similar to the one sketched in figure 3 , where dense metastable states are involved as intermediates on the path toward the final crystalline structure . the emergence of so - called prenucleation clusters ( pncs ) , namely , stable states within supersaturated solutions , which are known to play a very important role in the crystallization of biominerals , for example , was also recently fit into the framework of cnt by hu et al . they proposed a modified expression for the excess free energy of the nucleus that takes into account the shape , size and free energy of the pncs as well as the possibility for the pncs to be either metastable or stable with respect to the solution . a comprehensive review of the subject is offered by the work of gebauer et al . it is worth noticing that these extensions of cnt are mostly quite recent , as they were triggered by overwhelming experimental evidence for two - step nucleation mechanisms . cnt is also the tool of the trade for heterogeneous crystal nucleation , that is , nucleation that occurs on account of the presence of a foreign phase ( see figure 1 ) . in fact , nucleation in liquids occurs heterogeneously more often than not , as in some cases , the presence of foreign substances in contact with the liquid can significantly lower the free energy barrier .
A typical example is given by the formation of ice: as we shall see in Sections 2.4.1 and 2.4.2, it is surprisingly difficult to freeze pure water, which invariably takes advantage of a diverse portfolio of impurities, from clay minerals to bacterial fragments, to facilitate the formation of ice nuclei. Heterogeneous nucleation is customarily formulated within the CNT framework in terms of geometric arguments. Specifically,

ΔG*het = f(θ) · ΔG*hom   (6)

where f(θ) ≤ 1 is the shape factor, a quantity that accounts for the fact that three different interfacial free energies must be balanced: the liquid-crystal, liquid-substrate, and crystal-substrate terms. For instance, considering a supercooled liquid nucleating on top of an ideal planar surface offered by the foreign phase, we obtain the so-called Young relation, γ_sl = γ_sc + γ_cl cos θ (where the subscripts denote the substrate-liquid, substrate-crystal, and crystal-liquid interfaces), with θ the contact angle, namely, a measure of the extent to which the crystalline nucleus wets the foreign surface. Thus, the contact angle determines whether and how much it could be easier for a critical nucleus to form in a heterogeneous fashion, as for 0 < θ < π the volume-to-surface energy ratio is larger for the spherical cap nucleating on the foreign surface than for the sphere nucleating in the liquid (a short numerical sketch of this construction follows at the end of this passage). This simple formulation is clearly only a rough approximation of what happens in reality. First of all, the contact angle is basically a macroscopic quantity, whose microscopic equivalent is in most cases ill-defined on the typical length scales involved in the heterogeneous nucleation process. In addition, in most cases the nucleus will not be shaped like a spherical cap, and to make things more complicated, many different nucleation sites with different morphologies typically exist on the same impurity. Finally, the kinetic prefactor becomes even more obscure in heterogeneous nucleation, as it is plausible that the foreign phase will affect the dynamical properties of the supercooled liquid. Moving toward strong supercooling, several things can happen to the supercooled liquid phase. Whether one can avoid the glass transition largely depends on the specific liquid under consideration and on the cooling rate (see, e.g., ref (58)). Assuming that the system can be cooled sufficiently slowly, hence avoiding both the glass transition and crystal nucleation, one can, in principle, enter a supercooled regime in which the liquid becomes unstable with respect to the crystalline phase. This region of the phase diagram is known as the spinodal region, where the tiniest perturbation, for example of the local density or of the degree of ordering, leads the system toward the crystalline phase without paying anything in terms of free energy (see Figure 1). In fact, below a certain critical temperature the free energy barrier for nucleation is zero, and the liquid transforms spontaneously into the crystal on very short time scales. The same picture holds for molecules in solution, as nicely discussed by Gebauer et al., and it cannot, by definition, be described by conventional CNT, according to which a small but finite free energy barrier persists even at the strongest supercoolings. Although spinodal regimes have been observed in a variety of scenarios, the existence of a proper spinodal decomposition from the supercooled liquid to the crystalline phase has been debated (see, e.g., ref (61)).
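As a concrete illustration of the heterogeneous-nucleation geometry introduced above, the following minimal Python sketch evaluates the textbook spherical-cap shape factor, f(θ) = (2 + cos θ)(1 − cos θ)²/4, and the corresponding reduction of the homogeneous barrier on an ideal flat substrate. The function names and the barrier value are illustrative choices and are not taken from any specific study.

```python
import numpy as np

def shape_factor(theta_deg):
    """Volume factor f(theta) for a spherical-cap nucleus on an ideal flat substrate."""
    t = np.radians(theta_deg)
    return (2.0 + np.cos(t)) * (1.0 - np.cos(t))**2 / 4.0

def heterogeneous_barrier(dg_hom, theta_deg):
    """CNT estimate of the heterogeneous barrier from the homogeneous one."""
    return shape_factor(theta_deg) * dg_hom

if __name__ == "__main__":
    dg_hom = 50.0  # homogeneous barrier in units of kB*T (illustrative value)
    for theta in (30, 60, 90, 120, 180):
        print(f"theta = {theta:3d} deg   f = {shape_factor(theta):.3f}   "
              f"dG*_het = {heterogeneous_barrier(dg_hom, theta):5.1f} kBT")
```

Note that f(180°) = 1 recovers the homogeneous limit (complete non-wetting), whereas f → 0 as θ → 0, which is the familiar statement that a perfectly wetted substrate removes the barrier altogether.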
Enhanced-sampling MD simulations, which we discuss in Section 2.2, have suggested that barrierless crystal nucleation is possible at very strong supercooling, whereas other works claim that this is not the case (see, e.g., ref (63)). Here, we simply note that at strong supercooling, not necessarily within the presumed spinodal regime, a number of assumptions on which CNT relies become, if not erroneous, ill-defined. The list is long, and in fact a number of nucleation theories able to at least take into account the emergence of a spinodal decomposition exist, although they have mostly been formulated for condensation problems. In any case, the capillarity approximation is most likely to fail at strong supercoolings, as the size of the critical nucleus becomes exceedingly small, down to losing its meaning in the event of a proper spinodal decomposition. Moreover, we shall see, for instance in Section 2.2, that the shape of the crystalline clusters is anything but spherical at strong supercooling and that, at the same time, the kinetic prefactor assumes a role of great importance. In fact, nucleation at strong supercooling might very well be dominated by the kinetic prefactor, as the mobility of the supercooled liquid is what really matters when the free energy barrier for nucleation approaches vanishingly small values. Strong supercooling is important because this is the regime in which most computational studies have been performed. Large values of ΔT imply high nucleation rates and smaller critical nuclei, although as one moves away from the melting temperature, most of the assumptions of CNT are progressively invalidated. At this point, given the substantial approximations of CNT and especially its old age, the reader might be waiting for us to introduce the much more elegant, accurate, and comprehensive theories that experiments and simulations surely embrace today. Sadly, this is not the case. Many of them, such as dynamical nucleation theory, mean-field kinetic nucleation theory, and coupled flux theory, are mainly limited to condensation problems, and some others have only rarely been applied, for example, to crystallization in glasses, as in the case of diffuse interface theory. Several improvements on CNT have been proposed, targeting specific aspects such as the shape of the crystalline nuclei or the finite size of the nonsharp crystal-liquid interface. Nucleation theories largely unrelated to CNT can also be found, such as classical density functional theory (cDFT) (classical, not to be confused with the celebrated quantum mechanical framework of Hohenberg and Kohn). A fairly complete inventory of nucleation theories, together with an excellent review of nucleation in condensed matter, can be found elsewhere. Here, we do not discuss the details of any of these approaches, as indeed none of them has been consistently used to model crystal nucleation in liquids. This is because CNT, despite its many shortcomings, is a simple yet powerful theory that is able to capture at least qualitatively the thermodynamics and kinetics of nucleation for very different systems, from liquid metals to organic crystals. It has been extended to include heterogeneous nucleation, and it is fairly easy to modify it to take into consideration multicomponent systems such as binary mixtures as well. Several different experimental approaches have been employed to understand the thermodynamics and kinetics of crystal nucleation in liquids.
Although this review discusses theory and simulations almost exclusively, we present in this section a concise overview of the state-of-the-art experimental techniques to highlight their capabilities as well as their limitations. A schematic synopsis focusing on both spatial and temporal resolution is sketched in Figure 4, and an inventory of notable applications is reported in Table 1. As already stated, nucleation is a dynamical process usually occurring on very small time and length scales (nanoseconds and nanometers, respectively). Thus, obtaining the necessary spatial and temporal resolution is a tough technical challenge. Figure 4 caption: Overview of some of the experimental methods that have been applied to characterize nucleation. Ranges of the spatial and temporal resolutions typical of each approach are reported on the x and y axes, respectively. For instance, colloids offer a playground where simple microscopy can image the particles involved in the nucleation events, which occur on such long time scales (seconds) that a full characterization in time of the process has been achieved. Specifically, confocal microscopy has led to three-dimensional imaging of colloidal systems, unraveling invaluable information about, for example, the critical nucleus size. In a similar fashion, Sleutel et al. achieved molecular resolution of the formation of two-dimensional glucose isomerase crystals by means of atomic force microscopy. This particular investigation featured actual movies showing both crystal growth and the dissolution of precritical clusters, as well as providing information about the influence of the substrate. In addition, cryo-TEM techniques have recently provided two-dimensional snapshots of nucleation events at very low temperatures. In selected cases, where the time scales involved are again on the order of seconds, dynamical details have been obtained, as in the cases of CaCO3, metal phosphate, and magnetite. However, more often than not, crystal nucleation in liquids takes place within time windows too small (nanoseconds) to allow a sequence of snapshots to be taken with high-spatial-resolution instruments. In these cases, microscopic insights cannot be obtained, and much more macroscopic measurements have to be performed. In this context, several experimental approaches aim at examining a large number of independent nucleation events for a whole set of rather small configurations of the system, basically performing an ensemble average. For example, in droplet experiments, nucleation is characterized as a function of time or temperature. Freezing is identified for each nucleation event within the ensemble of available configurations by techniques such as femtosecond X-ray scattering, optical microscopy, and powder X-ray diffraction. From these data, the nucleation rate is often reconstructed by measuring either metastable zone widths or induction times (several examples are listed in, e.g., refs (111-115)), thus providing a solid connection to theoretical frameworks such as CNT (see Section 1.1). An essential technical detail within this class of measurements is that the volume available for each nucleation event has to be as small as possible, to reduce the occurrence of multiple nucleation events within the same configuration.
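To illustrate how droplet-ensemble measurements of the kind just described are converted into a nucleation rate, here is a minimal sketch that inverts the Poisson freezing statistics f(t) = 1 − exp(−J V t) for identical droplets held at a fixed supercooling. The droplet size, frozen fraction, and holding time are hypothetical, and real analyses must additionally account for droplet polydispersity and possible heterogeneous contributions.

```python
import numpy as np

def rate_from_frozen_fraction(frozen_fraction, droplet_volume_m3, elapsed_time_s):
    """
    Invert f(t) = 1 - exp(-J*V*t), i.e. Poisson statistics for identical droplets
    held at constant temperature, to obtain the nucleation rate J in m^-3 s^-1.
    """
    return -np.log(1.0 - frozen_fraction) / (droplet_volume_m3 * elapsed_time_s)

if __name__ == "__main__":
    radius = 5e-6                            # 10-micrometre-diameter droplets (illustrative)
    volume = 4.0 / 3.0 * np.pi * radius**3
    # hypothetical survey: 40% of the droplets frozen after 10 s at fixed supercooling
    J = rate_from_frozen_fraction(0.40, volume, 10.0)
    print(f"estimated J = {J:.3e} m^-3 s^-1")
```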
High-throughput devices such as the lab-on-a-chip can significantly improve the statistics of the nucleation events, thus enhancing the capabilities of these approaches. Another line of action focuses on the study of large, macroscopic systems. Freezing is detected by techniques such as differential scanning calorimetry, Fourier transform infrared spectroscopy (FTIRS), and analytical ultracentrifugation, or by some flavor of chamber experiments. In this case, the frozen fraction of the overall system and/or the nucleation temperatures can be obtained, and in some cases nucleation rates have been extracted (see Table 1). Finally, experimental methods that can detect nucleation and the formation of the crystal (predominantly by means of optical microscopy) but do not provide any microscopic detail have helped to shed light on issues such as the role of the solvent or of impurities. This is usually possible by examining the amount of crystalline phase obtained along with its structure. Even though there is a large number of powerful experimental techniques, with new ones emerging (e.g., ultrafast X-ray methods), it is still incredibly challenging to obtain microscopic-level insight into nucleation from experiments. As we shall see now, MD simulations provide a powerful complement to experiments. When dealing with crystal nucleation in liquids, atomistic simulations should provide a detailed picture of the formation of the critical nucleus. The simplest way to achieve this is by so-called brute-force MD simulations, which involve cooling the system to below the freezing temperature and then following its time evolution until nucleation is observed. Brute-force simulations are the antagonist of enhanced-sampling simulations, where specific computational techniques are used to alter the dynamics of the system so as to observe nucleation on a much shorter time scale. Monte Carlo (MC) techniques, although typically coupled with enhanced sampling, can be used to recover the thermodynamics of nucleation (e.g., the free energy barrier), but the calculation of the nucleation rate requires other methods, such as kinetic Monte Carlo (kMC). The natural choice to simulate nucleation events is instead MD, which directly provides the temporal evolution of the system. Such simulations are typically performed in the isothermal-isobaric ensemble (NPT), where P (usually ambient pressure) and T < Tm are kept constant by means of a barostat and a thermostat, respectively. Such computational tweaking is a double-edged sword. In fact, nucleation and most notably crystal growth are exothermic processes, and within the length scales probed by conventional atomistic simulations (1-10 nm), it is necessary to keep the system at constant temperature. On the other hand, in this way, dynamical and structural effects in both the liquid and the crystalline phases due to the heat released during the nucleation events are basically neglected. Although the actual extent of these effects is not yet clear, forcing the sampling of the canonical ensemble is expected to be especially dangerous when dealing with very small systems affected by substantial finite-size effects. In practice, small coupling constants and clever approaches (e.g., stochastic thermostats) can be employed to limit the effects of the thermostats, but in general care must be taken. The same reasoning applies to P and barostats as well: a density change of the system is usually associated with nucleation, the crystalline phase being denser (or, as in the case of, e.g., water, less dense) than the parent liquid phase.
Three conditions must be fulfilled to extract nucleation rates from brute-force MD simulations: (1) the system must be allowed to evolve in time until spontaneous fluctuations lead to a nucleation event; (2) the system size must be significantly larger than the critical nucleus; and (3) significant statistics of nucleation events must be collected. The most daunting obstacle is probably the first one, because of the so-called time-scale problem. In most cases, nucleation is a rare event, meaning that it usually occurs on a very long time scale; precisely how long depends strongly on ΔT. A rough estimate of the number of simulation steps required to observe a nucleation event within a molecular dynamics run is reported in Figure 5. Under the fairly optimistic assumption that classical MD simulations can cope with up to about 10^6 molecules on a time scale of nano- to microseconds, there is only a very narrow set of conditions for which brute-force classical MD simulations could be used to investigate nucleation, usually only at strong supercooling. Time scales typical of first-principles simulations, also reported in Figure 5 assuming up to about 10^3 molecules, indicate that unbiased ab initio simulations of nucleation events are unfeasible. Figure 5 caption: Nucleation rate as a function of the simulation time needed within an MD simulation to observe a single nucleation event. The blue shaded region highlights the approximate simulation times currently affordable by classical MD simulations; clearly, only very fast nucleation processes can be simulated with brute-force MD. For homogeneous ice nucleation, experimentally measurable rates and rates accessible to brute-force MD are typically observed for ΔT = 30 K and ΔT = 80 K, respectively. In the derivations of the classical and ab initio simulation times, about 10^6 and 10^3 molecules, respectively, were considered, together with the number density of a generic supercooled liquid, ρ = 0.01 molecules Å^-3. The number of atoms (or molecules) in the system defines the time scale accessible to the simulation and, thus, the severity of the time-scale problem. The reason large simulation boxes, significantly larger than the size of the critical nucleus, are needed is that periodic boundary conditions will strongly affect nucleation (and growth) if even the precritical nuclei are allowed to interact with themselves. This issue worsens at mild supercooling, where the critical nucleus size rapidly increases toward dimensions not accessible by MD simulations. Third, it is not sufficient to collect information on just one nucleation event. Nucleation is a stochastic event following a Poisson distribution (at least ideally; see Section 1.1), and so to obtain the nucleation rate, one needs to accumulate decent statistics. Taking these issues into consideration, various approaches for obtaining the nucleation rate have emerged. One approach, known as the Yasuoka-Matsumoto method, involves simulating a very large system, so that different nucleation events can be observed within a single run. In this case, large simulation boxes are needed to collect sufficient statistics and to avoid spurious interactions between different nuclei. Another family of methods involves running many different simulations using much smaller systems, which is usually computationally cheaper. Once a collection of nucleation events has been obtained, several methods for extracting the nucleation rate can be employed.
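To make the time-scale argument of Figure 5 concrete, and to anticipate the simplest route from a collection of brute-force nucleation events to a rate (the mean-lifetime relation J = 1/(⟨t⟩V) discussed next), consider the following sketch. The rates, system size, number density, and nucleation times are all illustrative.

```python
import numpy as np

def mean_waiting_time(J_m3s, n_molecules, number_density_A3):
    """Expected time (s) before one nucleation event in a box of N molecules."""
    volume_m3 = n_molecules / (number_density_A3 * 1e30)   # convert Å^-3 to m^-3
    return 1.0 / (J_m3s * volume_m3)

def rate_from_waiting_times(times_s, volume_m3):
    """Mean-lifetime estimator: J = 1/(<t> V) from a set of observed nucleation times."""
    return 1.0 / (np.mean(times_s) * volume_m3)

if __name__ == "__main__":
    rho = 0.01                      # molecules per Å^3, as in the Figure 5 estimate
    N = 1_000_000                   # a generously large classical MD box
    for J in (1e30, 1e33, 1e36):    # illustrative rates in m^-3 s^-1
        print(f"J = {J:.0e}  ->  <t_wait> = {mean_waiting_time(J, N, rho):.3e} s")
    # going the other way: five hypothetical nucleation times from independent runs
    times = np.array([12e-9, 35e-9, 8e-9, 50e-9, 21e-9])   # seconds
    V = N / (rho * 1e30)
    print(f"J estimated from runs: {rate_from_waiting_times(times, V):.3e} m^-3 s^-1")
```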
The simplest ones (mean lifetime and survival probability methods) involve fitting the nucleation times to Poisson statistics. A more in-depth technique, the so-called mean first-passage method, allows for a detailed analysis of the nucleus population but requires a probability distribution in terms of nucleus size. The literature offers a notable number of works in which brute-force MD simulations have been successfully applied. Most of them rely on one approach or another to circumvent the above-mentioned issues, particularly the time-scale problem. As we shall see in Section 2, to simulate nucleation events, one almost always has to either choose a very simple system or increase the level of approximation, sometimes dramatically, for instance by coarse-graining the interatomic potential used. In the previous section, we introduced the time-scale problem, the main reason brute-force MD simulations are generally not feasible when studying crystal nucleation. Enhanced sampling methods alter how the system explores its configurational space, so that nucleation events can be observed within a reasonable amount of computational time. Broadly speaking, one can distinguish between free-energy methods and path-sampling methods, both of which have been extensively discussed elsewhere (see, e.g., refs (156 and 162-164)). Thus, only the briefest of introductions is needed here. Of the many enhanced sampling methods available, only a handful have been applied to crystal nucleation in liquids. This is because information is needed about both the thermodynamics of the system (the free energy barrier for nucleation) and the kinetics of the nucleation process (the kinetic prefactor). When dealing with crystal nucleation in supercooled liquids, free-energy-based methods such as umbrella sampling (US) and metadynamics are rather common. In both cases, and indeed in almost all enhanced sampling methods currently available, the free energy surface of the actual system is coarse-grained by means of one or more order parameters or collective variables. The choice of the order parameter is not trivial and can have dramatic consequences. An external bias is then applied to the system, leading to a modified sampling of the configurational space that allows for the reconstruction of the free energy profile with respect to the chosen order parameter and, thus, for the computation of the free energy barrier (a minimal one-dimensional illustration is sketched below). However, there is a price to be paid: upon introduction of an extra term into the system Hamiltonian, the actual dynamics of the system is to some extent hampered, and much of the insight into the nucleation mechanism is lost. One thus needs complementary methods, usually aimed at estimating the probability for the system, sitting on top of the nucleation barrier in the space of the selected order parameter, to get back to the liquid phase or to evolve into the crystal. Most frequently, such methods are based on some flavor of transition state theory, such as the Bennett-Chandler formulation, and require a massive set of MD or kMC simulations to be performed. On the other hand, the ever-growing family of path-sampling methods can provide direct access to the kinetics of the nucleation process. These approaches again rely on the definition of an order parameter, but instead of applying an external bias potential, an importance sampling is performed so as to enhance the naturally occurring fluctuations of the system.
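Staying for a moment with the free-energy route (umbrella sampling), the following one-dimensional toy sketch shows how a harmonic bias on an order parameter modifies the sampling and how the underlying free energy profile is recovered by removing the bias from the sampled histogram. The double-well "free energy", restraint strength, and single window are toy assumptions; in practice, many overlapping windows combined with WHAM-like reweighting (and a physically meaningful order parameter) are required.

```python
import numpy as np

kT = 1.0                                             # energies in units of kB*T
F = lambda s: 5.0 * (s**2 - 1.0)**2                  # toy double-well "free energy"
bias = lambda s, s0, k=50.0: 0.5 * k * (s - s0)**2   # harmonic umbrella restraint

def sample_biased(s0, nsteps=200000, step=0.05, seed=0):
    """Metropolis sampling of exp(-(F + w)/kT) along the 1D order parameter s."""
    rng = np.random.default_rng(seed)
    s, traj = s0, []
    u = F(s) + bias(s, s0)
    for _ in range(nsteps):
        s_new = s + rng.uniform(-step, step)
        u_new = F(s_new) + bias(s_new, s0)
        if u_new <= u or rng.random() < np.exp(-(u_new - u) / kT):
            s, u = s_new, u_new
        traj.append(s)
    return np.array(traj)

if __name__ == "__main__":
    s0 = 0.0                                   # window centred on top of the toy barrier
    traj = sample_biased(s0)
    hist, edges = np.histogram(traj, bins=40, density=True)
    centres = 0.5 * (edges[1:] + edges[:-1])
    mask = hist > 0
    # unbias: F(s) = -kT ln P_biased(s) - w(s) + const
    F_rec = -kT * np.log(hist[mask]) - bias(centres[mask], s0)
    F_rec -= F_rec.min()
    for s, f in zip(centres[mask], F_rec):
        print(f"s = {s:+.2f}   F_reconstructed = {f:6.2f} kT")
```

Only the region actually visited by the biased window is recovered, which is precisely why several windows (or a history-dependent bias, as in metadynamics) are needed to reconstruct the full profile.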
Within the majority of the path-sampling approaches currently used, including transition interface sampling (TIS) and forward flux sampling (FFS), the ensemble of paths connecting the liquid and the crystal is divided into a series of interfaces according to different values of the order parameter. By sampling the probability with which the system crosses each of these interfaces, a cumulative probability directly related to the nucleation rate can be extracted. Other path-sampling techniques, such as transition path sampling (TPS), rely instead on the sampling of the full ensemble of reactive trajectories. In both cases, by means of additional simulations involving, for example, committor distribution analysis and thermodynamic integration, one can subsequently extract the size of the critical nucleus and the free energy difference between the solid and liquid phases, respectively. Many different path-sampling methods are available, but to our knowledge, only TPS, TIS, and most prominently FFS have allowed for estimates of crystal nucleation rates. Under certain conditions, path-sampling methods do not alter the dynamics of the system, allowing for invaluable insight into the nucleation mechanism. However, they are particularly sensitive to the slow dynamics of strongly supercooled systems, which hinders the sampling of the paths and makes them exceptionally expensive computationally. Although the past few decades have taught us that enhanced sampling techniques are effective in tackling crystal nucleation of colloids (see Section 2.1), Lennard-Jones melts (see Section 2.2), and other atomic liquids (see Section 2.3), only recently have these techniques been applied to more complex systems. One challenging scenario for simulations of nucleation is provided by the formation of crystals from solutions characterized by very low solute concentration. Although this situation is often encountered in real systems of practical interest, it is clearly extremely difficult for MD simulations, even when aided by conventional enhanced sampling techniques, to deal with just a few solute molecules dissolved within a vastly larger number of solvent molecules. In these cases, the diffusion of the solute plays a role of great relevance, and the interaction between solvent and solute can enter the nucleation mechanism itself. Thus, obtaining information about the thermodynamics, let alone the kinetics, of nucleation at very low solute concentrations is presently a formidable task. However, efforts have been devoted to furthering our understanding of, for example, solute migration and solute-nucleus association, as demonstrated by the pioneering works of Gavezzotti and co-workers and, more recently, by Kawska and co-workers. In the latter work, the authors illustrate an approach that relies on the modeling of successive growth steps, in which solute particles (often ions) are progressively added to the (crystalline or not) cluster. After each of these growth steps, a structural optimization of the cluster and the solvent is performed by means of MD simulations. Although this method cannot provide quantitative results in terms of the thermodynamics and/or the kinetics of nucleation, it can, in principle, provide valuable insight into the very early stages of crystal nucleation when dealing with solutions characterized by very low solute concentrations. On a final note, we mention seeded MD simulations.
This technique relies on simulations in which a crystalline nucleus of a certain size is inserted into the system at the beginning of the simulation. Although useful information about the critical nucleus size can be obtained in this way, the method does not usually allow for a direct calculation of the nucleation rate. However, seeded MD simulations are one of the very few methods by which it is currently possible to investigate solute-precipitate nucleation (see, e.g., Knott et al.). In this case, the exceedingly low attachment rate of the solute often prevents both free-energy-based and path-sampling-based enhanced sampling methods from being applied effectively. As we shall see in the next few sections, the daunting computational costs, together with the delicate choice of order parameter and the underlying framework of CNT, still make enhanced-sampling simulations of crystal nucleation in liquids an intimidating challenge. We have chosen to review different classes of systems, which we present in order of increasing complexity. We start in Sections 2.1 and 2.2 with colloids and Lennard-Jones liquids, respectively. These systems are described by simple interatomic potentials that allow large-scale MD simulations, and thus with them many aspects of CNT can be investigated and nucleation rates calculated. In some cases, the latter can be directly compared to experimental results. As such, colloids and Lennard-Jones liquids represent a sort of benchmark for MD simulations of crystal nucleation in liquids, although we shall see that our understanding of crystal nucleation is far from satisfactory even within these relatively easy playgrounds. In Section 2.3, we discuss selected atomic liquids of technological interest, such as liquid metals, supercooled liquid silicon, and phase-change materials, for which nucleation occurs on very small time scales. As the first example of a molecular system, we review the body of computational work devoted to unraveling both the homogeneous (Section 2.4.1) and heterogeneous (Section 2.4.2) formation of ice, offering a historical perspective guiding the reader through the many advances that have furthered our understanding of ice nucleation in the past decades. Next, we present an overview of nucleation from solution (Section 2.5), where simulations have to deal with both solute and solvent. We take into account systems of great practical relevance, such as urea molecular crystals, highlighting the complexity of the nucleation mechanism, which is very different from what CNT predicts. Finally, Section 2.6 is devoted to the formation of gas hydrates. As a general rule, increasing the complexity of the system raises more questions about the validity of the assumptions underpinning CNT. The reader will surely notice that simulations have revealed many drawbacks of CNT along the way and that reaching decent agreement for the nucleation rate between experiments and simulations still remains a formidable task. Starting with colloids, hard-sphere systems have long been a favorite playground for nucleation studies. One reason for this is the simplicity of the interatomic potential customarily used to model them: the only interaction a hard-sphere particle experiences comes from elastic collisions with other particles. Because there is no attractive force between particles, a hard-sphere system is entirely driven by entropy. As a consequence, the phase diagram is very simple and can be entirely described with one single parameter, the volume fraction.
Only two different phases are possible: a fluid and a crystal. At volume fractions φ < 0.494, the system is in its fluid state; at 0.494 < φ < 0.545, the system is a mixture of fluid and crystalline states; and at φ > 0.545, the thermodynamically most stable phase is the crystal (a short numerical illustration of this bookkeeping is given below). The transformation from fluid to crystal occurs through a first-order phase transition. Ideal hard spheres cannot be realized exactly in experiments, but colloids made of polymers are commonly used for this purpose, the most prominent example being poly(methyl methacrylate) (PMMA) spheres coated with a layer of poly(12-hydroxystearic acid). After the spheres have been synthesized, they are suspended in a mixture of cis-decalin and tetralin, which enables the use of a wide range of powerful optical techniques to investigate nucleation. The possibility of using these large hard spheres in nucleation experiments has two major advantages: first, a particle size larger than the wavelength used in microscopy experiments makes it possible to track the particle trajectories in real space; in addition, nucleation occurs in a matter of seconds, which allows experimentalists to follow the complete nucleation process in detail. Compared to other systems, it is therefore possible to observe the critical nucleus directly, for example by confocal microscopy (see Section 1.2), which is of crucial importance for understanding nucleation. These qualities of hard-sphere systems make them ideal candidates to advance our understanding of nucleation. As such, it is not surprising that the freezing of hard spheres is better characterized than any other nucleation scenario, and in fact a number of excellent reviews in this field already exist. Our aim here is thus not to give a detailed overview of the field but to highlight some of the milestones and key discoveries and connect them to other nucleation studies. To keep the discussion reasonably brief, we limit the latter to neutral and perfectly spherical hard-sphere systems. However, we note that a sizable amount of work has been devoted to a diverse range of colloidal systems, such as nonspherical particles, charged particles, and mixtures of different colloidal particles, to name just a few. Readers interested in the state of the art in about 2000 are referred to other reviews. In the early 2000s, two major advances in the field were made, one on the theoretical side and the other experimental. In 2001, Auer and Frenkel computed absolute nucleation rates of a hard-sphere system using kMC simulations. They did so by calculating Pcrit, the probability of forming a critical nucleus spontaneously, and the kinetic prefactor. They found that the experimental and theoretical nucleation rates disagreed by several orders of magnitude. This was surprising, because simulations had previously described all sorts of properties of hard spheres very well. It was worrisome because only very few, sound approximations were made by Auer and Frenkel to obtain their nucleation rates. The authors' suggestion, that the problem lay in the experiments or, more precisely, in the interpretation of the experiments, showed a possible way to resolve the discrepancy. In the same year, Gasser et al. conducted ground-breaking experiments, imaging the nucleation of a colloidal suspension in real space using confocal microscopy. Four snapshots of their system, containing approximately 4000 particles, are shown in Figure 6.
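Coming back to the phase-diagram bookkeeping at the beginning of this passage, the following minimal sketch computes the packing fraction of a hard-sphere configuration in a cubic box and reads off the corresponding region of the phase diagram using the boundaries quoted above. The sphere diameter and box sizes are illustrative.

```python
import numpy as np

FREEZING, MELTING = 0.494, 0.545   # hard-sphere coexistence boundaries quoted above

def packing_fraction(n_spheres, diameter, box_length):
    """Volume fraction phi = N * pi * sigma^3 / (6 * V) for a cubic box."""
    return n_spheres * np.pi * diameter**3 / (6.0 * box_length**3)

def stable_phase(phi):
    if phi < FREEZING:
        return "fluid"
    if phi <= MELTING:
        return "fluid-crystal coexistence"
    return "crystal"

if __name__ == "__main__":
    sigma = 1.0                                                # sphere diameter (reduced units)
    for N, L in ((4000, 16.70), (4000, 15.90), (4000, 15.30)): # illustrative boxes
        phi = packing_fraction(N, sigma, L)
        print(f"N = {N}, L = {L:5.2f}  ->  phi = {phi:.3f}  ({stable_phase(phi)})")
```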
Gasser et al.'s direct imaging was a significant step, because nucleation had previously been investigated only indirectly, for example using the structure factor obtained from light-scattering experiments. In their study, they were able to directly measure the size of a critical nucleus for the first time. Achieving sufficient temporal and spatial resolution at the same time is possible thus far only for colloidal systems (for more details about experimental techniques, see Section 1.2). They found that the nucleus was rather aspherical, with a rough surface; both of these effects are completely neglected in CNT. Note that aspherical nuclei also appear in LJ systems, for example (see Section 2.2.1). In addition, a random hexagonal close-packed (rhcp) structure for the hard spheres was observed, in good agreement with Auer and Frenkel. This is interesting, because slightly different systems, such as soft spheres and Lennard-Jones particles, seem to favor body-centered-cubic (bcc) stacking. However, Gasser et al.'s study did not resolve the discrepancy between experimental and simulated nucleation rates, as their results were in agreement with earlier small-angle light-scattering experiments. Figure 6 caption: Red (large) and blue (small) spheres show crystal-like and liquid-like particles, respectively. The size of the observed volume is 58 μm by 55 μm by 20 μm, containing about 4000 particles. After shear melting of the sample, snapshots were taken after (a) 20, (b) 43, (c) 66, and (d) 89 min. Much of the subsequent work focused on trying to resolve this discrepancy between experiments and simulation. Schöpe et al. found experimental evidence supporting a two-step crystallization process (see Section 1.1.2) in hard-sphere systems. Other systems, such as proteins and molecules in solution (see Section 2.5), were well-known at that time to crystallize through a more complex mechanism than that assumed by CNT. Even for hard-sphere systems, two-step nucleation processes had been reported before 2006; the occurrence of this mechanism was, however, attributed to details of the polydispersity of the hard spheres. The new insight provided by Schöpe et al. in 2006 and 2007 was that the two-step nucleation process is general and, as such, does not depend on either polydispersity or volume fraction. In 2010, simulations performed by Schilling et al. supported these experimental findings. Using unbiased MC simulations, Schilling et al. observed crystallization proceeding through dense precursor regions in which crystalline order only subsequently develops: not even the simplest model system seemed to follow the traditional picture assumed in CNT. Could this two-step mechanism explain why the computational rates disagreed with experiments? At first, it seems like a tempting explanation, because Auer and Frenkel had to introduce order parameters to calculate absolute nucleation rates. Such a procedure, however, automatically presupposes a reaction pathway, which might not necessarily match the nucleation pathway taken in experiments. Filion et al. showed in the same year, however, that very different computational approaches [brute-force MD, US, and FFS, which we described earlier (see Section 1.3.2)] led to the same nucleation rates, all in agreement with Auer and Frenkel. They therefore concluded that the discrepancy between simulations and experiments did not lie in the computational approach employed by Auer and Frenkel.
Filion et al. offered two possible explanations, one being that hydrodynamic effects, completely neglected in the simulations, might play a role, and the other being possible difficulties in interpreting the experiments. Schilling et al. tried to address one of the key issues when comparing experiments with simulations: uncertainties and error estimation. Whereas the determination of the most characteristic quantity in hard-sphere systems, the volume fraction, is straightforward in simulations, experimentalists are confronted with a more difficult task. The typical error in determining the volume fraction experimentally is about 0.004, which translates into an uncertainty in the nucleation rate of about an order of magnitude. Upon taking these considerations into account, the authors concluded that the discrepancy can be explained by statistical errors and uncertainties. Does this mean that the past 10 years of research tried to explain a discrepancy that is actually not there? Filion et al. rightfully pointed out that, whereas the rates from experiments and simulations coincide at high volume fraction, they still clearly disagree in the low-volume-fraction regime. No simple rescaling justified by statistical uncertainty could possibly resolve that discrepancy. In their article, they also addressed a different issue. In a computational study in 2010, Kawasaki and Tanaka obtained, by means of Brownian dynamics, nucleation rates in good agreement with experiments, contrary to the nucleation rates computed by Auer and Frenkel using brute-force MD. It should be noted that Kawasaki and Tanaka did not use a pure hard-sphere potential, but a Weeks-Chandler-Andersen potential instead. Was the approximation of a hard-sphere system, something that can never be fully realized in experiments, the problem all along? What Filion et al. showed is that different computational approaches (brute-force MD, US, and FFS) all lead to the same nucleation rates, all of them in disagreement with what Kawasaki and Tanaka found. Through a detailed evaluation of their approach and that of Kawasaki and Tanaka, they concluded that their own rates are more reliable. The discrepancy was thus back on the table, where it still remains and is as large as ever. For a detailed comparison between experimental and computational rates, we refer the reader to the literature; the message we want to convey here is that the disagreement between simulations and experiments in the simplest system still persists today. It is worth mentioning that this fundamental disagreement between simulations and experiments is not unique to colloids. Other systems, such as water (Sections 2.4.1 and 2.4.2) and molecules in solution (Section 2.5), also show discrepancies of several orders of magnitude in nucleation rates. This long-standing debate is of great relevance to all investigations dealing with systems modeled using any flavor of hard-sphere potential. A notable example in this context is the crystallization of proteins, which are often treated as hard spheres. Despite basically neglecting most of the complexity of these systems, this substantial approximation has allowed for a number of computational studies that, although outside the scope of this review, have certainly contributed to furthering our understanding of the self-assembly of biological particles. Beyond hard spheres, a natural next step is to add attractive interactions between the particles: the Lennard-Jones (LJ) liquid is a widely studied model system that does just that.
It can be seen as the natural extension of the hard-sphere model, to which it becomes equivalent when the strength of the attractive interactions goes to zero. LJ potentials were first introduced in 1924, and since then they have been the subject of countless computational studies. LJ potentials allow for exceedingly fast MD simulations, and a wide range of thermodynamic information is available for them, such as the phase diagram and the interfacial free energy. The stable structure of the LJ solid is a face-centered-cubic (fcc) crystal; slightly less stable in free energy is a hexagonal close-packed (hcp) structure, which, in turn, is significantly more stable than a third, body-centered-cubic (bcc) phase. With his study of liquid argon in 1964, Rahman reported what is probably the first LJ MD simulation. His findings showed good agreement with experimental data for the pair distribution function and the self-diffusion coefficient, thus demonstrating that LJ potentials can properly describe noble gases in their liquid form at ambient pressure. To the best of our knowledge, nucleation of LJ liquids was investigated for the first time in 1969 by de Wette et al. and in 1976 by Mandell et al., for two-dimensional and three-dimensional systems, respectively. Early simulations investigating the condensation of LJ vapors into a liquid already indicated a substantial discrepancy with CNT rates. It is worth noticing that the order parameter for crystal-like particles presented by ten Wolde et al. fostered a considerable amount of later work devoted to improving the order parameters customarily used to describe crystal nucleation from the liquid phase (see, e.g., ref (254)). In 2008, Kalikmanov et al. compared CNT and cDFT (see Section 1.1) calculations with condensation data for argon. They found that CNT spectacularly failed to reproduce the experimental condensation rates, underestimating them by up to 26 orders of magnitude. This disagreement triggered a number of computational studies aimed at clarifying the assumption of the sphericity of the critical nucleus within the freezing of LJ liquids. By embedding pre-existing spherical clusters into supercooled LJ liquids, Bai and Li found values of the critical nucleus size in excellent agreement with CNT within a broad range of temperatures. However, these results have been disputed, for example, by the umbrella sampling simulations of Wang et al., as well as by the path-sampling investigation of Moroni et al.; in both cases, significant deviations from the simple CNT picture were found. In addition, Moroni et al. pointed out that whether a nucleus is critical is determined by a nontrivial interplay between the shape, the size, and the degree of crystallinity of the cluster. Such a scenario is clearly much more complex than the usual CNT picture, as it violates the capillarity approximation (see Section 1.1.1). Nonspherical nuclei were also observed by Trudu et al., who extended the conventional CNT formula to account for ellipsoidal nuclei. Such a tweak gave much better estimates of both the critical nucleus size and the nucleation barrier. Recall that the shape of the critical nuclei can be observed experimentally in very few cases (see Sections 1.2 and 2.1). However, at very strong supercooling, things fell apart because of the emergence of spinodal effects (see Section 1.1).
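For reference, the capillarity-approximation quantities that seeding studies such as that of Bai and Li compare against (the critical radius, critical size, and barrier for a spherical nucleus) can be evaluated in a few lines. The interfacial free energy, solid density, and driving forces used below are purely illustrative and are not meant to represent the LJ system or any specific material.

```python
import numpy as np

def cnt_critical_cluster(gamma, dmu, rho_solid):
    """
    Capillarity-approximation estimates for a spherical nucleus:
      gamma     : interfacial free energy        [J m^-2]
      dmu       : chemical-potential difference  [J per particle]
      rho_solid : number density of the crystal  [particles m^-3]
    Returns (r_star [m], n_star [particles], dG_star [J]).
    """
    r_star = 2.0 * gamma / (rho_solid * dmu)
    dg_star = 16.0 * np.pi * gamma**3 / (3.0 * rho_solid**2 * dmu**2)
    n_star = 4.0 / 3.0 * np.pi * r_star**3 * rho_solid
    return r_star, n_star, dg_star

if __name__ == "__main__":
    kB, T = 1.380649e-23, 220.0
    gamma = 0.025          # J m^-2, illustrative order of magnitude
    rho_s = 3.1e28         # particles m^-3, illustrative
    for dmu_kT in (0.10, 0.20, 0.40):          # |dmu| in units of kB*T
        r, n, dg = cnt_critical_cluster(gamma, dmu_kT * kB * T, rho_s)
        print(f"|dmu| = {dmu_kT:.2f} kT : r* = {r*1e9:.2f} nm, "
              f"n* = {n:8.0f}, dG* = {dg/(kB*T):7.1f} kT")
```

Even for these toy numbers, the strong dependence of n* and ΔG* on the driving force (and hence on the supercooling) is immediately apparent, which is the point made repeatedly in this section.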
Note that CNT fails at strong supercooling even without the occurrence of spinodal effects, as the time lag (transient time) needed for structural relaxation into the steady-state regime results in a time-dependent nucleation rate. For instance, Huitema et al. showed that incorporating this time dependence into the kinetic prefactor yields an improved estimate of nucleation rates. In fact, by embedding extensions into the original CNT framework, one can, in some cases, recover reasonable agreement between simulations and experiments even at strong supercooling. As an example, Peng et al. also showed that including enthalpy-based terms in the formulation of the temperature dependence of the interfacial free energy substantially improves the outcomes of CNT. Another aspect that has been thoroughly addressed within the crystal nucleation of LJ liquids is the structure of the crystalline clusters involved. The mean-field approach of Klein and Leyvraz suggests a decrease of the nucleus density as well as an increase of the bcc character when moving toward the spinodal region. These findings were confirmed by the umbrella sampling approach of ten Wolde et al., who reported a bcc shell surrounding fcc cores. Furthermore, Wang et al. showed that the distinction between the crystalline clusters and the surrounding liquid phase becomes progressively less sharp as ΔT increases. In fact, the free energy barrier for nucleation, computed by means of umbrella sampling simulations (see Section 1.3.2), was found to be on the order of kBT at ΔT = 52%. In addition, the nuclei undergo substantial structural changes toward nonsymmetric shapes, a finding validated by the metadynamics simulations of Trudu et al. The same authors investigated the nucleation mechanism close to the critical temperature for spinodal decomposition (see Section 1.1.4), where the free energy basin corresponding to the liquid phase turned out to be ill-defined, that is, already overlapping with the free energy basin of the crystal. Such a finding suggested that, below this temperature, there is no free energy barrier for nucleation, indicating that the liquid is unstable rather than metastable and that the crystallization mechanism has changed from nucleation toward the more collective process of spinodal decomposition (see Section 1.1.4 and Figure 1). Insights into the interplay between nucleation and polymorphism have been provided by the simulations of ten Wolde et al., among others, suggesting that, within the early stages of the nucleation process, the crystalline clusters are bcc-like, later turning into fcc crystalline kernels surrounded by bcc shells. A cDFT study has also been performed to determine the difference between the free energy barriers required for the creation of an fcc or a bcc critical nucleus. In addition, the difficulty of nucleating the three different crystal orientations of fcc was ranked (100) > (110) > (111). These studies confirm the presence of a two-step mechanism (see Section 1.1.2) and the validity of Ostwald's step rule for the LJ model. As we will see later (e.g., for homogeneous ice nucleation, Section 2.4.1), nucleation through metastable phases has also been observed for more complicated liquids. Important contributions regarding polymorph control during crystallization were made by Desgranges and Delhommelle, who investigated nucleation under different thermodynamic conditions.
By keeping the temperature constant and altering the pressure, they were able to influence the fraction of bcc particles, up to the point where the nucleus was almost purely bcc-like. Calculation of the bcc-liquid coexistence line in the phase diagram showed that these nucleation events occurred in the bcc existence domain. Additionally, the transformation from fcc to hcp during crystal growth, well after the critical nucleus size has been reached, was studied by changing the temperature at constant pressure. As depicted in Figure 7, at ΔT = 10% a small number of hcp atoms were observed surrounding the fcc core, whereas at ΔT = 22% much larger hcp domains formed within the crystallite, suggesting that the conversion from hcp to fcc is hindered at stronger supercooling. On a final note, we emphasize that many findings related to polymorphism are often quite dependent on the choice of the order parameters employed. This issue is not limited to LJ systems, and it is especially important when dealing with similarly dense liquid and crystalline phases (e.g., metallic liquids), where order parameters usually struggle to properly distinguish the different crystalline phases from the liquid. In particular, it remains to be seen whether the fractional bcc, fcc, and hcp contents of the LJ nuclei that we have discussed will stand the test of the latest generation of order parameters. Figure 7 caption: Cross section of postcritical crystalline clusters of 5000 LJ particles for ΔT = (a) 10% and (b) 22%. fcc-, hcp-, and bcc-like particles are depicted in gray, yellow, and red, respectively. At ΔT = 22%, substantial hcp domains form within the crystallite, whereas at ΔT = 10%, hcp particles can be found almost exclusively on the surface of the fcc core. Reprinted with permission from ref (247). Heterogeneous crystal nucleation has also been investigated for a variety of LJ systems. Wang et al. used umbrella-sampling simulations (see Section 1.3.2) to calculate the free energy barrier for heterogeneous nucleation of an LJ liquid on top of an ideal impurity, represented by a single fcc (111) layer of LJ particles. By explicitly varying the lattice spacing of the substrate, a_sub, they calculated the barrier as a function of a_sub - a_equi, where a_equi is the lattice spacing of the equilibrium crystalline phase. The barrier is lowest for a_sub - a_equi = 0, whereas for large values of a_sub - a_equi, nucleation proceeds within the bulk of the supercooled liquid phase. These findings support the early argument of zero lattice mismatch introduced by Turnbull and Vonnegut to justify the striking effectiveness of AgI crystals in promoting heterogeneous ice nucleation. In fact, in several situations one can define a disregistry, or lattice mismatch, as

δ = (a_sub - a_equi) / a_equi   (7)

Values of δ close to or even equal to zero have often been celebrated as the main ingredient that makes a crystalline impurity particularly effective in promoting heterogeneous nucleation. However, the universality of this concept has been severely questioned in the past few decades, as we shall see in Section 2.4.2 for heterogeneous ice nucleation. Nonetheless, it seems that the argument regarding zero lattice mismatch can hold for certain simple cases, as demonstrated by Mithen and Sear, who studied heterogeneous nucleation of LJ liquids on the (111) and (100) faces of an fcc crystal by means of FFS simulations (see Section 1.3.2). They reported a maximum in the heterogeneous nucleation rate for a small, albeit nonzero, value of δ (see Figure 8).
The difference between their study and that of Wang et al. is simply that many more values of δ were taken into account by Mithen and Sear, thus allowing the position of the maximum to be determined more precisely. On a different note, Dellago et al. performed TIS simulations (see Section 1.3.2) to investigate heterogeneous crystal nucleation of supercooled LJ liquids on very small crystalline impurities. They found that even tiny crystalline clusters of just 10 LJ particles can actively promote nucleation and that the morphology of the substrate can play a role as well. Specifically, whereas fcc-like clusters were rather effective in enhancing nucleation rates, no substantial promotion was observed for icosahedrally ordered seeds. Figure 8 caption: Nucleation rates computed with the FFS method for a rigid hexagonal surface of LJ atoms in contact with an LJ liquid. Potentials 1 and 2 describe the interaction between substrate and liquid and differ only slightly in the value of the interaction parameter they use. These results show that the maximum in the nucleation rate occurs at nonzero values of the lattice mismatch. Reprinted with permission from ref (271). MC simulations performed by Page and Sear have demonstrated that confinement effects can be of great relevance as well. They computed heterogeneous nucleation rates for an LJ liquid confined between two flat crystalline planes meeting at a certain angle, θ_sub. A maximum of the nucleation rate was found for a specific value of θ_sub, boosting the rate by several orders of magnitude with respect to the promoting effect of a flat crystalline surface. In addition, different values of θ_sub led to the formation of different crystalline polymorphs. The influence of structured and structureless LJ potential walls on nucleation rates has also been probed recently; both types of wall were found to increase the temperature at which nucleation occurs. We shall see in Section 2.4.2 that the interplay between the morphology of the substrate and the strength of the liquid-substrate interaction is crucial for heterogeneous ice nucleation as well. MD simulations of LJ liquids are computationally cheap, making them the perfect candidates to examine how finite-size effects impact crystal nucleation. The seminal work of Honeycutt and Andersen took into account up to 1300 LJ particles at strong supercooling, which turned out to be too few particles to completely rule out the effects of periodic boundary conditions. In fact, the authors suggested that extra care had to be taken because of the diffuseness of the interface between the supercooled liquid phase and the crystalline nucleus, which can induce an artificial long-range order in the system, leading to a nonphysically high nucleation rate. These findings are particularly relevant, as the critical nucleus size at this supercooling is on the order of just a few tens of particles, representing a tiny fraction of the whole system. Only a few years later, Swope and Andersen investigated the same effects by taking into account up to 10^6 LJ particles subjected to the same strong supercooling as probed by Honeycutt and Andersen. According to their large-scale MD simulations, system sizes orders of magnitude larger than the critical nucleus are needed to obtain results free from such artifacts. This outcome must be carefully pondered, as currently the vast majority of simulations dealing with the crystallization of realistic systems cannot afford to take into account system sizes 3 orders of magnitude larger than the size of the critical nucleus. The nucleation of an LJ liquid has also been examined over a wide range of temperatures (70-140 K).
Although nonphysical instantaneous crystallization was observed for systems on the order of 500 particles, simulation boxes containing about 10000 particles seemed to be free from finite-size effects. A novel class of finite-size effects, unrelated to periodic boundary conditions, has also been described recently: the equilibrium density of critical nuclei can effectively influence the absolute value of computed nucleation rates. Specifically, at very strong supercooling the critical nuclei will on average form very shortly after the transient time, whereas at mild ΔT the stochastic nature of nucleation will lead to a considerable scatter of the nucleation times. In other words, in the latter scenario, either exceedingly large systems must be taken into account, or a sizable number of independent simulations must be performed to deal with the long tails of the distribution of nucleation times. Moving on to atomic liquids of technological interest (see Section 2.3), a variety of effective interatomic potentials have been devised to make large-scale simulations of these systems affordable. Examples include the Sutton-Chen potentials for several metals and the Tosi-Fumi potential for molten salts such as NaCl. Terms accounting for the directionality of covalent bonds have been included, for example, in the Stillinger-Weber potential for Si, the bond-order potentials of Tersoff for Si, GaAs, and Ge, and the reactive potential of Brenner for carbon-based systems. Another class of interatomic potentials is based on the concept of local electronic density and includes, for instance, the Finnis-Sinclair potentials for metallic systems, the whole family of embedded-atom-method (EAM) potentials, and the glue potential for Au and Al. Many of these potentials are still incredibly cheap in terms of computer time, thus allowing for large-scale, unbiased MD simulations. Recently, massively parallel MD runs succeeded in nucleating supercooled liquid Al and Fe using an EAM potential and a Finnis-Sinclair potential, respectively. As an exceedingly large number of atoms was taken into account, actual grain boundaries were observed, providing unprecedented insight into the crystal growth process. The nucleation of bcc Fe crystallites and the evolution of the resulting grain boundaries at different temperatures can be appreciated in Figure 9a. The sizable dimension of the simulation boxes (50 nm) allowed nucleation events to be observed within hundreds of picoseconds, and grain coarsening (i.e., the process by which small crystallites end up incorporated into larger ones) is also clearly visible. Mere visual inspection of the nucleation trajectories depicted in Figure 9a suggests different nucleation regimes as a function of temperature. In fact, the same authors calculated a temperature profile of the nucleation rate, shown in Figure 9b, which demonstrates the emergence of a maximum in the rate, a feature characteristic of diffusion-limited nucleation (see Section 1.1.1). Figure 9 caption: Crystal nucleation of supercooled Fe by means of large-scale MD simulations. (a) Snapshots of the nucleation and growth process; yellow circles highlight small crystalline grains doomed to be incorporated into larger ones later on because of grain coarsening. (b) Nucleation rate as a function of temperature. Reprinted with permission from ref (290). A field that has greatly benefited from MD simulations is the crystallization of metal clusters, as nicely reviewed by Aguado and Jarrold. For instance, it is possible to probe the interplay between the size of the clusters and the cooling rate upon crystal nucleation and growth.
In this context, Shibuta reported three different outcomes for supercooled liquid Mo nanoparticles modeled by means of a Finnis-Sinclair potential, namely, the formation of a bcc single crystal, a glassy state, or a polycrystalline phase. In some cases, nucleation rates obtained from simulations were consistent with CNT, as in the case of Ni nanodroplets, for which nucleation events were again observed by means of brute-force MD simulations using the Sutton-Chen potential. The influence of the redox potential on the nucleation process has also been investigated. Milek and Zahn employed an enhanced flavor of the EAM potential to study the nucleation of Ag nanoparticles from solution. They established that the outcome of nucleation events is strongly influenced by the strength of the redox potential, which is able to foster either a rather regular fcc phase or a multitwinned polycrystal. Similar to what was done for LJ liquids, the effects of confinement were assessed by Pan and Shou for Au nanodomains modeled using the glue potential. Lü and Chen instead investigated surface-layering-induced crystallization of Ni-Si nanodroplets using a modified EAM potential. It seems that, for this particular system, atoms proximal to the free surface of the droplet assume a crystalline-like ordering on very short time scales, thus triggering crystallization in the inner regions of the system. No such effect has been reported in the case of surface-induced crystallization in supercooled tetrahedral liquids such as Si and Ge, as investigated by Li et al. through FFS simulations employing both Tersoff and Stillinger-Weber potentials. The presence of the free surface facilitates crystal nucleation for this class of systems as well, but surface layering was not observed. Instead, the authors claimed that the surface reduces the free energy barrier for nucleation, as it introduces a pressure-dependent term in the volume free energy change expected for the formation of the crystalline clusters. The situation is quite different for surface-induced ice nucleation, at least according to the coarse-grained mW model of Molinero and Moore. In fact, Haji-Akbari et al. recently investigated ice nucleation in free-standing films of supercooled mW water using both FFS and US, finding that, in these systems, crystallization is inhibited in the proximity of the vapor-liquid interface. Very recently, Gianetti et al. extended the investigation of Haji-Akbari et al. to the crystallization of a whole family of modified Stillinger-Weber liquids with different degrees of tetrahedrality, locating a crossover from surface-enhanced to bulk-dominated crystallization in free-standing films as a function of the degree of tetrahedrality. Another seminal study by Li et al., again using FFS, focused on homogeneous ice nucleation within supercooled mW water nanodroplets, where nucleation rates turned out to be strongly size dependent and in general consistently smaller (by several orders of magnitude) than in the bulk case. FFS was also applied by Li et al. to examine homogeneous nucleation of supercooled Si. FFS has also been successful in predicting homogeneous crystal nucleation rates in molten NaCl, modeled using a Tosi-Fumi potential by Valeriani et al. Large discrepancies between their results and experimental nucleation rates can be appreciated when CNT is used to extrapolate the calculations to the milder supercooling probed by the actual measurements.
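To illustrate the kind of CNT-based extrapolation just mentioned for the molten-NaCl rates, the following sketch fits the standard ΔG* ∝ 1/(T·ΔT²) temperature dependence (i.e., ln J = a − b/(T(Tm − T)²)) to a few deeply supercooled "simulation" points and then extrapolates to a milder supercooling. The melting temperature, data points, and target temperature are hypothetical; the point is only that small uncertainties at strong supercooling are amplified enormously by this extrapolation.

```python
import numpy as np

Tm = 1074.0   # melting temperature used for the supercooling scale (illustrative, in K)

def cnt_form(T, a, b):
    """ln J = a - b / (T * (Tm - T)^2), a common CNT-based form used
    to extrapolate simulated rates to milder supercooling."""
    return a - b / (T * (Tm - T)**2)

if __name__ == "__main__":
    # hypothetical deeply supercooled "simulation" points: (T in K, log10 J in m^-3 s^-1)
    T_sim = np.array([750.0, 800.0, 850.0])
    lnJ_sim = np.log(10.0) * np.array([33.0, 31.0, 28.0])
    # linear least squares in the variable x = 1/(T*(Tm-T)^2):  ln J = a - b*x
    x = 1.0 / (T_sim * (Tm - T_sim)**2)
    A = np.vstack([np.ones_like(x), -x]).T
    (a, b), *_ = np.linalg.lstsq(A, lnJ_sim, rcond=None)
    T_exp = 1000.0   # milder supercooling typical of experiments (hypothetical)
    print(f"extrapolated log10 J({T_exp:.0f} K) = {cnt_form(T_exp, a, b) / np.log(10):.1f}")
```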
given that the authors obtained consistent results using two different enhanced sampling methods , this study hints again at the many pitfalls of cnt . a unique example of a class of materials for which nucleation can be effectively addressed by brute - force md simulations is given by so - called phase - change materials . these systems are of great technological interest as they are widely employed in optical memories ( e.g. , dvd - rw ) and in a promising class of nonvolatile memories known as phase - change memory , based on the fast and reversible transition from the amorphous to the crystalline phase . although crystal nucleation in amorphous systems , especially metallic and covalent glasses , is beyond the scope of this review , we refer the reader to the excellent work of kelton and greer for a detailed introduction . here we just note that in phase - change memories the amorphous phase is often heated above the glass transition temperature , so that crystal nucleation occurs within the supercooled liquid phase . phase - change materials used in optical and electronic devices are typically tellurium - based chalcogenide alloys ( see ref ( 305 ) ) . the family of the pseudobinary compounds ( gete)x(sb2te3)y represents a prototypical system . although both the structure and dynamics of these systems are far from trivial , nucleation from the melt takes place on the nanosecond time scale for a wide range of supercooling . thus , with phase - change materials , we have a great opportunity to investigate nucleation in a complex system by means of brute - force md simulations . we note that the crystallization of these systems has been extensively characterized by different experimental techniques [ particularly tem and afm ( see section 1.2 ) ; the crystallization kinetics has also been recently investigated by means of ultrafast - heating calorimetry and ultrafast x - ray imaging ] , but because of the exceptionally high nucleation rates , it is difficult to extract information about the early stages of the nucleation process . unfortunately , an accurate description of phase - change materials requires ab initio methods or sophisticated interatomic potentials with first - principles accuracy . in fact , several attempts have been made to study nucleation in phase - change materials by ab initio md in very small systems . although these studies provided useful insights into the nucleation mechanism , severe finite - size effects prevented the full characterization of the crystallization process . the limited length and time scales typical of first - principles calculations were recently outstripped in the case of the prototypical phase - change material gete by the capabilities of a neural - network interatomic potential . such potentials allow for a computational speedup of several orders of magnitude compared to conventional ab initio methods while retaining an accuracy close to that of the latter . although nucleation rates have not yet been calculated , detailed investigations of homogeneous and heterogeneous nucleation have already been reported . for instance , as shown in figure 10 , a single crystalline nucleus formed in a 4000-atom model of supercooled liquid gete in the 625 - 675 k temperature regime within a few hundred picoseconds . on the same time scale , several nuclei appeared below 600 k , suggesting that the free energy barrier for nucleation is vanishingly small for this class of materials just above the glass transition temperature .
this is because of the fragility of the supercooled liquid , which displays a substantial atomic mobility even at large supercoolings . thus , in this particular case , the kinetic prefactor ( see eq 5 ) is not hindered that much by the strong supercooling , whereas the free energy difference between the liquid and the crystal ( see eq 1 ) skyrockets as expected , leading to the exceedingly high nucleation rates characteristic of these materials . fast crystallization of supercooled gete by means of md simulations with neural - network - derived potentials . the number of crystalline nuclei larger than 29 atoms at different temperatures in the supercooled liquid phase is reported as a function of time ( notice the exceedingly small time scale at strong supercooling ) . two snapshots at the highest and lowest temperatures showing only the crystalline atoms are also reported . at high temperature , a single nucleus is present , whereas several nuclei ( each one depicted in a different color ) appear at low temperature . copyright 2013 american chemical society . in conclusion , whereas md simulations have by no means exhausted the field of crystal nucleation of atomic liquids , they have certainly provided insight into a number of interesting systems and paved the way for the study of more complex systems , as we shall see in the following sections . ice nucleation impacts many different areas , ranging from aviation to biological cells and earth's climate . it is therefore not surprising that a considerable body of work has been carried out to understand this fundamental process . we cannot cover it all here ; instead , we give a general overview of the field , starting with a discussion of nucleation rates . this allows us to directly compare experiments and simulations and to identify strengths and weaknesses of different approaches . experimental nucleation rates have been measured over a broad range of temperatures , most often with micrometer - sized water droplets so as to avoid heterogeneous nucleation . in figure 11 , we bring together nucleation rates obtained from various experiments , along with computed nucleation rates . compilation of homogeneous nucleation rates for water , obtained by experiments and simulations . the x axis shows the supercooling with respect to the melting point of different water models or 273.15 k for experiment . the y axis shows the logarithm of the nucleation rate in m^-3 s^-1 . rates obtained with computational approaches are shown as solid symbols ; experimental rates are shown as crossed symbols . for each computational study , the computational approach and the water force field used are indicated . one computational study is not included in this graph , because it was conducted at a small supercooling ( 20 k ) , which resulted in a very low estimated nucleation rate far outside this plot ( it would correspond to about −83 on the y axis ) . taborek performed measurements with different setups , namely , using sorbitan tristearate ( sts ) and sorbitan trioleate ( sto ) as surfactants . data for the graph were taken from refs ( 124 , 299 , 301 , and 322 - 335 ) . accessing nucleation rates from md simulations became feasible only in the past few years as a result of advances in force fields ( such as the coarse - grained mw potential ) and enhanced sampling techniques described earlier ( see section 1.3.2 ) . these methods have therefore been widely used for studies of not only homogeneous but also heterogeneous nucleation ( see section 2.4.2 ) .
several features emerge from the comparison of experimental and computational nucleation rates reported in figure 11 . first , nucleation rates vary hugely with supercooling , by many orders of magnitude . second , nucleation rates differ substantially ( approximately 10 orders of magnitude ) between simulations ( solid symbols ) and experiments ( crossed symbols ) at relatively small supercoolings ( 30 - 50 k ) . at larger supercoolings , the agreement appears to be slightly better , even though very few simulations have been reported at very strong supercooling . the third striking feature is that , whereas the experimental results agree well with each other ( within 1 - 2 orders of magnitude ) , the computational rates differ from each other by many orders of magnitude . what is the cause of disagreement between different computational approaches ? part of the reason is certainly that different water models lead to different rates ; see , for example , espinosa et al . yet , even if the same water model is employed , the rates do not agree with each other very well . an early study by moore and molinero succeeded in calculating the avrami exponent for the crystallization kinetics of ice from brute - force md simulations at very strong supercooling , obtaining results remarkably similar to experiment . however , the situation concerning absolute mw nucleation rates turned out to be far less encouraging . in fact , li et al . and reinhardt and doye both performed simulations using the mw model , obtaining nucleation rates that differed by about 5 orders of magnitude . the only major difference was the enhanced sampling technique employed , ffs by li et al . and us by reinhardt and doye . the statistical uncertainties of the two approaches ( 1 - 2 orders of magnitude ) are much smaller than the 5-orders - of - magnitude discrepancy between the two studies . it was also shown that the two methods agree very well with each other for colloids , for example ( see section 2.1 ) . the use of different computational approaches therefore also seems to be unlikely as the source of the disagreement . what the cause is remains elusive . because we cannot cover all of the work shown in figure 11 in detail here , we now discuss just two studies . the first is that of sanz et al . , which agrees best with the experimental rates . the authors used the tip4p/2005 and tip4p / ice water models in combination with seeded md simulations ( see section 1.3.1 for more details ) . seeding involves considerably more assumptions than , for example , us or ffs . in particular , the approach assumes a cnt - like free energy profile , although it does not usually employ the macroscopic interfacial free energy . furthermore , the temperature dependence of key quantities such as the interfacial free energy and the thermodynamic driving force ( see section 1.1.1 ) is approximated . nevertheless , their nucleation rates seemingly agree with experiment better than those obtained with other approaches . a more recent work by espinosa et al . follows a similar strategy . however , it should be noted that the good agreement between the nucleation rates reported in refs ( 51 and 326 ) and the experimental data could originate from error cancellation . in fact , whereas the rather conservative definition of crystalline nucleus adopted in these works will lead to small nucleation barriers ( and thus to higher nucleation rates ) , the tip4p family of water models is characterized by small thermodynamic driving forces for nucleation , which , in turn , results in smaller nucleation rates .
the second work we briefly discuss here is the very recent study ( 2015 ) of haji - akbari and debenedetti . the authors directly calculated the nucleation rate at 230 k of an all - atom model of water ( tip4p / ice ) using a novel ffs sampling approach . this was a tour de force , but strikingly , their rates differed from experiment by about 11 orders of magnitude . the authors noted that this might be as close as one can actually get to experiment with current classical water models . this is because of the extreme sensitivity of nucleation rates to thermodynamic properties such as the interfacial free energy γ and the driving force Δμ , which , according to cnt , enter exponentially ( section 1.1.1 ) into the definition of the nucleation rate . for instance , an uncertainty of only 6 - 7% in γ at 235 k leads to an error of about 9 orders of magnitude in the nucleation rate ( a back - of - the - envelope illustration of this sensitivity is given below ) . experimental estimates for γ range from 25 to 35 mn / m ; computational estimates range from about 20 to 35 mn / m . as another example , haji - akbari and debenedetti explicitly quantified the extent to which the tip4p / ice model underestimates the free energy difference between the crystalline and liquid phases and found that this mismatch alone leads to an overestimation of the free energy barrier for nucleation of about 60% , which translates into an error in the nucleation rate of up to 9 orders of magnitude . in fact , taking into account such a discrepancy brings the results of haji - akbari and debenedetti within the confidence interval of the experimental data . thus , it is clear that we simply do not know some key quantities accurately enough to expect perfect agreement between simulations and experiments . in addition to issues of modeling water / ice accurately , finite - size effects can be expected to also play a role [ as they do with lennard - jones systems ( section 2.2 ) and molecules in solution ( section 2.5 ) ] . only recently was this issue addressed explicitly for ice nucleation by english and tse in unbiased simulations with the mw model . they were able to simulate systems containing nearly 10 million water molecules on a microsecond time scale and found that larger systems favor the formation of crystallization precursors compared to smaller ones . interestingly , lifetimes of the precursors were found to be less sensitive to system size . nevertheless , a quantitative understanding of finite - size effects on nucleation rates remains elusive . in summary , it can be said that , in terms of accurate nucleation rates , experiments are still clearly superior to simulations . however , the advantage of simulations is that the nucleation mechanism can also be obtained , which , at present , is not possible with experiments , although femtosecond x - ray laser spectroscopy might be able to partially overcome this limitation in the near future . matsumoto et al . in 2002 were the first to report a nucleation event in an unbiased simulation based on an all - atom model of water ( tip4p ) . their landmark work opened the doors to the study of ice nucleation at an atomistic level . they found that nucleation took place once a sufficient number of long - lived hydrogen bonds had formed , giving rise to a nucleus of ice . recent evidence suggests that , most likely , their nucleation trajectory was driven by finite - size effects . together with the simulations of vrbka and jungwirth , also affected by severe finite - size effects , the work of matsumoto et al . remains , to date , the only seemingly unbiased md simulation observing homogeneous ice nucleation with an all - atom force field .
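as a back - of - the - envelope illustration of the sensitivity discussed above ( not part of the original studies ) , the following sketch propagates a small change in the interfacial free energy γ through the cnt barrier , ΔG* = 16πγ³ / ( 3ρ²Δμ² ) , and into the rate j ∝ exp( −ΔG*/kbt ) ; the values of ρ , Δμ , and γ are rough , hypothetical estimates used only for illustration .

```python
# a rough sketch (not from the review) of why ice nucleation rates are so
# sensitive to the ice-liquid interfacial free energy gamma. within cnt,
# J ~ J0 * exp(-dG*/kBT) with dG* = 16*pi*gamma^3 / (3*rho^2*dmu^2).
# all numbers below are hypothetical, order-of-magnitude estimates.
import numpy as np

kB = 1.380649e-23          # J/K
T = 235.0                  # K, strong supercooling
rho_ice = 3.1e28           # number density of ice, molecules/m^3 (approximate)
dmu = 1.4e-21              # J per molecule, rough driving force at this T

def barrier_over_kT(gamma):
    """cnt free energy barrier in units of kBT for a given gamma (J/m^2)."""
    dG = 16.0 * np.pi * gamma**3 / (3.0 * rho_ice**2 * dmu**2)
    return dG / (kB * T)

gamma0 = 0.030             # J/m^2 (30 mN/m), within the range quoted in the text
for scale in (1.00, 1.07): # compare gamma with a ~7% larger value
    print(f"gamma = {gamma0*scale*1e3:.1f} mN/m -> dG*/kBT = {barrier_over_kT(gamma0*scale):.1f}")

# change in log10(J) for the same kinetic prefactor:
d_log10_J = (barrier_over_kT(gamma0) - barrier_over_kT(1.07 * gamma0)) / np.log(10)
print(f"~{abs(d_log10_J):.0f} orders of magnitude change in J from a 7% change in gamma")
```

with these hypothetical inputs , a 7% change in γ already shifts the predicted rate by several orders of magnitude , in line with the sensitivity quoted above .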
what really enabled the community to investigate ice formation at a molecular level was the development of the coarse - grained mw potential for water in the early 2010s . using unbiased md simulations based on the mw force field , moore and molinero in 2011 provided evidence that , in the supercooled regime around the homogeneous nucleation temperature , th , the fraction of 4-fold - coordinated water molecules increases sharply prior to a nucleation event . in a separate work , the same authors suggested that , at very strong supercooling , the critical nucleus is mostly made of cubic ice , which subsequently evolves into a mixture of stacking - disordered cubic and hexagonal ice layers . in the same year , li et al . identified another structural motif that might play a role in ice nucleation . they consistently observed a topological defect structure in growing ice nuclei in their ffs simulations based on the mw representation of water . this defect , depicted in figure 12a , can be described as a twin boundary with 5-fold symmetry , and it has also been observed in nucleation simulations of tetrahedral liquids simulated with the stillinger - weber potential , on which the mw coarse - grained model was built . ( a ) formation of a topological defect with 5-fold symmetry during homogeneous ice nucleation . the snapshots ( i - iv ) show the time evolution of the defect structure , indicated by black dashed lines . reprinted with permission from ref ( 322 ) ( copyright 2011 royal society of chemistry ) . ( b ) nucleation of an ice cluster forming homogeneously from i0 - rich precritical nuclei . water molecules belonging to ic , ih , a clathrate - like phase , and i0 are depicted in yellow , green , orange , and magenta , respectively . ( iii ) the crystalline cluster evolves into a postcritical nucleus , formed by an ic - rich core surrounded by an i0 - rich shell . ( iv ) the same postcritical nucleus as depicted in iii , but only particles with 12 or more connections ( among ice - like particles ) are shown . the color map refers to the order parameter q12 specified in ref ( 325 ) , from which this image was reprinted with permission ( copyright 2014 nature publishing group ) . the unbiased md simulations on which the analysis is based feature 10000 mw molecules . in 2012 , another significant leap in understanding the nucleation mechanism of ice from a structural point of view was made by combining experimental and computational techniques . specifically , malkin et al . showed that ice forming homogeneously is stacking - disordered ( the corresponding ice structure was called isd ) , meaning that it is made out of cubic and hexagonal ice layers stacked in a random fashion . in 2014 , two studies substantiated the potential relevance of precursor structures prior to ice formation . the first reported evidence of a liquid - liquid transition in supercooled water in a molecular model of water ( st2 ) . in their study , the authors sampled the energy landscape of supercooled water and found two metastable liquid basins corresponding to low - density ( ldl ) and high - density ( hdl ) water . the appealing idea behind the transition from hdl to ldl prior to ice nucleation is that ldl is structurally closer to ice than hdl . note that the existence of two metastable liquid basins was not a general finding : the mw model does not have a basin for ldl , for example . indeed , the presence of this liquid - liquid phase transition is a highly debated issue .
another conceptually similar idea is ice formation through ice 0 ( i0 ) , proposed by russo et al . instead of a liquid - liquid phase transition that transforms water into another liquid state prior to nucleation , the authors proposed a new ice polymorph ( i0 ) to bridge the gap between supercooled water and ice . i0 is a metastable ice polymorph and is structurally similar to the supercooled liquid . it has a low interfacial energy with both liquid water and ice ic / ih . russo et al . therefore proposed i0 to bridge liquid water to crystalline ic / ih . indeed , the authors found i0 at the surface of growing ice nuclei in md simulations ; we show part of a nucleation trajectory in figure 12b . furthermore , they showed that the shape of the nucleation barrier is much better described by a core - shell - like model ( ic / ih core surrounded by i0 ) compared to the classical nucleation model . this is important , because it suggests that models that are based solely on cnt assumptions might not be appropriate for describing homogeneous ice nucleation . however , the emergence of i0 has not yet been reported by any other nucleation study , including the recent work of haji - akbari and debenedetti that we previously mentioned in the context of nucleation rates . in that work , the authors performed a topological analysis of the nuclei , validated by the substantial statistics provided by the ffs simulations . as depicted in figure 13 , the majority of nuclei that reached the critical nucleus size contained a large amount of double - diamond cages ( ddcs , the building blocks of ic ) , whereas nuclei rich in hexagonal cages ( hcs , the building blocks of ih ) had a very low probability to overcome the free energy barrier for nucleation . in addition , even postcritical nuclei had a high content of ddcs , whereas hcs did not show any preference to appear within the core of the postcritical nuclei . this evidence is consistent with the findings reported in ref ( 323 ) and in contrast with the widely invoked scenario in which a kernel of thermodynamically stable polymorph ( in this case , ih ) is surrounded by a shell of a less stable crystalline structure ( in this case , ic ) . ( left ) a typical double - diamond cage ( ddc , blue ) and a hexagonal cage ( hc , red ) , the building blocks of ic and ih , respectively . ( right ) temporal evolution of an ice nucleus from ( i , ii ) the early stages of nucleation up to ( v , vi ) postcritical dimensions , as observed in the ffs simulations of haji - akbari and debenedetti . about 4000 water molecules , modeled with the tip4p / ice potential , were considered in the npt ensemble at a supercooling of about 40 k. one can clearly notice the abundance of ddcs throughout the whole temporal evolution . in contrast , hc - rich nuclei have only a marginal probability to cross the nucleation barrier ( see text ) . reprinted with permission from ref ( 327 ) . copyright 2015 national academy of sciences . in the past few years , the understanding of homogeneous ice nucleation has improved dramatically . we now have a good understanding of the structure of ice that forms through homogeneous nucleation , stacking - disordered ice . furthermore , there is very good agreement ( within 2 orders of magnitude ) between experimental nucleation rates in a certain temperature range . computational methods face the problem of being very sensitive to some key thermodynamic properties ; the nucleation rates they predict are therefore less accurate .
on the other hand , they allow us to study conditions that are very challenging to probe experimentally , and they also provide insight into the molecular mechanisms involved in the crystallization process . as mentioned in the previous section , homogeneous ice nucleation becomes extremely slow at moderate supercooling . we do not , for example , have to wait for temperatures to reach −30 °c before we have to use a deicer on our car windows . in fact , the formation of ice in nature occurs almost exclusively heterogeneously , thanks to the presence of foreign particles . these ice - nucleating agents facilitate the formation of ice by lowering the free energy barrier for nucleation ( see figure 1 ) . indeed , the work of sanz et al . , in which homogeneous ice nucleation was studied using seeded md ( see section 1.3.1 ) , found rates so low at supercoolings milder than 20 k that they concluded that all ice nucleation in this temperature range must occur heterogeneously . homogeneous nucleation is still of great importance in atmospheric processes and climate modeling , as under certain conditions , both heterogeneous and homogeneous nucleation are feasible routes toward the formation of ice in clouds , as reported in ref ( 354 ) , for example . in addition to the challenges ( both computational and experimental ) faced when investigating homogeneous ice nucleation , one also has to consider the structure of the water - surface interface and how this impacts the nucleation rate . generally , the experimental data for the rates and characterization of the interfacial structure come from two different communities : climate scientists have provided much information on how various particles , often dust particles or biological matter such as pollen , affect ice nucleation ( as depicted in figure 14 ) , whereas surface scientists have invested a great deal of effort in trying to understand , at the molecular level , how water interacts with and assembles itself at surfaces ( see , e.g. , ref ( 355 ) ) . this means that there is a huge gap in our understanding , as the surfaces of the particles used to obtain rates are often not characterized , whereas surface science experiments are generally carried out at pristine , often metallic , surfaces under ultrahigh - vacuum conditions . we will see in this section that computational studies have gone some way toward bridging this gap , although there is still much work to be done should we wish to quantitatively predict a material's ice - nucleating efficacy . potential immersion - mode ice nucleus concentrations , n_ice , a measure of the efficiency of a given substance to boost heterogeneous ice nucleation , as a function of temperature for a range of atmospheric aerosol species . note the wide range of nucleating capability for materials as diverse as soot and bacterial fragments over a very broad range of temperatures . copyright 2012 royal society of chemistry . from a computational perspective , it is the surface science experiments that lend themselves most readily to modeling . in fact , even relatively expensive computational methods such as density functional theory ( dft ) , which have not featured much in this article , have proven indispensable in furthering our understanding of how water behaves at surfaces , especially when used in conjunction with experiments ( see , e.g. , refs ( 355 and 357 ) for an overview ) .
as such , early computational studies focused on understanding how the surface affected the first few layers of water , especially with respect to the concept of lattice mismatch ( see section 2.2 ) , where a surface that has a structure commensurate with ice acts as a template for the crystal . nutt and co - workers investigated the adsorption structures of water at a model hexagonal surface and at baf2(111 ) using interaction potentials derived from ab initio calculations . although the surfaces under investigation had structures that matched the basal face of ice well , they found disordered structures of water to be more favorable than ice - like overlayers . using dft , hu and michaelides investigated the adsorption of water on the ( 001 ) face of the clay mineral kaolinite , a known ice - nucleating agent in the atmosphere . the ( 001 ) surface of kaolinite exposes a pseudohexagonal arrangement of oh groups that were proposed to be the cause of its good ice - nucleating ability . although they found that a stable ice - like layer could form at the surface , the amphoteric nature of the kaolinite surface , depicted in figure 15 , meant that all of the water molecules could participate in four hydrogen bonds , making further growth on top of the ice - like layer unfavorable . the adsorption of water on kaolinite has also been investigated using the clayff and spc / e potentials and grand canonical monte carlo ( gcmc ) . although some hexagonal patches of water were seen in the contact layer , the overall structure was mostly disordered , and the hexagonal structures that did form were strained relative to those found in ice . also using gcmc , cox et al . found that , for atomically flat surfaces , a nominally zero lattice mismatch produced disordered contact layers comprising smaller - sized rings ( i.e. , pentagons and squares ) , and they observed hexagonal ice - like layers only for surfaces with larger lattice constants . the amphoteric nature of kaolinite is important to its ice - nucleating ability . ( left ) ice - like contact layers at the kaolinite surface , with the ( a , c ) basal and ( b , d ) prism faces of ice adsorbed on kaolinite , as viewed from the ( a , b ) side and ( c , d ) top . ( right ) adsorption energy of ice on kaolinite when bound through either its basal face ( red data ) or its prism face ( blue data ) for varying numbers of layers of ice . ( open and solid symbols indicate data obtained with a classical force field and with dft , respectively . ) when only the contact layer is present , the basal face structure is more stable than the prism face structure , but as soon as more layers are present , the prism face structure becomes more stable . this can be understood by the ability of the prism face to donate hydrogen bonds to the surface , and to the water molecules above , through the dangling hydrogen bonds seen in b and d. reprinted with permission from ref ( 371 ) . up until around 2010 , the above types of study were the state of the art for simulations of heterogeneous ice nucleation . although they provided evidence that properties such as lattice match alone are insufficient to explain a material's ice - nucleating ability , because ice nucleation itself was not directly observed , only inferences could be drawn about how certain properties might actually affect ice nucleation . yan and patey investigated the effects of electric fields on ice nucleation using brute - force molecular dynamics ( the electric fields were externally applied and were not due to an explicit surface ) .
they found that the electric field needed to act over only a small range ( e.g. , 10 ) and that the ice that formed near the surface was ferroelectric cubic ice , although the rest of the ice that formed above was not . cox et al . performed simulations of heterogeneous ice nucleation in which the atomistic natures of both the water and the surface were simulated explicitly , using tip4p/2005 water and clayff to describe kaolinite . despite being affected by finite - size effects , the simulations revealed that the amphoteric nature of kaolinite is important to ice nucleation . in the liquid , a strongly bound contact layer was observed , and for ice nucleation to occur , significant rearrangement in the above water layers was required . it was found that ice nucleated with its prism face , rather than its basal face , bound to the kaolinite , which was unexpected based on the theory that the pseudohexagonal arrangement of oh groups at the surface was responsible for templating the basal face . cox et al . rationalized the formation of the prism face of ice at the kaolinite surface as being due to its ability to donate hydrogen bonds both to the surface and to the water molecules above ( see figure 15 ) , whereas the basal face maximizes hydrogen bonding to the surface only . more recent simulation studies , employing rigid and constrained models of kaolinite , have also found the amphoteric nature of the kaolinite surface to be important . however , the heterogeneous nucleation mechanism of water on clays is yet to be validated by unconstrained simulations unaffected by substantial finite - size effects . as in the case of simulations of homogeneous ice nucleation , the use of the coarse - grained mw potential has seen the emergence of computational studies that actually quantify the ice - nucleating efficiencies of different surfaces . recently , lupi et al . investigated ice nucleation at carbonaceous surfaces ( both smooth graphitic and rough amorphous surfaces ) using cooling ramps to measure nonequilibrium freezing temperature shifts Δtf = tf − tf(homo) , where tf is the temperature at which ice nucleates in the presence of a surface and tf(homo) = 201 ± 1 k is the temperature at which homogeneous ice nucleation occurs . it was found that the rough amorphous surface did not enhance ice nucleation ( Δtf = 0 k ) , whereas the smooth graphitic surfaces promoted ice nucleation ( Δtf = 11 - 13 k ) . this was attributed to the fact that the smooth graphitic surface induced a layering in the density profile of water above the surface , whereas the rough amorphous surface did not . lupi and molinero quantified the extent of layering by means of an order parameter built from ρ(z) , the density of water at a height z above the surface , and the bulk reference value ρ0 = ρ(zbulk ) , where zbulk is a height at which the density profile is bulk - like ( a hedged sketch of one such layering measure is given below ) . in a subsequent work using the same methodology , lupi and molinero investigated how the hydrophilicity of graphitic surfaces affected ice nucleation . the hydrophilicity of the surface was modified in two different ways : first , by uniformly modifying the water - surface interaction strength and , second , by introducing hydrophilic species at the surface . it was found that the two ways produced qualitatively different results : uniformly modifying the interaction potential led to enhanced ice nucleation , whereas increasing the density of hydrophilic species was detrimental to ice nucleation ( although the surfaces still enhanced nucleation relative to homogeneous nucleation ) .
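as the precise definition of the layering order parameter is not reproduced here , the following sketch shows one plausible way ( an assumption , not the definition actually used by lupi and molinero ) to quantify layering from an md trajectory : the density profile ρ(z) of water oxygens above the surface is histogrammed , and its integrated deviation from the bulk value ρ0 over the interfacial region is taken as a measure of layering .

```python
# a minimal, assumption-laden sketch of quantifying water "layering" above a
# surface from oxygen heights z in an md trajectory. this is only one plausible
# definition; the order parameter of lupi and molinero is not reproduced here.
import numpy as np

def density_profile(z_coords, area, z_max, nbins=200):
    """return bin centers and number density rho(z) for heights z in [0, z_max]."""
    hist, edges = np.histogram(z_coords, bins=nbins, range=(0.0, z_max))
    dz = edges[1] - edges[0]
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist / (area * dz)

def layering(z_coords, area, z_interface=10.0, z_bulk=(20.0, 30.0), z_max=30.0):
    """integrated relative deviation of rho(z) from bulk over the first z_interface angstroms."""
    z, rho = density_profile(z_coords, area, z_max)
    rho0 = rho[(z > z_bulk[0]) & (z < z_bulk[1])].mean()   # bulk-like reference
    mask = z < z_interface
    return np.trapz(np.abs(rho[mask] - rho0), z[mask]) / (rho0 * z_interface)

# hypothetical usage: oxygen heights (angstroms) above the surface, box cross-section (angstrom^2)
rng = np.random.default_rng(0)
z_demo = rng.uniform(0.0, 30.0, size=50000)      # stand-in for real trajectory data
print(f"layering = {layering(z_demo, area=40.0*40.0):.3f} (close to 0 for a structureless profile)")
```

a structureless ( bulk - like ) profile gives a value close to zero , whereas a strongly layered interfacial region gives larger values , which is the qualitative behavior exploited in the works discussed here .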
lupi and molinero concluded that hydrophilicity is not a good indicator of the ice - nucleating ability of graphitic surfaces . as for the difference between increasing the hydrophilicity by uniformly modifying the interaction potential and by introducing hydrophilic species , lupi and molinero again saw that the extent of layering in water's density profile above the surface correlated well with the ice - nucleating efficacy . the general applicability of the layering mechanism , however , was left as an open question . cox et al . addressed the question of the general applicability of the layering mechanism by investigating ice nucleation rates over a wider range of hydrophilicities ( by uniformly changing the interaction strength ) on two surfaces with different morphologies : ( i ) the ( 111 ) surface of a face - centered - cubic lj crystal ( fcc-111 ) that provided distinct adsorption sites for the water molecules and ( ii ) a graphitic surface , similar to that of lupi et al . although it was found that the layering mechanism ( albeit with a slight modification to the definition used by lupi et al . ) could describe the ice - nucleating behavior of the graphitic surface , at the fcc-111 surface , no beneficial effects of layering were observed . this was attributed to the fact that the fcc-111 surface also affected the structure of the water molecules in the second layer above the surface , in a manner detrimental to ice nucleation . it was concluded that layering of water above the surface can be beneficial to ice nucleation , but only if the surface presents a relatively smooth potential energy surface to the water molecules . the studies at the carbonaceous and fcc-111 surfaces discussed above hinted that the heterogeneous nucleation mechanism could be very different at different types of surfaces . although there is experimental evidence that , for example , different carbon nanomaterials are capable of boosting ice nucleation ( see , e.g. , ref ( 378 ) ) , most experiments can only quantify the ice - nucleating ability of the substrates ( see section 1.2 ) . however , the structure of the water - substrate interface and any insight into the morphology of the nuclei are typically not available , making simulations essential to complement the experimental picture . in this respect , zhang et al . showed that the ( regular ) patterning of a generic crystalline surface at the nanoscale can strongly affect ice formation . more generally , the interplay between hydrophobicity and surface morphology was recently elucidated by fitzner et al . brute - force md simulations of heterogeneous ice nucleation were performed for the mw water model on top of several crystalline faces of a generic fcc crystal , taking into account different values of the water - surface interaction strength , as well as different values of the lattice parameter . the latter is involved in the rather dated concept of zero lattice mismatch , which we introduced in section 2.2 ( see eq 7 ) and which has often been quoted as the main requirement of an effective ice - nucleating agent . however , a surprisingly nontrivial interplay between hydrophobicity and morphology was observed , as depicted in figure 16 . clearly , neither layering nor lattice mismatch alone is sufficient to explain such a diverse scenario .
in fact , the authors proposed three additional microscopic factors that can effectively aid heterogeneous ice nucleation on crystalline surfaces : ( i ) an in - plane templating of the first water overlayer on top of the crystalline surface ; ( ii ) a first overlayer buckled in an ice - like fashion ; and ( iii ) enhanced nucleation in regions of the liquid beyond the first two overlayers , possibly aided by dynamical effects and/or structural templating effects of the substrate extending past the surface - water interface . in addition , it turned out that different lattice parameters can lead to the nucleation and growth of up to three different faces of ice [ basal , prismatic , and secondary prismatic ( { 1120 } ) ] on top of the very same surface , adding a layer of complexity to the nucleation scenario . insights into the interplay between hydrophobicity and morphology were also very recently obtained by bi et al . , who investigated heterogeneous ice nucleation on top of graphitic surfaces by means of ffs simulations using the mw model . among their findings , the authors suggested that the efficiency of ice - nucleating agents can be a function not only of surface chemistry and surface crystallinity but of the elasticity of the substrate as well . ( a ) heat maps representing the values of ice nucleation rates on top of four different fcc surfaces [ ( 111 ) , ( 100 ) , ( 110 ) , ( 211 ) ] , plotted as a function of the adsorption energy , eads , and the lattice parameter , afcc . the lattice mismatch with respect to ice on ( 111 ) is indicated below the corresponding graph in panel a. the values of the nucleation rate , j , are reported as log10( j / j0 ) , where j0 refers to the homogeneous nucleation rate at the same temperature . ( b ) sketches of the different regions ( white areas ) in ( eads , afcc ) space in which a significant enhancement of the nucleation rate is observed . each region is labeled according to the face of ih nucleating and growing on top of the surface [ basal , prismatic , or secondary prismatic ( { 1120 } ) ] , together with an indication of what it is that enhances the nucleation , where temp , buck , and highe refer to the in - plane template of the first overlayer , the ice - like buckling of the contact layer , and nucleation at high adsorption energies on compact surfaces , respectively . reinhardt and doye used umbrella sampling with the mw model to investigate nucleation at a smooth planar interface and at an ice - like surface . they found that the flat planar interface did not help nucleate ice and that homogeneous nucleation was the preferred pathway . one explanation given for this finding was that , as the density of liquid water is higher than that of ice , an attractive surface favors the liquid phase . it was also noted that the mw potential imposes an energy penalty for nontetrahedral triplets , that removing neighbors at the surface decreases this energetic penalty , and that this reduced penalty favors the liquid phase . cabriolu and li recently studied ice nucleation at graphitic surfaces using forward flux sampling , again with the mw model . under the assumptions that the driving force for nucleation depends linearly on the supercooling and that the interfacial free energy does not depend on temperature , cabriolu and li also extracted the values of the contact angle at different temperatures , which , along with the free energy barrier , turned out to be consistent with cnt for heterogeneous nucleation ( see section 1.1.3 ) .
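the cnt framework for heterogeneous nucleation invoked above relates the heterogeneous and homogeneous barriers through a contact - angle factor , ΔG*_het = f(θ) ΔG*_homo with f(θ) = ( 2 + cos θ )( 1 − cos θ )² / 4 ; the short sketch below ( with hypothetical barrier ratios , not the values of cabriolu and li ) shows how an effective contact angle can be extracted by inverting this relation numerically .

```python
# a short sketch of the standard cnt relation for heterogeneous nucleation,
# dG*_het = f(theta) * dG*_homo with f(theta) = (2 + cos)(1 - cos)^2 / 4,
# and a numerical inversion to recover an effective contact angle from a
# barrier ratio. the barrier ratios below are hypothetical placeholders.
import numpy as np
from scipy.optimize import brentq

def f_cnt(theta_rad):
    c = np.cos(theta_rad)
    return (2.0 + c) * (1.0 - c) ** 2 / 4.0

def contact_angle_from_barrier_ratio(ratio):
    """invert f(theta) = dG*_het / dG*_homo for theta in degrees (0 < ratio < 1)."""
    theta = brentq(lambda t: f_cnt(t) - ratio, 1e-6, np.pi - 1e-6)
    return np.degrees(theta)

for ratio in (0.1, 0.3, 0.5):   # hypothetical het/homo barrier ratios
    print(f"dG*_het/dG*_homo = {ratio:.1f} -> theta ~ {contact_angle_from_barrier_ratio(ratio):.0f} deg")
```

since f(θ) grows monotonically from 0 ( complete wetting ) to 1 ( no assistance from the surface ) , a single barrier ratio maps onto a single effective contact angle , which is the kind of consistency check performed in the study discussed above .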
although intriguing , the generality of this finding to surfaces that include strong and localized chemical interactions remains an open question . we have seen that , for both homogeneous and heterogeneous nucleation , using the coarse - grained mw model has greatly enhanced our ability to perform quantitative , systematic simulation studies of ice nucleation . we must face the fact , however , that this approach will further our understanding of heterogeneous ice nucleation only so far . as discussed for kaolinite , an explicit treatment of the hydrogen bonds is essential in describing heterogeneous ice nucleation . in addition , the mw model ( as well as the majority of fully atomistic water models ) can not take into account surface - charge effects . surfaces can polarize water molecules in the proximity of the substrate , alter their protonation state , and even play a role in determining the equilibrium structure of the liquid at the interface . in light of recent studies , it seems that these effects can heavily affect nucleation rates of many different systems . as discussed , enhanced sampling techniques such as umbrella sampling and forward flux sampling have been applied to heterogeneous ice nucleation with the mw model , and we have seen the latter applied successfully to homogeneous nucleation with an all - atom model of water ; the computational cost , however , was huge . although the presence of an ice - nucleating agent should help reduce this cost , the parameter space that we wish to study is large , and systematically studying how the various properties of a surface affect ice nucleation requires the investigation of many different surfaces . simulating heterogeneous ice nucleation under realistic conditions does not mean just mild supercooling ; we also need realistic models of the surfaces that we wish to study ! most studies of kaolinite have considered only the planar interface , even though , in nature , kaolinite crystals have many step edges and defects . ice nucleation at agi was also recently studied , although bulk truncated structures for the exposed crystal faces were used . in the case of agi(0001 ) , this is problematic , as the wurtzite structure of the crystal means that this basal face is polar and likely to undergo reconstruction . furthermore , agi is photosensitive , and it has been shown experimentally that exposure to light enhances its ice - nucleating efficacy , suggesting that structural motifs at the surface very different from those expected from the bulk crystal structure are important . the development of computational techniques to determine surface structures , along with accurate force fields to describe the interaction with water , will be essential if we are to fully understand heterogeneous ice nucleation . understanding crystal nucleation from solution is a problem of great practical interest , influencing , for instance , pharmaceutical , chemical , and food processing companies . being able to obtain a microscopic description of nucleation and growth would allow the selection of specific crystalline polymorphs , which , in turn , can have an enormous impact on the final product . an ( in)famous case illustrating the importance of this issue is the drug ritonavir , originally marketed as solid capsules to treat hiv . this compound has at least two polymorphs : the marketed and thoroughly tested polymorph ( pi ) and a second more stable crystalline phase pii that appeared after pi went to market . 
pii is basically nonactive as a drug because of a much lower solubility than pi . as such and , most importantly , because of the fact that pii had never been properly tested , ritonavir was withdrawn from the market in favor of a much safer alternative in the form of gel capsules . many other examples could be listed , as various environmental factors ( such as the temperature , the degree of supersaturation , the type of solvent , and the presence of impurities ) can play a role in determining the final polymorph of many classes of molecular crystals . thus , it is highly desirable to pinpoint a priori the conditions leading to the formation of a specific polymorph possessing the optimal physical / chemical properties for the application of interest . the term nucleation from solution encompasses a whole range of systems , from small molecules in aqueous or organic solvents to proteins , peptides , and other macromolecular systems in their natural environment . these systems are very diverse , and a universal nucleation framework is probably not applicable to all of these cases . the review by davey et al . discusses the role of the solvent in determining the final crystal . many aspects of the nucleation of solute precipitates from solution were recently reviewed by agarwal and peters . in this section , we focus on md simulations of nucleation from solution . a central issue with such simulations is the choice of order parameters able to distinguish different polymorphs . many of these collective variables have been used in enhanced - sampling simulations ( see section 1.3.2 ) . md simulations of nucleation from solution are particularly challenging because of finite - size effects due to the nature of the solute / solvent system . in the nvt and npt ensembles , where md simulations of nucleation are usually performed , the ratio between the numbers of solute molecules in the crystalline phase and in the solution varies during the nucleation events , leading to a change in the chemical potential of the system . this occurrence has negligible effects in the thermodynamic limit , but it can substantially affect the outcomes of , for example , free - energy - based enhanced - sampling simulations ( a toy illustration of this depletion effect is sketched below ) . simulations of models containing a large number of molecules can alleviate the problem , although this is not always the case . an analytic correction to the free energy for npt simulations of nucleation of molecules from solution was proposed in refs ( 394 and 404 ) on the basis of a number of previous works ( see , e.g. , refs ( 152 , 400 , and 405 ) ) and applied later in ref ( 406 ) as well . alternative approaches include seeded md simulations ( see section 1.3.1 ) and simulations mimicking the grand canonical ensemble ( μvt ) , in which the number of molecules in solution is not constant . in this context , it is worth noticing that nucleation of molecules in solution is a challenging playground for experiments as well . for instance , quantitative data about nucleation of ionic solutions are amazingly hard to find within the current literature . this is in stark contrast with the vast amount of data covering , for example , ice nucleation ( as illustrated in section 2.4 ) . among the countless organic compounds , urea molecules can be regarded as a benchmark for md simulation of nucleation from solution . this is because urea is a system of great practical importance that ( i ) displays fast nucleation kinetics and ( ii ) has only one experimentally characterized polymorph .
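as a toy illustration of the depletion effect mentioned above ( not taken from the cited works ) , the following sketch shows how the supersaturation s = c/c_sat in a closed simulation box decreases as molecules leave the solution to join a growing cluster ; the saturation mole fraction is a hypothetical placeholder , whereas the numbers of solute and solvent molecules are chosen to resemble the small urea boxes discussed next .

```python
# a toy illustration (not from the cited works) of finite-size depletion:
# in a closed box, every molecule that joins the growing cluster is removed
# from solution, so the supersaturation s = c/c_sat (and hence the driving
# force kB*T*ln(s)) decreases as the nucleus grows.
import numpy as np

N_solute = 300          # total solute molecules in the box (as in a small urea box)
N_solvent = 3173        # solvent molecules (as in a small urea box)
c_sat = 0.03            # saturation mole fraction, hypothetical placeholder

for n_in_cluster in (0, 20, 50, 100):
    n_dissolved = N_solute - n_in_cluster
    c = n_dissolved / (n_dissolved + N_solvent)   # mole fraction left in solution
    S = c / c_sat
    print(f"cluster size {n_in_cluster:3d}: S = {S:.2f}, ln S = {np.log(S):+.2f}")
```

even a modest cluster of 100 molecules visibly reduces the residual supersaturation in a box of this size , which is the effect the analytic corrections and grand - canonical - like schemes mentioned above are designed to compensate .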
early studies by piana and co - workers focused on the growth rate of urea crystals , which turned out to be consistent with experimental results . years later , the inhibition of urea crystal growth by additives was investigated by salvalaglio et al . the investigation of the early stages of nucleation was tackled only recently by salvalaglio et al . for urea molecules in aqueous and organic ( ethanol , methanol , and acetonitrile ) solvents . in these studies , the resulting free energies , modified for finite - size effects related to the solvent , suggested that different solvents lead to different nucleation mechanisms . whereas a single - step nucleation process is favored in methanol and ethanol , a two - step mechanism ( see section 1.1.2 ) emerges for urea molecules in acetonitrile and water , as depicted in figure 17 . in this case , nucleation proceeds through the initial formation of amorphous , albeit dense , clusters . note that , according to the free energy surface reported in figure 17a , the amorphous clusters ( configurations 2 and 3 in figure 17a , b ) are unstable with respect to the liquid phase ; that is , they are not metastable states having their own free energy basins , but rather , they originate from fluctuations within the liquid phase . this evidence , together with the fact that the transition state ( configuration 4 in figure 17a , b ) displays a fully crystalline core , prompts the following , long - standing question : if the critical nucleus is mostly crystalline and the amorphous precursors are unstable with respect to the liquid phase , can we truly talk about a two - step mechanism ? reference ( 394 ) suggests the terms ripening - regime two - step nucleation when dealing with stable amorphous precursors and crystallization - limited two - step nucleation when the amorphous clusters are unstable and the limiting step is the formation of a crystalline core within the clusters . salvalaglio et al . also observed two polymorphs ( pi and pii ) in the early stages of the nucleation process . pi corresponds to the experimental crystal structure and is the most stable structure in the limit of an infinite crystal . pii , however , is more stable for small crystalline clusters . in agreement with the ostwald rule ( see section 2.2 ) , the small crystalline clusters that initially form in solution are of the pii type , and the subsequent conversion from pii to pi seems to be an almost - barrierless process . ( a ) free - energy surface ( fes ) associated with the early stages of nucleation of urea in aqueous solution , as obtained by salvalaglio et al . from a well - tempered metadynamics simulation of 300 urea molecules and 3173 water molecules , within an isothermal - isobaric ensemble at p = 1 bar and t = 300 k ( simulation s2 in ref ( 406 ) with a correction term to the free energy included to represent the case of a constant supersaturation of 2.5 ) . the contour plot of the fes is reported as a function of the number of molecules belonging to the largest connected cluster ( n , along the ordinate ) and the number of molecules in a crystal - like configuration within the largest cluster ( no , along the abscissa ) . note that n ≥ no by definition and that cnt would prescribe that the evolution of the largest cluster in the simulation box is such that n = no ( i.e. , only the diagonal of the contour plot is populated ) . the presence of an off - diagonal basin provides evidence of a two - step nucleation of urea crystals from aqueous solutions .
this is further supported by the representative states sampled during the nucleation process , shown in panel b. urea molecules are represented as blue spheres , and red connections are drawn between urea molecules falling within a cutoff distance of 0.6 nm of each other . an approach similar to that employed in ref ( 413 ) was used to investigate crystal nucleation of 1,3,5-tris(4-bromophenyl)benzene molecules in water and methanol . these simulations showed the emergence of prenucleation clusters , consistent with recent experimental results based on single - molecule real - time transmission electron microscopy ( smrt - tem ; see section 1.2 ) . the formation of prenucleation clusters in the early stages of nucleation from solution has been observed in several other cases . this is of great relevance , as cnt is not able to account for two- ( or multi- ) step nucleation . md simulations have been of help in several cases , validating or supporting a particular mechanism . for instance , md simulations provided evidence for two - step nucleation in aqueous solutions of glycine and in n - octane ( or n - octanol ) solutions of d - /l - norleucine . sodium chloride ( nacl ) nucleation from supersaturated brines represents an interesting challenge for simulations , as the system is relatively easy to model and experimental nucleation rates are available . the first simulations of nacl nucleation date back to the early 1990s , when ohtaki and fukushima performed brute - force md simulations using very small systems ( 448 molecules including water molecules and ions ) and exceedingly short simulation times ( 10 ps ) . thus , the formation of small crystalline clusters that they observed was most likely a consequence of finite - size effects . more recently , the tps simulations of zahn suggested that the centers of stability for nacl aggregates consist of nonhydrated na ions octahedrally coordinated with cl ions , although the results were related to very small simulation boxes ( containing 310 molecules in total ) . tentative insight into the structure of the crystalline clusters came with the work of nahtigal et al . , featuring simulations of 4132 molecules ( 4000 water molecules and 132 ions ) in the 673 - 1073 k range for supercritical water at different densities ( 0.17 - 0.34 g / cm3 ) . they reported a strong dependence of the crystalline cluster size distribution on the system density , with larger clusters formed at lower densities . the emergence of amorphous precursors was also reported in the work of chakraborty and patey , who performed large - scale md simulations featuring 56000 water molecules and 4000 ion pairs in the npt ensemble . the spc / e model was used for water , and the ion parameters were those used in the opls force field . their findings provided strong evidence for a two - step mechanism of nucleation , where a dense but unstructured nacl nucleus is formed first , followed by a rearrangement into the rock salt structure , as depicted in figure 18a . on a similar note , metadynamics simulations performed by giberti et al . using the gromos force field for the ions and the spc / e model for water suggested the emergence of a wurtzite - like polymorph in the early stages of nucleation . this precursor could be an intermediate state along the path from brine to the nacl crystal . however , alejandre and hansen pointed out a strong sensitivity of the nucleation mechanism to the choice of force field .
( a ) snapshots from an md simulation of crystal nucleation of nacl from aqueous solution . the simulations , carried out by chakraborty and patey , involved 56000 water molecules and 4000 ion pairs ( concentration of 3.97 m ) in the npt ensemble . all na ( black ) and cl ( yellow ) ions within 2 nm of a reference na ion ( larger and blue ) are shown , together with water molecules ( oxygen and hydrogen atoms in red and white , respectively ) within 0.4 nm from each ion . starting from the relatively homogeneous solution ( 3 ns ) , a fluctuation in the concentration of the ions produces a dense , disordered cluster , which subsequently orders ( 10 ns ) in a crystalline fashion ( 30 ns ) , consistent with a two - step nucleation mechanism . ( b ) comparison of nacl nucleation rates , j , as a function of the driving force for nucleation , reported as 1/( Δμ / kbt ) . red points and blue and gray ( continuous ) lines were estimated by three different approaches in the simulations of zimmermann et al . experimental data obtained employing an electrodynamic levitator trap ( na et al . ) , an efflorescence chamber ( gao et al . ) , and microcapillaries ( desarnaud et al . ) are also reported , together with a tentative fit ( fitexp , dotted line ) . note the substantial ( up to about 30 orders of magnitude ) discrepancy between experiments and simulations . in fact , very recent simulations by zimmermann et al . demonstrated that the gromos force field overestimates the stability of the wurtzite - like polymorph . the authors employed a seeding approach within an nvt setup for which the absence of depletion effects was explicitly verified . the force fields used were those developed by joung and cheatham for na and cl and spc / e for water , which provide reliable solubilities and accurate chemical potential driving forces . using a methodology introduced in ref ( 193 ) , a thorough investigation demonstrated that the limiting factor for the attachment of ions to the growing crystalline clusters , which , in turn , strongly affects the kinetics of nucleation ( see section 1.1.1 ) , is not the diffusion of the ions within the solution but rather the desolvation process needed for the ions to get rid of the solvent and join the clusters . moreover , zimmermann et al . evaluated the nucleation free energy barrier as well as the nucleation rate as a function of supersaturation , providing three estimates using different approaches . the results are compared with experiments in figure 18b , showing a substantial discrepancy as large as 30 orders of magnitude . interestingly , experimental nucleation rates are much smaller than what is observed in simulations , contrary to what has been observed for colloids , for example ( see section 2.1 ) . we stress that the work of zimmermann et al . employed state - of - the - art computational techniques and explored nacl nucleation under different conditions using a variety of approaches . the fact that these tour de force simulations yielded nucleation rates that differed significantly from experiments casts yet another doubt on the possibility of effectively comparing experiments and simulations . however , it must be noted that zimmermann et al . assumed a value of about 5.0 molnacl / kgh2o for the nacl solubility in water , as proposed in ref ( 432 ) ; substantially lower values have been reported in other works ( 3.64 molnacl / kgh2o ) and more recently by mester and panagiotopoulos ( 3.71 molnacl / kgh2o ) . adopting a lower solubility would change the estimated driving force and thus the computed rates , once again demonstrating the severe sensitivity of nucleation rates to any of the ingredients involved in their calculations .
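collective variables of the kind used throughout this subsection ( e.g. , the size n of the largest connected cluster and the number no of crystal - like molecules within it , as in figure 17 , or the crystalline cluster sizes monitored for nacl ) are typically computed along the lines of the following sketch ; the distance cutoff matches the 0.6 nm connectivity criterion quoted above , whereas the crystallinity test ( a simple neighbor count ) is a crude placeholder for the order parameters actually employed in the works discussed here .

```python
# a schematic sketch of cluster-based collective variables: n, the size of the
# largest connected cluster of solute molecules (connected = within a cutoff),
# and no, the number of "crystal-like" molecules inside it. the crystallinity
# test below (a plain neighbor count) is a placeholder, not the actual order
# parameter used in the cited studies.
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def largest_cluster_variables(positions_nm, cutoff=0.6, crystal_min_neighbors=8):
    tree = cKDTree(positions_nm)
    pairs = tree.query_pairs(r=cutoff, output_type='ndarray')
    n_mol = len(positions_nm)
    # adjacency matrix of the "connected" graph
    adj = csr_matrix((np.ones(len(pairs)), (pairs[:, 0], pairs[:, 1])), shape=(n_mol, n_mol))
    _, labels = connected_components(adj, directed=False)
    sizes = np.bincount(labels)
    big = np.argmax(sizes)
    n = int(sizes[big])                                   # size of largest cluster
    # crude "crystal-like" test: highly coordinated molecules within the cluster
    neigh_counts = np.bincount(pairs.ravel(), minlength=n_mol)
    no = int(np.sum((labels == big) & (neigh_counts >= crystal_min_neighbors)))
    return n, no

# hypothetical usage with random coordinates standing in for solute centers of mass
rng = np.random.default_rng(1)
coords = rng.uniform(0.0, 5.0, size=(300, 3))   # nm, 300 solute molecules
print(largest_cluster_variables(coords))
```

biasing or monitoring ( n , no ) rather than a single cluster size is what allows the off - diagonal , amorphous - precursor basin of figure 17 to be resolved in the first place .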
on a final note , we stress that many other examples of molecular dynamics simulations looking at specific aspects of crystal nucleation from solution exist in the literature . for instance , a recent study by anwar et al . describes secondary crystal nucleation , where crystalline seeds are already present within the solution . the authors suggest , for a generic solution represented by lennard - jones particles , a ( secondary ) nucleation mechanism enhanced by the existence of pncs ( see section 1.1.2 ) . kawska et al . stressed instead the importance of proton transfer within the early stages of nucleation of zinc oxide nanoclusters from an ethanol solution . the emergence of similar ripening processes , selecting specific crystalline polymorphs , for example , according to the effect of different solvents is still fairly unexplored but bound to be of great relevance in the future . finally , several computational studies have dealt with the crystallization of calcium carbonate , which was recently reviewed extensively in ref ( 18 ) and thus , together with the broad topic of crystal nucleation of biominerals , is not discussed here . natural gas hydrates are crystalline compounds in which small gas molecules are caged ( or enclathrated ) in a host framework of water molecules . as natural gas molecules ( e.g. , methane , ethane , propane ) are hydrophobic , gas hydrates are favored by conditions of high pressure and low temperature , and are found to occur naturally in the ocean bed and in permafrost regions . given their exceptionally high gas storage capabilities and the fact that gas hydrates are believed to exceed conventional gas reserves by at least an order of magnitude , there is interest in trying to exploit them as a future energy resource . although gas hydrates might potentially play a positive role in the energy industry's future , they are currently considered a hindrance : if mixed phases of water and natural gas are allowed to cool in an oil pipeline , then a hydrate can form and block the line , causing production to stall . understanding the mechanism(s ) by which gas hydrates nucleate is likely to play an important role in the rational design of more effective hydrate inhibitors . there are two main types of natural gas hydrates : structure i ( si ) , which has a cubic structure ( space group pm-3n ) , and structure ii ( sii ) , which also has a cubic structure ( space group fd-3m ) . ( there is also a third , less common type , sh , which has a hexagonal crystal structure , but we do not discuss this structure any further here . ) structurally , the water frameworks of both si and sii hydrates are similar to that of ice ih , with each water molecule finding itself in an approximately tetrahedral environment with its nearest neighbors . unlike ice ih , however , the water framework consists of cages , with cavities large enough to accommodate a gas molecule . between the si and sii hydrates , there exist three types of cages , which are denoted 5^m6^n depending on the numbers of five - and six - sided faces that make up the cage . for example , common to both the si and sii hydrates is the 5^12 cage , where the water molecules sit on the vertices of a pentagonal dodecahedron . along with 5^12 cages , the si hydrate also contains 5^12 6^2 cages , which have two six - sided faces and 12 five - sided faces : there are two 5^12 cages and six 5^12 6^2 cages in the unit cell . the sii hydrate , on the other hand , has a unit cell made up of 16 5^12 cages and eight 5^12 6^4 cages .
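the cage labels just introduced can be checked with a little combinatorics : since every water molecule ( vertex ) in these polyhedral cages joins exactly three edges , euler's formula v − e + f = 2 together with 3v = 2e fixes the number of water molecules per cage , as in the short sketch below ( a simple consistency check , not taken from the review ) .

```python
# a quick combinatorial check of the clathrate cage labels: in a 5^m 6^n cage
# every vertex (water molecule) joins exactly three edges, so euler's formula
# V - E + F = 2 together with 3V = 2E gives V = 2(F - 2).
cages = {"5^12": (12, 0), "5^12 6^2": (12, 2), "5^12 6^4": (12, 4)}
for name, (pent, hexa) in cages.items():
    F = pent + hexa          # total number of faces
    V = 2 * (F - 2)          # water molecules per cage
    E = 3 * V // 2           # hydrogen-bonded edges per cage
    print(f"{name}: {F} faces, {V} water molecules, {E} edges")
```

the sketch recovers 20 , 24 , and 28 water molecules for the 5^12 , 5^12 6^2 , and 5^12 6^4 cages , respectively , consistent with the face counts given above .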
because of the larger size of the 5^12 6^4 cage , the sii structure forms in the presence of larger guest molecules such as propane , whereas small guest molecules such as methane favor the si hydrate . ( this is not to say that small guest molecules are not present in sii , just that the presence of larger guest molecules is necessary to stabilize the larger cavities . ) the si , sii , and sh crystal structures are shown in figure 19 , along with the individual cage structures . further details regarding the crystal structures of natural gas hydrates can be found in ref ( 439 ) . crystal structures of the si , sii , and sh gas hydrates , along with the corresponding cage structures . two main hypotheses for the mechanism of hydrate nucleation have been put forward . first , sloan and co - workers proposed the labile cluster hypothesis ( lch ) , which essentially describes the nucleation process as the formation of isolated hydrate cages that then agglomerate to form a critical hydrate nucleus . second , the local structure hypothesis ( lsh ) was proposed after umbrella sampling simulations by radhakrishnan and trout suggested that the guest molecules first arrange themselves in a structure similar to the hydrate phase , which is accompanied by a perturbation ( relative to the bulk mixture ) of the water molecules around the locally ordered guest molecules . for the same reasons as already outlined elsewhere ( see section 1.2 ) , it is experimentally challenging to verify which , if either , of these two nucleation mechanisms is correct . what we will see in this section is how computer simulations of gas hydrate nucleation have been used to help shed light on this process . although not the first computer simulation study of natural gas hydrate formation ( see , e.g. , refs ( 444447 ) ) , one of the most influential simulation works on gas hydrate formation is that of walsh et al . , in which methane hydrate formation was directly simulated under conditions of 250 k and 500 bar . it was found that nucleation proceeded through the cooperative organization of two methane and five water molecules into a stable structure , with the methane molecules adsorbed on opposite sides of a pentagonal ring of water molecules . this initial structure allowed the growth of more water faces and adsorbed methane , until a 5^12 cage formed . after persisting for 30 ns , this 5^12 cage opened when two new water molecules were inserted into the only face without an adsorbed methane molecule , on the side opposite to that where several new full cages were completed . this opening of the original 5^12 cage was then followed by the relatively fast growth of methane hydrate . after 240 ns , the original 5^12 cage transformed into a 5^12 6^3 cage , a structure not found in any equilibrium hydrate structure . walsh et al . also found that 5^12 cages dominated , in terms of abundance , during the early stages of nucleation . 5^12 6^2 cages ( which along with the 5^12 cages comprise the si hydrate ) were the second most abundant , although their formation occurred approximately 100 ns after that of the initial 5^12 cages . a significant amount of the larger 5^12 6^4 cages that are found in the sii hydrate was also observed , which was rationalized by the large number of face - sharing 5^12 cages providing an appropriate pattern . the 5^12 6^3 cages were also observed in an abundance close to that of the 5^12 6^4 cages . the final structure can be summarized as a mixture of si and sii motifs , linked by 5^12 6^3 cages . a similar structure had previously been reported as a result of hydrate growth simulations .
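as a side note on the cage notation used here , the number of water molecules forming each cage type follows directly from euler's polyhedron formula ; the short sketch below ( an illustration of ours , not part of the cited studies ) makes this bookkeeping explicit .

```python
# Illustrative sketch: for a hydrate cage labeled 5^m 6^n (m pentagonal and n hexagonal
# faces), use Euler's polyhedron formula V - E + F = 2 to recover the number of water
# molecules (vertices) that build the cage.

def cage_size(pentagons, hexagons):
    faces = pentagons + hexagons
    edges = (5 * pentagons + 6 * hexagons) // 2   # each edge is shared by two faces
    vertices = 2 - faces + edges                  # Euler's formula, rearranged for V
    return vertices

for m, n in [(12, 0), (12, 2), (12, 3), (12, 4)]:
    print(f"5^{m} 6^{n} cage: {cage_size(m, n)} water molecules")
# -> 20, 24, 26, and 28 water molecules, respectively
```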
( a - c ) a pair of methane molecules is adsorbed on either side of a single pentagonal face of water molecules . partial cages form around this pair , near the eventual central violet methane molecule , only to dissociate over several nanoseconds . ( d , e ) a small cage forms around the violet methane , and other methane molecules adsorb at 11 of the 12 pentagonal faces of the cage , creating the bowl - like pattern shown . ( f , g ) the initial central cage opens on the end opposite to the formation of a network of face - sharing cages , and rapid hydrate growth follows . ( h ) a snapshot of the system after hydrate growth shows the fates of those methane molecules that made up the initial bowl - like structure ( other cages not shown ) . although the work of walsh et al . provided useful insight into the hydrate nucleation mechanism , the conclusions were based on only two independent nucleation trajectories . soon after the publication by walsh et al . , jacobson et al . reported simulations employing a coarse - grained water model based on mw water under conditions of 210 k and 500 atm ( the melting point of the model is approximately 300 k ) . owing to the reduced computational cost of the coarse - grained model , they were also able to study a much larger system size than walsh et al . ( 8000 water and 1153 guest molecules vs 2944 water and 512 guest molecules ) . in agreement with walsh et al . , the initial stages of the nucleation mechanism were also dominated by 5^12 cages , and a mixture of si and sii motifs connected by 5^12 6^3 cages was observed . it was also observed that solvent - separated pairs of guest molecules were stabilized by greater numbers of guest molecules in the cluster . as gas hydrates are composed of solvent - separated pairs of guest molecules as opposed to contact pairs , this suggests a resemblance to the lsh , where the local ordering of guest molecules drives the nucleation of the hydrate . jacobson et al . , however , also found a likeness to the lch : clusters of guest molecules and their surrounding water molecules formed long - lived blobs that slowly diffused in solution . these blobs could be considered large analogues of the labile clusters proposed in the lch . through analysis of their simulation data , jacobson et al . concluded that the blob is a guest - rich precursor in the nucleation pathway of gas hydrates with small guest molecules ( such as methane ) . note that the distinction between blobs and the amorphous clathrate is that the water molecules have yet to be locked into the clathrate hydrate cages in the former . sketch of the nucleation mechanism of methane hydrates proposed in ref ( 451 ) . clusters of guest molecules aggregate into blobs , which transform into amorphous clathrates as soon as the water molecules arrange themselves into the cages characteristic of the crystalline clathrate , which eventually forms upon the reordering of the guest molecules , and thus of the cages , in a crystalline fashion . note that the difference between the blob and the amorphous clathrate is that the water molecules have yet to be locked into clathrate hydrate cages in the former . copyright 2010 american chemical society . both the work of walsh et al . and that of jacobson et al . suggest that amorphous hydrate structures are involved in the nucleation mechanism , although both studies were carried out under high driving forces . in ref ( 453 ) , jacobson and molinero addressed the following two questions raised by the above studies : how could amorphous nuclei grow into a crystalline form ?
are amorphous precursors intermediates for clathrate hydrates under less forcing conditions ? by considering the size - dependent melting temperature of spherical particles using the gibbs - thomson equation , jacobson and molinero found for all temperatures that the size of the crystalline critical nucleus was always smaller than that of the amorphous critical nucleus , with the two becoming virtually indistinguishable in terms of stability for very small nuclei of 15 guest molecules ( i.e. , under very forcing conditions ) . from a thermodynamic perspective , this would suggest that nucleation would always proceed through a crystalline nucleus . the observation of amorphous nuclei , even at temperatures as high as 20% supercooling , hints that their formation might be favored for kinetic reasons . employing the cnt expression for the free energy barrier suggested that the amorphous nuclei could be kinetically favored up to 17% supercooling if γa/γx = 0.5 , where γa and γx are the surface tensions of the liquid - amorphous and liquid - crystal interfaces , respectively . jacobson and molinero estimated γx ≈ 36 mj / m2 and 16 < γa < 32 mj / m2 , so it is certainly plausible that amorphous precursors are intermediates for clathrate hydrates under certain conditions . the growth of clathrate hydrates from amorphous and crystalline seeds was also studied , where it was found that crystalline clathrate can grow from amorphous nuclei . as the simulation conditions led to fast mass transport , the growth of postcritical nuclei was relatively quick , and the amorphous seed became encapsulated by a ( poly)crystalline shell . under conditions where an amorphous nucleus forms first because of a smaller free energy barrier but diffusion of the guest species becomes a limiting factor , it is likely that small nuclei would have enough time to anneal to structures of greater crystallinity before growing to the macroscopic crystal phase . it thus appears that gas hydrates might exhibit a multistep nucleation process involving amorphous precursors for reasonably forcing conditions , but for temperatures close to coexistence , it seems that nucleation should proceed through a single crystalline nucleus . by assuming a cnt expression for the free energy ( as well as the total rate ) , knott et al . used the seeding technique ( see section 1.3.1 ) to compute the nucleation rate for si methane hydrate with relatively mild supersaturation of methane , in a manner similar to that of espinosa et al . for homogeneous ice nucleation as discussed in section 2.4.1 . they found vanishingly small homogeneous nucleation rates , meaning that , even with all of earth's ocean waters , the induction time to form a single crystal nucleus homogeneously would be astronomically long ! knott et al . therefore concluded that , under mild conditions , hydrate nucleation must occur heterogeneously . compared to homogeneous nucleation , the heterogeneous nucleation of gas hydrates has been little studied . liang et al . investigated the steady - state growth of a h2s hydrate crystal in the presence of silica surfaces , finding that the crystal preferentially grew in the bulk solution rather than at the interface with the solid . they also observed that , in one simulation , local gas density fluctuations of the dissolved guest led to the spontaneous formation of a gas bubble from solution , which was located at the silica interface .
this had two effects on the observed growth : ( i ) the bubble depleted most of the gas from solution , leading to an overall decrease of the crystal growth rate , and ( ii ) because of the location of the guest bubble , the silica surface effectively acted like a source of gas , promoting growth of the crystal closer to the interface relative to the bulk . bai et al . investigated the heterogeneous nucleation of co2 hydrate in the presence of a fully hydroxylated silica surface , first in a two - phase system where the water and co2 were well - mixed and then in a three - phase system where the co2 and water were initially phase - separated . for the two - phase system , the authors reported the formation of an ice - like layer at the silica surface , above which a layer composed of semi-5^12 cage - like structures mediated the structural mismatch between the ice - like contact layer and the si hydrate structure above . in the three - phase system , nucleation was observed at the three - phase contact line , along which the crystal nucleus also grew . this was attributed to the stabilizing effect of the silica on the hydrate cages , plus the requirement for the availability of both water and co2 . in a later work , bai et al . investigated the effects of surface hydrophilicity ( by decreasing the percentage of surface hydroxyl groups ) and crystallinity on the nucleation of co2 hydrate . they found that , in the case of decreased hydrophilicity , the ice - like layer at the crystalline surface vanished , replaced instead by a single liquid - like layer upon which the hydrate directly nucleated . shorter induction times to nucleation at the less hydrophilic surfaces were also reported . although this is certainly an interesting observation , only a single trajectory was performed for each system , so studies in which multiple trajectories are used to obtain a distribution of induction times would be desirable ; moreover , as the hydrate actually appears to form away from the surface in all cases , a full comparison of the heterogeneous and homogeneous rates would also be a worthwhile pursuit . there have also been a number of studies investigating the potential role of ice in the nucleation of gas hydrates . pirzadeh and kusalik performed md simulations of methane hydrate nucleation in the presence of ice surfaces and reported that an increased density of methane at the interface induced structural defects ( coupled 5 - 8 rings ) in the ice that facilitated the formation of hydrate cages . nguyen et al . used md simulations to directly investigate the interface between a gas hydrate and ice and found the existence of an interfacial transition layer ( itl ) between the two crystal structures . the water molecules in the itl , which was found to be disordered and two to three layers of water in thickness , had a tetrahedrality and potential energy intermediate between those of the two crystal structures and liquid water . the authors suggested that the itl could assist the heterogeneous nucleation of gas hydrates from ice by providing a lower surface free energy than either the ice - liquid or the hydrate - liquid interface . differential scanning calorimetry experiments by zhang et al . found ice and hydrate formation to occur simultaneously ( on the experimental time scale ) , which was attributed to the heterogeneous nucleation of ice , which , in turn , facilitated hydrate formation .
poon and peters provide a possible explanation for ice acting as a heterogeneous nucleating agent for gas hydrates , aside from the structural considerations of refs ( 460 and 462 ) : at a growing ice front , the local supersaturation of methane can be dramatically increased , to the extent that induction times to nucleation are reduced by as much as a factor of 10 . computer simulations of hydrate nucleation have certainly contributed to our understanding of the underlying mechanisms , especially in the case of homogeneous nucleation . one fairly consistent observation across many simulation studies ( e.g. , refs ( 444 , 445 , 447 , 448 , 451 , 453 , and 464 ) ) is that some type of ordering of dissolved guest molecules precedes the formation of hydrate cages . another is that amorphous nuclei , consisting of structural elements of both si and sii hydrates , form when conditions are forcing enough . nevertheless , open questions still remain . in particular , the prediction that homogeneous nucleation rates are vanishingly small under mild conditions emphasizes the need to better understand heterogeneous nucleation . to this end , enhanced sampling techniques such as ffs , which was recently applied to methane hydrate nucleation at 220 k and 500 bar , are likely to be useful , although directly simulating nucleation under mild conditions is still likely to be a daunting task . another complicating factor is that , aside from the presence of solid particles , the conditions from which natural gas hydrates form are often highly complex ; for example , in an oil or gas line , there is fluid flow , and understanding how this affects the methane distribution in water is likely to be an important factor in determining how fast gas hydrates form . in this review , we have described only a fraction of the many computer simulation studies of crystal nucleation in supercooled liquids and solutions . still , we have learned that md simulations have dramatically improved our fundamental understanding of nucleation . for instance , several studies on colloidal particles ( see section 2.1 ) provided evidence for two - step nucleation mechanisms , and the investigation of lj liquids yielded valuable insights into the effects of confinement ( see section 2.2 ) . in addition , the investigation of more realistic systems has provided outcomes directly related to problems of great relevance . for example , the influence of different solvents on the early stages of urea crystallization ( see section 2.5 ) has important consequences in fine chemistry and in the fertilizer industry , and the molecular details of clathrate nucleation ( see section 2.6 ) could help to rationalize and prevent hydrate formation in oil or natural gas pipelines . thus , it is fair to say that md simulations have been and will remain a powerful complement to experiments . however , simulations are presently affected by several shortcomings , which hinder a reliable comparison with experimental nucleation rates and limit nucleation studies to systems and/or conditions often far from those investigated experimentally . these weaknesses can be classified in two main categories : ( i ) limitations related to the accuracy of the computational model used to represent the system and ( ii ) shortcomings due to the computational techniques employed to simulate nucleation events . ( i ) in an ideal world , ab initio calculations would be the tool of the trade .
unfortunately , in all but a handful of cases such as the phase - change materials presented in section 2.3 , the time - scale problem makes ab initio simulations of crystal nucleation unfeasible ( see figure 5 ) . as this will be the status quo for the next few decades , we are forced to focus our efforts on improving the current classical force fields and on developing novel classical interatomic potentials . this is a fundamental issue that affects computer simulations of materials as a whole . although this is not really an issue for nucleation of simple systems such as colloids ( section 2.1 ) , things start to fall apart when dealing with more realistic systems ( see , e.g. , sections 2.5 and 2.6 ) and become even worse in the case of heterogeneous nucleation ( see , e.g. , section 2.4.2 ) , as the description of the interface requires extremely transferable and reliable force fields . machine learning techniques such as neural network potentials ( see section 2.3 and refs ( 467 and 468 ) ) are emerging as possible candidates to allow for classical md simulations with an accuracy closer to first - principles calculations , but the field is constantly looking for other options that are capable of bringing simulations closer to reality . ( ii ) the limitations of the computational techniques currently employed to study crystal nucleation are those characteristic of rare - events sampling . brute - force md simulations ( see section 1.3.1 ) allow for an unbiased investigation of nucleation events , but the time - scale problem limits this approach to very few systems , typically very distant from realistic materials ( see , e.g. , sections 2.1 and 2.2 ) , although notable exceptions exist ( see section 2.3 ) . it is also worth noticing that , whereas brute - force md is not able to provide a full characterization of the nucleation process , useful insight can still be gained , for example , into prenucleation events . enhanced sampling techniques ( see section 1.3.2 ) are rapidly evolving and have the potential to take the field to the next level . however , free energy methods as they are do not give access to nucleation kinetics and , in the case of complex systems ( see , e.g. , sections 2.4.1 and 2.5 ) , are strongly dependent on the choice of the order parameter . on the other hand , in light of the body of work reviewed , it seems that path - sampling methods can provide a more comprehensive picture of crystal nucleation . however , at the moment , these techniques are computationally expensive , and a general implementation is not available yet , although consistent efforts have recently been put in place . we believe that the development of efficient enhanced sampling methods specific to crystal nucleation is one of the crucial challenges ahead .
at the moment , simulations of crystal nucleation of complex liquids are restricted to small systems ( 1010 particles ) , most often under idealized conditions . for instance , it is presently very difficult to take into account impurities or , in the case of heterogeneous nucleation , defects of the substrate . indeed , defects seem to be ubiquitous in many different systems , such as ice , hard - sphere crystals , lj crystals , and organic crystals . defects are also often associated with polymorphism , but possibly because of the inherent difficulties in modeling them ( or in characterizing them experimentally ) , they are under - represented in the current literature . these are important aspects that almost always impact experimental measurements and that should thus be included in simulations as well . in general , simulations of nucleation should allow us not only to provide microscopic insight but also to make useful predictions and/or to provide a general understanding to be applied to a variety of systems . these two ambitious goals are particularly challenging for simulations of heterogeneous nucleation .
in light of the literature we have reviewed in this work , we believe that much of the effort in the future has to be devoted to ( i ) enabling atomistic simulations of heterogeneous nucleation dealing with increasingly realistic interfaces and ( ii ) obtaining general , maybe non - material - specific trends able to point the community in the right direction , even at the cost of sacrificing accuracy to a certain extent . on the other hand , we hope that the body of work reviewed here will inspire future experiments targeting cleaner , well - defined systems by means of novel techniques , possibly characterized by better temporal and spatial resolution . improving on the current limitations of the computational models and techniques would enable simulations of much larger systems over much longer time scales , with a degree of accuracy that would allow a fruitful comparison with experiments . we think this should be the long - term objective for the field . up to now , the only way to connect simulations and experiments has been through the comparison of crystal nucleation rates , which still exhibit substantial discrepancies for every single class of systems we have reviewed . this is true not only for complex liquids such as water ( see section 2.4.1 ) but even for model systems such as colloids ( section 2.1 ) . this , together with the fact that , in some cases , even experimental data are scattered across several orders of magnitude , suggests that we are dealing with crystal nucleation in liquids within a flawed theoretical framework . it is thus no wonder that every aspect of this battered theory has been criticized at some point . for instance , the emergence of two - step ( or even multistep ) mechanisms for nucleation has been reported for many different systems ( see sections 2.1 , 2.2 , 2.5 , and 2.6 ) and cannot easily be embedded in cnt as it stands , although several improvements on the original cnt formulation have appeared within the past decade ( see section 1.1.2 ) . nonetheless , cnt is basically the only theory invoked by both experiments and simulations when dealing with crystal nucleation from the liquid phase . cnt is widely used because it offers a simple and unified picture for nucleation , and it is often very useful . however , as demonstrated by both experiments and simulations , even the basic rules governing the formation of the critical nucleus can change dramatically from one system to another . thus , we believe that any sort of universal theoretical approach , a brand new cnt , so to speak , is unlikely to significantly further the field . indeed , we fear that the same reasoning will hold for the computational methods required . we cannot think of a single enhanced sampling technique capable of tackling the complexity of crystal nucleation as a whole . the interesting but uncomfortable truth is that each class of supercooled liquids often exhibits unique behavior , which , in turn , results in specific features ruling the crystal nucleation process . thus , it is very much possible that different systems under different conditions could require different , ad hoc flavors of cnt . although the latter have been evolving for decades , we believe that a sizable fraction of the new developments in the field should aim at producing particular flavors of cnt , specifically tailored to the problem at hand .
in conclusion , it is clear that md simulations have proven themselves to be of the utmost importance in unraveling the microscopic details of crystal nucleation in liquids . we have reviewed important advances that have provided valuable insights into fundamental issues and diverse nucleation scenarios , complementing experiments and furthering our understanding of nucleation as a whole . we feel that the ultimate goal for simulations should be to get substantially closer to the reality probed by experiments and that , to do so , we have to sharpen our computational and possibly theoretical tools . in particular , we believe that the community should invest in improving the classical interatomic potentials available as well as the enhanced sampling techniques currently used , enabling accurate simulations of crystal nucleation for systems of practical relevance .
the nucleation of crystals in liquids is one of nature s most ubiquitous phenomena , playing an important role in areas such as climate change and the production of drugs . as the early stages of nucleation involve exceedingly small time and length scales , atomistic computer simulations can provide unique insights into the microscopic aspects of crystallization . in this review , we take stock of the numerous molecular dynamics simulations that , in the past few decades , have unraveled crucial aspects of crystal nucleation in liquids . we put into context the theoretical framework of classical nucleation theory and the state - of - the - art computational methods by reviewing simulations of such processes as ice nucleation and the crystallization of molecules in solutions . we shall see that molecular dynamics simulations have provided key insights into diverse nucleation scenarios , ranging from colloidal particles to natural gas hydrates , and that , as a result , the general applicability of classical nucleation theory has been repeatedly called into question . we have attempted to identify the most pressing open questions in the field . we believe that , by improving ( i ) existing interatomic potentials and ( ii ) currently available enhanced sampling methods , the community can move toward accurate investigations of realistic systems of practical interest , thus bringing simulations a step closer to experiments .
Introduction Selected Systems Future Perspectives
PMC3316749
discovered by berzelius in 1817 , selenium ( se ) belongs to group 16 ( formerly group 6a ) in the periodic table , together with oxygen , sulfur , tellurium , and polonium ( nogueira and rocha 2011 ) . initially , se was considered only as a toxic element , but for several decades it has been known as an essential trace element associated with significant health benefits in humans and mammals ( schwarz and foltz 1958 ) . the basic role of se activity is its presence in the catalytic sites of various selenoproteins . in eukaryotic cells , se can be incorporated into 25 human and 24 rodent selenoproteins during translation , as selenocysteine ( sec ) , the 21st amino acid , a mode of incorporation that is unique among essential trace elements ( hesketh 2008 ) . selenoproteins may perform various functions in humans , including antioxidant action ( e.g. , glutathione peroxidases ) , transport and storage of se ( selenoprotein p ) , redox signaling ( thioredoxin reductases ) , thyroid hormone metabolism ( iodothyronine deiodinases ) , protein folding ( e.g. , selenoprotein 15 kda ) , and others ( table 1 ) .
table 1 . human selenoproteins ( adapted from papp et al . 2010 ) ; entries list abbreviation ( selenoprotein name ) : cellular ; tissue localization .
antioxidant enzymes :
- gpx1 ( cytosolic glutathione peroxidase , gpx ) : cytosol , mitochondria ; widely expressed
- gpx2 ( gastrointestinal gpx ) : cytosol , er ; gastrointestinal tissue , liver
- gpx3 ( plasma gpx ) : secreted ; plasma , extracellular fluid , liver , kidney , heart , lung , thyroid , gastrointestinal tissue , breast
- gpx4 ( phospholipid hydroperoxide gpx ) : cytosol , mitochondria , nucleus ; widely expressed , testes
- gpx6 ( olfactory gpx ) : unknown ; embryo and olfactory epithelium
- selk ( selenoprotein k ) : er , membrane protein
- selr ( selenoprotein r ; methionine sulfoxide reductase b1 ) : cytosol , nucleus ; widely expressed
- selw ( selenoprotein w ) : cytosol ; widely expressed , brain , colon , heart , skeletal muscle , prostate
transport and storage of se :
- sepp1 ( selenoprotein p ) : secreted , cytosol ; plasma , widely expressed , brain , liver , testes
redox signaling :
- trxr1 ( thioredoxin reductase , type i ) : cytosol , nucleus ; widely distributed
- trxr2 ( thioredoxin reductase , type ii ) : mitochondria ; widely distributed
- trxr3 ( thioredoxin reductase , type iii ) : cytosol , er , nucleus ; testis - specific
thyroid hormone metabolism :
- dio1 ( iodothyronine deiodinase , type i ) : membrane protein ; kidney , liver , thyroid , brown adipose tissue
- dio2 ( iodothyronine deiodinase , type ii ) : er , membrane protein ; thyroid , central nervous system , brown adipose tissue , skeletal muscle
- dio3 ( iodothyronine deiodinase , type iii ) : membrane protein ; placenta , central nervous system , fetus
protein folding :
- sep15 ( selenoprotein 15 kda ) : er lumen
- seln ( selenoprotein n ) : er membrane ; widely expressed
- selm ( selenoprotein m ) : er lumen
- sels ( selenoprotein s ) : er , membrane protein ; widely expressed
sec synthesis :
- sps2 ( selenophosphate synthetase ) : cytosol
unknown function :
- selh ( selenoprotein h ) : nucleus ; widely expressed
- seli ( selenoprotein i ) : transmembrane
- selo ( selenoprotein o ) : unknown
- selt ( selenoprotein t ) : er membrane
- selv ( selenoprotein v ) : testes
the role of se and selenoproteins in human health and diseases has been intensively studied with special attention on the determination of relevant biomarkers of se status .
se is largely found in animal foods and , to a lesser extent , in plants , which indicates large individual differences in se intake , associated with dietary menu composition but also with the origin of food , which can be grown ( plants ) or bred ( animals ) on soils with different se content ( gromadzinska et al . 2008 ) . in humans , dietary se intakes also vary geographically from low to high se areas . keshan and kashin - beck ( kbd ) diseases , well - documented health effects of se deficiency associated with muscle disorders , were found in a broad zone running from northeast to southwest china , from the border of heilongjiang to the yunnan province , where the concentration of se in the soil was very low ( oldfield 1999 ) . in european countries , dietary se intake is lower than that observed in the usa , mainly due to low se soils . the recommended dietary intake ( rdi ) value of se for adults in the usa and europe is 55 μg / day . the tolerable upper intake level determined by the us food and nutrition board ( nas 2000 ) is 400 μg / day , while that determined by the scientific committee on food in europe is 300 μg / day ( efsa 2008 ) . the average intake of se by the european population ranges from 27 to 70 μg per day ( efsa 2008 ) , which is insufficient to meet the daily requirement . clinical signs of marginal se deficiency in europe have not been observed or documented yet . however , it should be noted that several groups of healthy individuals may be especially prone to se deficiency , which include breast - fed neonates , pregnant women , and elderly people ( bellinger et al . ) . the relationship between se level and health effects is represented by a u - shaped curve that suggests health pathologies associated with se deficiency as well as its excess ( ip 1998 ; jablonska et al . ) . the altered se status resulting from insufficient se intake is very often associated with different diseases , including immune diseases , cardiovascular diseases , and cancer . on the other hand , recent studies have indicated that long - term high dietary se supply seems to be related to the risk of type 2 diabetes , amyotrophic lateral sclerosis , and some types of cancer ( bellinger et al . ) . although se levels in blood and blood compartments are easily accessible markers of human se nutritional status , se level itself does not reflect its functional significance . plasma or serum se , very often used in various se investigations , reflects rather short - term se status , while platelet , leukocyte , and erythrocyte se reflects its longer - term status . the two best known selenoprotein biomarkers that have been widely used in discriminating se status are the plasma selenoprotein p ( sepp1 ) level and plasma glutathione peroxidase ( gpx3 ) activity . in healthy humans , plasma se is incorporated as sec in two selenoproteins : sepp1 ( 40 - 70% ) and gpx3 ( 20 - 40% ) , while 6 - 10% of se is bound to albumin in the form of selenomethionine , through the replacement of methionine . free se accounts for less than 1% of total plasma se ( vincent and forceville 2008 ) . these biomarkers generally reflect the major sources of human body se , because gpx3 and sepp1 are the unique secreted selenoproteins . gpx3 is mainly synthesized in the kidney , where it is produced by the cells of the proximal tubular epithelium and by the parietal cells of bowman's capsule , and then it is released into the plasma ( gromadzinska et al . ) .
mammalian sepp1 , which contains multiple sec residues ( 10 in humans and rodents ) , is synthesized in the liver and then secreted into the blood and transported to other tissues . recently , a specific apolipoprotein e receptor-2 ( apoer2 ) for sepp1 uptake in brain and testis and the apoer2 homolog megalin for sepp1 uptake in kidney proximal tubule epithelial cells were found , suggesting receptor - mediated uptake of se in these organs ( burk and hill 2009 ) . there is growing interest in the use of transcript levels as molecular biomarkers , with special regard to whole blood . since the early 1990s , research on this new molecular biomarker of se status has been extended to se studies in rodents , showing prioritization of the preservation and degradation of specific selenoprotein mrnas under se deficiency and under conditions of adequate and enhanced supply ( bermano et al . 1995 ) . the observed hierarchy of selenoprotein mrna expression in response to dietary se supply has indicated a ranking of selenoproteins , some of them dramatically affected by se deficiency or excess , and others only marginally . analysis of selenoprotein transcripts in the blood gives an opportunity to obtain genomic fingerprints in response to se status , but may also reflect an impact of the genetic polymorphism of selenoproteins . in a few recent human studies , selenoprotein gene expression in circulating human blood leukocytes was used as a longer - term se status indicator . some other studies have focused on the role of se incorporation during selenoprotein synthesis under different dietary se status in humans ( pagmantidis et al . 2008 ; ravn - haren et al . 2008a , b ; reszka et al . 2009 ; sunde et al . 2008 ) . levels of se for the optimization of sepp1 concentration and gpx3 activity in plasma have been determined in people living in se - adequate areas . however , these biomarkers may be unsuitable under conditions of high se status , because of the plateau levels obtained at such concentrations . on the other hand , se levels in the blood of individuals living in low se areas with low se intake may be insufficient to ensure maximal activity and/or concentration of the selenoproteins gpx3 and sepp1 ( thomson 2004 ) . therefore , determination of se intake and se status in humans with variable se supply seems to be important in assessing the most sensitive se status biomarkers . in this review , we present recent findings regarding molecular se biomarkers , based on rodent and human studies . se status measured by serum / plasma se , plasma sepp1 concentration , and plasma gpx3 activity may respond differently to se supplementation , which can give information about low , adequate , and high dietary se intake . for several years , it has been generally accepted that selenoprotein level and/or activity may be more useful in determining se status than se itself . two human selenoproteins , gpx3 and sepp1 , are believed to be good nutritional se biomarkers in humans . studies on se - deficient populations showed that full expression of sepp1 required a larger intake of se than did gpx3 activity ( xia et al . ) . therefore , setting rdi values for se intake was based on the assessment of the amount of se required to achieve optimal activity , or two - thirds of optimal activity , of gpx3 in plasma to meet the requirements of people living in low , adequate , and high se areas ( rayman 2004 ) .
different chemical forms of dietary se from animal foods and plants , such as selenite , selenocysteine , and selenomethionine , are involved in metabolic pathways to form selenide . selenide is then transformed into sec for selenoprotein biosynthesis or may be methylated to its main excretory metabolites . selenomethionine is a major form of organic selenium in plant foods , as well as in the selenium - enriched yeast used in se supplements . because of the different metabolism of se compounds in the organism , the absorption of se from different organic and inorganic food sources and se supplements , its incorporation into selenoproteins , and urinary se excretion may vary in humans . human studies clearly indicate that se in the form of selenomethionine is more easily absorbed than selenite . absorption of se from yeast was greater than from this inorganic form but less than from selenomethionine . interestingly , plasma se seemed to reflect the selenomethionine content of yeast but not the other yeast se forms , indicating its effective bioavailability ( burk et al . ) . in several populations in suboptimal se areas in europe , china , and new zealand , the concentration of sepp1 could not reach the plateau because of low daily se intake , suggesting that se nutritional requirements had not been achieved ( xia et al . ) . the different plateaus reached by plasma gpx3 activity and plasma sepp1 level indicate that the latter is a better indicator of se status in humans , because a larger intake of se is required to optimize sepp1 concentration than gpx3 activity ( xia et al . ) . in addition to gpx3 activity , plasma sepp1 level may be a suitable biomarker of se status and se intake in individuals from low se populations with an additional supply of se . in populations living in high se areas , e.g. , australia ( queensland ) , usa , and canada ( central states and provinces ) ( oldfield 1999 ) , where gpx3 activity and sepp1 level in plasma can be optimized by dietary se supply , plasma se seems to reflect se status and se intake ( burk et al . ) . the unique conserved stem - loop sec insertion sequence ( secis ) in the 3'-untranslated region of mammalian selenoprotein mrna is essential for the recognition of uga as a codon for sec . sec may be synthesized from different dietary se sources : selenomethionine , selenocysteine , or selenite , compounds that are further metabolized to selenide and then , in the presence of selenophosphate synthetase 2 , to selenophosphate . this process requires a specific sec trna[ser]sec and several translation factors , such as sbp2 , efsec , and others , that serve to distinguish uga codons designated for sec from those terminating translation . sec synthesis occurs directly on its sec trna[ser]sec , initially carrying a serine residue , which serves as an acceptor for selenophosphate . maturation of sec trna[ser]sec requires methylation , and two isoforms of methylated and unmethylated sec trna[ser]sec are observed . se supplementation is known to modulate the relative ratio between these two isoforms and promote the methylation of sec trna[ser]sec ( hatfield and gladyshev 2002 ; hesketh and villette 2002 ; jameson and diamond 2004 ; schomburg and schweizer 2009 ; small - howard and berry 2005 ) . it has been observed that the alterations in selenoprotein activity and concentration during se depletion and repletion are accompanied by changes in the mrna level .
under severe se deficiency , this microelement is accumulated mainly in the brain and endocrine tissues , where elevated expression and activity of phospholipid hydroperoxide glutathione peroxidase ( gpx4 ) and iodothyronine deiodinase ( dio ) were observed , indicating the biological importance of these selenoproteins ( schomburg and schweizer 2009 ) . according to rodent studies , the main selenoproteins that are resistant to dietary se changes are gpx4 and dio ; thioredoxin reductases ( trxr ) and sepp1 are moderately sensitive , while gpx1 , selw , and selh are very sensitive to low se supply ( reeves and hoffmann 2009 ; sunde 2010 ) . several in vitro and in vivo studies have shown that , under se deficiency , the degradation of selenoprotein mrna occurs through nmd ( nonsense - mediated mrna decay ) ( moriarty et al . 1998 ; weiss and sunde 1998 ) . due to specific nucleotide sequences and preferential binding of the sbp2 translation factor , 14 of 25 human selenoprotein mrnas may be sensitive to nmd - based degradation under low se supply ( squires et al . ) . however , a study on rodents suggesting regulation of selenoprotein expression irrespective of this hypothetical preference indicates that other mechanisms underlying differences in the expression efficiency of some selenoproteins under se deficiency must also exist ( sunde et al . 2009 ) . in addition , the methylated sec trna[ser]sec isoform supports the biosynthesis of selenoproteins sensitive to low se supply ( gpx1 , gpx3 , selr , selt ) , while other selenoproteins that are moderately sensitive or resistant to low se supply , like trxr1 and gpx4 , require the unmethylated isoform ( schomburg and schweizer 2009 ) . an attempt to answer the question whether selenoprotein gene expression may be used as a biomarker of se status was presented for the first time by roger a. sunde's team . sepp1 mrna shows the highest expression in rat testes and liver , while gpx3 mrna expression is highest in rat kidney ( evenson et al . 2004 ) , which is in agreement with the observed selenoprotein activity and protein expression in these organs . therefore , these molecular biomarkers may also be useful for the determination of se status as well as for establishing the physiological requirements of se for adequate selenoprotein gene expression . these authors also found that , in rodents , gpx1 , selh , and selw mrna levels were highly regulated by se status ( sunde 2010 ) . analysis of the hierarchical regulation of different selenoprotein mrnas by se status indicates that the gpx1 transcript level is the most adequate molecular biomarker in rats , because dietary se deficiency decreased gpx1 mrna similarly in blood and liver . by way of comparison , upon a decrease in dietary se supply , the decline in gpx1 mrna expression was the greatest in rat liver , while blood ranked fourth among tissues , comparable with heart and kidney . interestingly , other investigated selenoprotein transcripts also presented distinct expression patterns across tissues , and it was found that gpx1 , gpx3 , gpx4 , sepp1 , and trxr1 expression in blood was comparable with that observed in the major organs ( evenson et al . ) . the regulation of selenoprotein mrna under dietary se deficiency is diverse , ranging from a dramatic decrease in gpx1 mrna to a lack of change in gpx4 mrna . the low selenoprotein mrna levels observed in rodents reflect depletion of dietary se supply , and adequate se supply regulates the expression level very efficiently .
it has been found that at least half of the dietary se necessary to provide a plateau for enzymatic activity or protein expression is adequate to provide a plateau for mrna expression in the liver , muscle , and kidney of rats ( barnes et al . ) . selenoprotein mrna levels follow hyperbolic dose - response curves , with plateau breakpoints at low se supply for the majority of selenoproteins . the minimal se requirement for growing rats was 0.1 μg se / g diet , based on liver se as well as liver and rbc gpx1 activity . slightly lower dietary se requirements based on plasma gpx3 activity , liver trxr activity , and liver and kidney gpx3 activity were observed . based on dose - response curves for selenoprotein mrna in different tissues , the minimum dietary requirements were lower than for physiological se biomarkers , ranging between 0.04 and 0.06 μg se / g diet in liver and kidney and between 0.03 and 0.05 μg se / g diet in muscle . besides , it has been assumed that it is not feasible to use selenoprotein mrna in rat tissues as a biomarker for super - nutritional se levels ( up to 0.8 μg se / g diet ) . these experiments evidently suggest that a marginal dietary se level is able to increase selenoprotein mrna to an adequate level in rats and mice , which may suggest a common mechanism in the regulation of selenoprotein mrna expression by dietary se supply ( barnes et al . ) . recently , sunde ( 2010 ) proposed a panel of molecular biomarkers , which could be useful for the assessment of selenium status in rats and might be as effective as the traditional biomarker panel in rat tissues . the gpx1 , gpx3 , selt , and selw mrna panel was significantly correlated with liver se concentration ; gpx1 and selk mrna were associated with liver gpx1 activity , while sepw and selk mrna reflected kidney selenium status , and gpx1 , sepw , and txnrd1 transcript levels correlated with kidney gpx1 activity . interestingly , recent studies suggest that regulation of selenoprotein mrna by dietary se status is not a general phenomenon in rodents . it was found that , in mice , the majority of analyzed selenoprotein transcripts in liver and kidney were not significantly regulated by se deficiency ( sunde et al . ) . it has been found that the minimum se requirement of the turkey is higher than that of rodents ( sunde and hadley 2010 ) , and the dietary requirement was decreased by at least 50% in old rats as compared to young animals ( sunde and thompson 2009 ) . expression of the majority of selenoprotein mrnas in testis , except for gpx1 , sepp1 , sepw , and also apoer2 , was not regulated by dietary se status ( schriever et al . 2009 ) . the mrna abundances of 12 selenoprotein genes in the thyroid and pituitary of young pigs were resistant to both increasing dietary se supply and se deficiency , but not in the liver , where nmd under se deficiency was observed for gpx1 , sepw , sepn , and txnrd1 . the testicular mrnas of txnrd1 and sep15 were decreased by increasing dietary se supply , indicating that high se status may be associated with a decrease in selenoprotein transcript levels ( zhou et al . ) . differences in the minimal dietary se supply required for maximal mrna expression and/or selenoenzyme activity suggest differential regulation depending on the type of selenoprotein as well as the tissue . therefore , findings regarding the ranking of selenoprotein synthesis within a tissue and the different distribution of selenoproteins among tissues should be considered in establishing a universal se status biomarker ( sunde 2010 ) .
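as an illustration of how such plateau breakpoints can be extracted from dose - response data , the sketch below fits a saturating ( hyperbolic , michaelis - menten - like ) response of selenoprotein mrna to dietary se and reports the intake at which 95% of the fitted plateau is reached ; the data values and the 95% criterion are our own invented example , not the actual results of the cited rodent studies .

```python
# Minimal sketch: fit a saturating dose-response curve to hypothetical selenoprotein
# mRNA data and estimate a "plateau breakpoint" as the dietary Se level giving 95%
# of the fitted plateau. All numbers are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def saturating(se, baseline, span, k):
    """mRNA level rising hyperbolically from `baseline` toward `baseline + span`."""
    return baseline + span * se / (k + se)

# hypothetical dietary Se (ug Se / g diet) and relative mRNA expression
se_diet = np.array([0.0, 0.02, 0.04, 0.06, 0.08, 0.10, 0.20, 0.40, 0.80])
mrna    = np.array([0.15, 0.45, 0.70, 0.82, 0.90, 0.93, 0.98, 1.00, 1.01])

params, _ = curve_fit(saturating, se_diet, mrna, p0=[0.1, 1.0, 0.05])
baseline, span, k = params

# dietary Se at which the response reaches 95% of its plateau above baseline
se_95 = 0.95 * k / (1.0 - 0.95)
print(f"fitted half-saturation k = {k:.3f} ug/g; ~95% of plateau at {se_95:.3f} ug Se / g diet")
```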
in 2003 , a panel of experts of the uk food standards agency issued specific research recommendations after evaluation of the current knowledge regarding the assessment of se status , including further development of functional biomarkers ( elsom et al . ) . recently , special emphasis has been laid on the application of transcriptome patterns of circulating white blood cell ( wbc ) populations in nutritional studies . transcriptomics studies suggest that genes with various functional annotations can be significantly expressed in wbc ( visvikis - siest et al . ) . gene expression patterns may also be useful to define biological processes associated with human health and disease . however , only a few studies assessing selenoprotein gene expression in various populations of circulating wbc in humans have been conducted so far ( table 2 ) .
table 2 . human studies on selenoprotein molecular biomarkers ; entries list study ; baseline daily se intake ( μg , mean ± sd ) ; baseline plasma se level ( ng / ml , mean ± sd ) ; selenoprotein gene expression in wbc .
- uk , selgen study , n = 39 , both sexes , 6 - week se intervention ( pagmantidis et al . 2008 ) : intake not presented ; plasma se 93.9 ± 1.7 ; lymphocyte , up - regulation of sps1 1.15 ( 1.06 - 1.23 ) , selk 1.11 ( 1.04 - 1.19 ) , sep15 1.11 ( 1.02 - 1.20 )
- denmark , n = 20 , 18 - 40 years , se intervention ( ravn - haren et al . 2008a ) : intake 49.8 ± 13.6 ; plasma se 113.2 ± 12.2 ; leukocyte , no effect on gpx1 , trxr1 , sepp1
- denmark , precise pilot study , n = 105 , both sexes , up to 5 years of se intervention ( ravn - haren et al . 2008b ) : intake 49.8 ± 13.6 ; plasma se 93 ± 11.2 ; leukocyte , no effect on gpx1
- uk , n = 39 , both sexes , 28 week se intervention ( sunde et al . 2008 ) : intake 48 ± 14 ; plasma se 89.2 ± 12.5 ; whole blood , no effect and no association between se status and gpx1 , gpx3 , gpx4 , sepp1 , sepw , seph
- poland , n = 47 healthy men ( reszka et al . 2009 ) : intake 24.2 ± 17.4 ( unpublished data ) ; plasma se 54.3 ± 14.6 ; leukocyte , no association between se status and gpx1 , gpx3 , sepp1 , sep15
human studies have not confirmed the hypothesis that the selenoprotein transcript level in circulating leukocytes and in whole blood may be a reliable biomarker of se status in populations with adequate ( sunde et al . 2008 ) and suboptimal ( reszka et al . 2009 ) baseline plasma se levels ( fig . 1 ) . after short - term 100 μg sodium selenite supplementation , healthy subjects from the selgen population were chosen at random for microarray analysis of rna isolated from lymphocytes . the greatest changes after se supplementation were observed for genes that encode proteins functioning in protein biosynthesis . up - regulation of selenoprotein k ( selk ) and selenoprotein 15 kda ( sep15 ) after se supplementation was observed , indicating that only a small number of selenoprotein - encoding genes was altered by different dietary se supply ( pagmantidis et al . 2008 ) . ravn - haren et al . ( 2008a , b ) also indicated a lack of se impact on selenoprotein mrna expression in wbc after short- and long - term se supplementation . after 1 week of supplementation with inorganic se or an organic form among healthy young danish men , there were no differences in the mrna expression of gpx1 , trxr1 , and sepp1 in leukocytes ( ravn - haren et al . 2008a ) . similarly , after 5 years of se - enriched yeast supplementation of the precise danish subjects , there were no differences in the expression of gpx1 in circulating wbc ( ravn - haren et al . 2008b ) . moreover , selenoprotein expression levels in whole blood were not significantly associated with se status measured in the blood or with se supply measured by dietary questionnaire in a uk population ( sunde et al . 2008 ) .
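fold changes such as those in table 2 are typically derived from qpcr data ; the snippet below is a generic 2^-ΔΔct calculation on invented ct values ( not the actual selgen or other study data ) , shown only to make explicit how a relative expression value such as 1.15 arises .

```python
# Generic relative-expression (2^-ddCt) calculation on invented qPCR Ct values,
# illustrating how a post- vs pre-supplementation fold change of a selenoprotein
# transcript (normalized to a reference gene) is obtained. Not real study data.
import numpy as np

# hypothetical Ct values (lower Ct = more transcript), three replicates each
ct_target_pre,  ct_ref_pre  = np.array([27.8, 27.6, 27.9]), np.array([18.1, 18.0, 18.2])
ct_target_post, ct_ref_post = np.array([27.5, 27.4, 27.6]), np.array([18.1, 18.0, 18.1])

dct_pre  = ct_target_pre  - ct_ref_pre    # normalization to the reference gene
dct_post = ct_target_post - ct_ref_post
ddct = dct_post.mean() - dct_pre.mean()   # post- vs pre-supplementation

fold_change = 2.0 ** (-ddct)
print(f"fold change after supplementation: {fold_change:.2f}")
```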
the baseline se level in the uk and danish populations was relatively high as compared to other european populations , averaging approximately 100 ng / ml in plasma ( table 2 ) . the lack of changes in selenoprotein mrna levels after se supplementation indicates that protein synthesis may already be saturated in leukocytes at such a sufficient se concentration . in the small group of 47 healthy polish individuals , plasma se levels were below the level required to optimize plasma gpx3 activity ( 54.3 ± 14.6 ng / ml ) . however , no relationship was found between serum se and gpx1 , gpx3 , sepp1 , and sep15 transcript levels , or between gpx1 and gpx3 activities and selenoprotein mrna expression ( reszka et al . ) . daily se intake in the studied european groups was lower than the rdi ( 55 μg / day ) for adults in europe and the us . the observed hyperbolic curve describing the relationship between selenoprotein mrna expression and dietary supply in rodents suggests that suboptimal se status in humans may be sufficient for the selenoprotein transcription machinery . interestingly , gpx1 , gpx3 , sepp1 , and sep15 mrna expression levels in circulating blood leukocytes were significantly positively correlated with one another , indicating similar regulation of their expression in circulating blood leukocytes . although significant correlations were found between several selenoprotein mrnas in the circulating blood leukocytes of healthy individuals , no correlation was found in the blood of bladder cancer patients , which may suggest an alteration of selenoprotein synthesis during carcinogenesis ( reszka et al . ) . fig . 1 correlation between plasma se and gpx3 transcript levels in populations with adequate british ( whole blood ; sunde et al . 2008 ) ( a ) and suboptimal polish ( leukocyte ; reszka et al . 2009 ) ( c ) se levels , and correlation between plasma se and sepp1 transcript levels in populations with adequate british ( whole blood ) ( b ) and suboptimal polish ( leukocyte ) ( d ) se levels ; a and b reproduced with permission from sunde et al . ( 2008 ) . the functional significance of blood se status is mainly related to selenoprotein activity in specific tissues . therefore , apart from controlled se intake , both traditional and molecular biomarkers of se status measured in different human tissues may depend on additional major modifiers , such as health status ( e.g. , endocrine and immunological status ) , inter - individual variation ( age , sex , genetic polymorphism ) , environmental exposure , diet , medication , etc . therefore , establishing the se status and intake that would be optimal for human health seems to be very difficult , especially in the many populations experiencing suboptimal se supply , including europe . epidemiological and animal studies clearly indicate that the biological effects of se are sex - specific , which may be associated with endocrine regulation and also immunological status . it has been suggested that cancer risk in men is more profoundly influenced by se status than in women ( waters et al . ) . selenoprotein gene expression displays sexual dimorphism in various organs of females and males , e.g. , se status was linked to male fertility due to gpx4 function during spermatogenesis ( schomburg and schweizer 2009 ) .
the sex - specific selenoprotein expression pattern may vary or be sustained with age . in the selgen study , the effect of se supplementation was associated with gpx4 genetic polymorphism in a sex - specific manner ( meplan et al . ) . plasma se level is not likely to accurately reflect tissue se status or selenoenzyme activity and level . se metabolism may be altered in different health pathologies , e.g. , in patients with inflammatory diseases . during critical illness , like sepsis , acute phase response and other immunological disturbances , serum se status may be lowered and insufficient to support organ function . in systemic inflammatory response syndrome patients , serum se concentration was significantly lower ( vincent and forceville 2008 ) . tobacco smoking or occupational exposure can also increase the dietary requirement for se . it is generally agreed that smoking can decrease the activity of antioxidative selenoproteins , probably due to the formation of complexes of se with cadmium ( ellingsen et al . ) . figure : biomarkers of selenium status in humans , a scheme of traditional and molecular biomarker measurements with the impact of potential modifiers ( physiological , environmental , genetic ) ; the optimal selenium biomarker should reflect all putative exo- and endogenous factors which can modulate selenium bioavailability , metabolism and selenoprotein transcription , biosynthesis , transport , activity and function , and the type of measurement used for the determination of selenium status should also be considered . in human populations , there is a large individual variation in response to se supplementation , which appears to be unrelated to the baseline se status ( brown et al . ) . recent human studies indicate that selenium status , measured as body se as well as sepp1 plasma level and gpx activity , may be significantly influenced by genetic polymorphism of specific selenoproteins , including gpx1 ( rs1050450 ) ( jablonska et al . 2007 , 2009 ) and gpx4 ( rs713041 ) ( meplan et al . 2008 ) . sepp1 variation in codon 234 , associated with an ala to thr change ( rs3877899 ) , and a g to a transition within the 3' untranslated region ( utr ) of sepp1 mrna ( rs7579 ) resulted in the alteration of se status before as well as after se supplementation ( meplan et al . ) . these polymorphisms also influence the proportion of the two sepp1 isoforms of 50 and 60 kda in plasma , which was proposed as a factor modulating se incorporation during selenoprotein synthesis ( meplan et al . 2009 ) . the modulation of se availability by sepp1 variants , resulting in a difference in isoform pattern , was restricted to males . interestingly , a possible impact of gender was also observed for the functional significance of gpx4 genetic polymorphism . a single nucleotide polymorphism in the 3'-utr of gpx4 mrna , associated with a c to t change ( rs713041 ) , influenced the level of gpx4 in lymphocytes , and also of other selenoproteins ; however , this effect was more evident in females ( meplan et al . ) .
one may hypothesize that separate molecular mechanisms for gpx4 synthesis in testes and the high dietary se requirement in males override the effects of genetic polymorphism in gpx4 and sepp1 . recent findings also suggest that the association between gpx1 activity and se concentration , analyzed separately for each gpx1 pro198leu ( rs1050450 ) genotype group , was the highest for the pro / pro and the lowest for the leu / leu genotype , suggesting a different response of gpx1 activity to se in each genotype . this also points to the importance of the genetic background in the assessment of se status with the use of selenoprotein biomarkers such as gpx1 activity ( jablonska et al . ) . adaptation of humans to a suboptimal dietary se supply with low se level was observed by finley et al . ( 1999 ) in healthy study participants living in new zealand , where people consume less se than suggested by the rdi of 55 µg / day . the se level in the blood of individuals living in low se areas and with low se intake may be insufficient for maximal activity of gpx3 and level of sepp1 in plasma . however , supplementation had no effect on se status in platelets and erythrocytes , which can be regarded as indicative of long - term se intake . interestingly , high retention of a stable se isotope was observed in placebo individuals as compared to individuals supplemented with 30 µg se daily for 5 months , suggesting maintenance of a critical se pool in the human body and adaptation to low se status by adjusting its secretion . therefore , the physiological requirement for se in humans is lower than the recommended intake , while still enabling selenoprotein synthesis . besides , in patients with diseases like kbd and cancer , where a low se level in blood is observed , down - regulation of specific selenoprotein gene expression was found to occur in circulating wbc . significant down - regulation of the sepp1 mrna level was observed in han chinese with kbd ( sun et al . 2010 ) . in caucasian male bladder cancer patients , sepp1 , gpx1 , gpx3 and sep15 mrna levels were lower than in the control group ( reszka et al . ) . the preferential incorporation of sec into selenoprotein mrna in circulating blood leukocytes under relatively low se supply merits investigations intended to identify potentially sensitive and resistant selenoproteins under se deficiency in humans . while the levels of se required for optimization of the sepp1 level and gpx3 activity in plasma are well known , estimating the se level that is required for maximal selenoprotein gene expression in humans requires further research . according to the rodent and human studies discussed in this review , it appears that suboptimal se intake may be sufficient to achieve selenoprotein mrna expression and that the molecular requirements for se are lower than the established recommended dietary intake in humans of 55 µg / day ( efsa 2008 ; nas 2000 ) . it should be noted that the expression of an individual selenoprotein mrna may not always be linked with protein expression . therefore , since the transcriptional and translational behaviour of selenoproteins under different se supply in different human tissues is not yet understood , the biological functionality of selenoproteins can be recognized only at the protein level . preferential incorporation of se into selenoproteins in rodents was observed even at suboptimal dietary se levels .
in rats , the minimum se requirement for achieving a plateau of selenoprotein mrna expression is low : expression reaches a plateau at half of the dietary se concentration required for maximal activity of gpx1 and gpx3 and the maximal level of sepp1 in different tissues ( barnes et al . ) . therefore , in rodents , selenoprotein mrna expression does not seem to be a good indicator for se status . hypothetically , in almost all populations , even in those with low and moderate se intake and se level in blood compartments , selenoprotein - encoding gene expression might reach the plateau levels and serve as a sensitive functional biomarker of se status . however , it should be noted that the regulation of selenoprotein gene expression , metabolic pathways and responses to se interventions in animals may differ from those in humans . none of the human studies conducted so far has indicated that selenoprotein mrna in whole blood and blood cells may be a good indicator of se status in humans ( pagmantidis et al . ) . low se status may be adequate for proper regulation of selenoprotein transcription , but not for proper physiological activity of selenoproteins . preferential incorporation of se into selenoproteins and its optimal saturation at a low dietary se level does not provide adequate activities and selenoprotein expression in different tissues . therefore , it may be suggested that functional rather than molecular biomarkers of se are the optimal indicators of its supply . it seems reasonable to conclude that sepp1 concentration in plasma may clearly indicate sub - optimal to optimal se supply , because it reflects the functional significance of se activity in the organism . at over - optimal se supply , where the sepp1 level is optimized , plasma se seems to reflect se intake and the achieved se status . all possible modifiers of se status determined by means of biochemical and molecular techniques , including smoking , occupational exposure , diet , health parameters , sex , age , endocrine and immunological status , should be taken into account , as these factors can potentially modulate selenoprotein gene expression . the impact of selenoprotein polymorphisms should also be included in a complete analysis of the se requirements of individuals with different genotypes and haplotypes . therefore , comprehensive intervention studies of circulating leukocyte selenoprotein transcript levels should be conducted in populations with suboptimal and adequate as compared to over - optimal selenium status .
the most commonly used methods for assessing selenium ( se ) status in humans involve analysis of se concentration , selenoprotein activity and concentration in the blood and its compartments . recently , it has been suggested that the expression of selenoprotein mrna in circulating blood leukocytes could reflect se status differently , due to prioritization of specific selenoprotein synthesis in response to dietary se supply . whereas the se levels required for optimization of selenoprotein p level and plasma glutathione peroxidase activity are well known , estimation of the se level that is required for maximal mrna expression of selenoproteins in humans is the subject of current investigations . studies on rats suggest that the whole blood selenoprotein mrna level can be used as a relevant molecular biomarker for assessing se status , and suboptimal se intake may be sufficient to achieve effective expression . human studies , however , did not confirm this hypothesis . according to the studies on rodents and humans discussed in this review , it appears that suboptimal se intake may be sufficient to satisfy the molecular requirements for se , which are lower than the current recommended dietary intake in humans . the use of selenoprotein transcripts as a molecular biomarker of se status requires further studies on a large group of healthy individuals with different baseline se status , including data regarding genetic polymorphism of selenoproteins and data regarding potential modifiers of se metabolism .
Introduction Traditional biomarkers of Se status Molecular biomarkers of Se status Selenoprotein transcripts in rodents Selenoprotein transcripts in humans Modifiers of selenium status Concluding remarks
PMC3751219
fracture healing of bone is a complex process affected by systemic , biological and mechanical factors , the combination of which can lead to successful and complete repair of a fracture , or deficiencies of which can cause delayed healing or even non - union . angiogenesis is an important early phase of fracture healing , leading to invasion of the initial haematoma by fine vessels and subsequent conversion into soft callus . impaired angiogenesis can compromise fracture healing [ 8 , 14 ] , whereas enhanced vascularity is known to improve fracture healing [ 37 , 43 ] . neovascularisation of the haematoma occurs from the adjacent soft tissues [ 4 , 9 , 10 , 13 ] , especially when the medullary supply has been disrupted . standard histology uses staining to visualise tissue types or tissue components ( such as smooth muscle or endothelial cells ) in the fracture callus and surrounding tissue [ 3 , 21 , 32 , 42 ] . immunohistochemistry techniques label proteins particular to blood vessels , such as vegf and cd31 , to identify regions of angiogenesis and quantify small vessels [ 22 , 28 , 36 ] . capillary proliferation around the fracture site has been demonstrated by micro - angiography , and vascular budding adjacent to the fracture defect was revealed with contrast perfused histological sections . electron microscopy [ 21 , 23 , 38 , 46 ] and intra - vital microscopy [ 47 , 49 ] provide high - resolution images that can indicate fine details of the angiogenesis process , such as changes in cell morphology , gene expression or endothelial cell activity . these techniques can indicate fine details of the vessel structure and distribution , but most are two - dimensional and static , providing little quantitative information on blood flow . detailed three - dimensional structural information is provided by ct scanning with the vessels perfused with contrast resin at killing [ 2 , 34 ] , or in vivo [ 20 , 26 , 30 , 31 ] , but these are often limited snapshots at particular time points and provide no information of the time course of neovascularisation . for example , tomlinson and associates used silicone contrast perfusion of vessels and micro - computer tomography ( micro - ct ) scanning of forelimb stress fractures at 3 and 7 days after loading - induced stress fractures . measures of blood flow or blood perfusion are possible with probe - based laser doppler [ 6 , 15 - 17 ] , ultrasound , radioactive tracer clearance [ 1 , 33 , 45 , 48 ] or functional ct [ 18 - 20 , 39 ] . these functional measures must be performed in vivo and provide temporal analysis of the neovascularisation process throughout the fracture healing period . probe laser doppler measures at discrete points ( sometimes several across a region ) and tracer clearance can be used to provide a regional integral of the measured flow . laser doppler scanning , however , scans continuously over a region and provides 2d surveys of blood perfusion , which can be used to identify vessels and regional flow patterns . in this study , we combined laser doppler scanning ( for longitudinal percutaneous information ) and micro - ct imaging ( for end - point three - dimensional structural information ) to investigate the location and extent of neovascularisation of the soft tissues around the fracture gap during healing . thirty - two sprague dawley rats ( 350 - 500 g ; 12 - 20 weeks of age ) underwent left mid - femoral osteotomy fixed with an external fixator .
all procedures were approved by the imperial college , london , ethical review process and strictly conformed to the animals ( scientific procedures ) act 1986 uk home office guidelines . experiments were performed under a home office licence ppl 70/6472 ( the conditions of which also fulfil the us nih guide for the care and use of laboratory animals ) . inhalation anaesthesia was induced in an induction chamber ( 4 % isoflurane in oxygen at 2 l / min ; isoflo , abbott laboratories ltd . , maidenhead , sl6 4xe , uk ) , the animal was transferred to a rat mask and anaesthesia was maintained with 1 l / min oxygen and isoflurane varied as appropriate between 1.5 and 3 % to maintain full anaesthesia and analgesia but to allow optimal recovery . newbury , rg14 1ja , uk ) and analgesia ( buprenorphine ; vetergesic 0.3 mg / ml ; reckit benckiser healthcare ( uk ) ltd , hull , hu8 7ds , uk ) were administered in appropriate doses ; both thighs were shaved and laser doppler scanned ( model ldi2-hr , moor instruments plc , axminster , england ) . the animal was then prepared recumbent on its side on a heating mat , surgically draped and skin disinfected for surgery . the left femur was exposed through a lateral skin incision and blunt dissection , four stainless steel fixation pins ( 1.25 mm thread diameter ) were inserted transversely 7 mm apart along the femoral length and the femur was cut transversely mid - shaft using a fine hand saw ( fig . 1 ) . the muscle layers and skin were closed ( and sutured with resorbable suture material ) , the pin shafts were passed retrograde through the skin and a custom - designed unilateral fixator was assembled to the pin ends . after rehydration by injection of two 2 ml aliquots of sterile water subcutaneously , the animal was awoken with administration of pure oxygen through the anaesthesia equipment under observation and once fully awake was returned to standard housing . all animals were housed individually with analgesic ( rimadil ) and antibiotic cover ( enrofloxacin 0.05 % ) in the drinking water for 4 days . thereafter , the animals were housed in threes and allowed water and standard laboratory feed ad libitum . fig . 1 : radiograph of healed fracture ( and fixator ) sacrificed 6 weeks after fracture ; the fracture zone is outlined . the full medial aspect of both thighs was laser doppler scanned ( 2.5 cm x 2.5 cm ) at a resolution of 0.1 mm , pre- and post - operatively and at 1 , 2 , 4 and 7 days and thence weekly until sacrifice . animals were killed at 2 and 4 days post - operatively , and at 1 , 2 , 4 and 6 weeks post - injury ( n = 5 per group ) . at sacrifice , the anaesthetised animal was laser doppler scanned , and then the common iliac artery was cannulated , heparinised and infused with silicone contrast agent ( microfil mv-120 , flow tech inc . , carver , mass . ) . the animal was immediately euthanised with pentobarbital , and the hindquarters were harvested and fixed in 10 % neutral - buffered formalin solution for at least 2 weeks . from laser doppler scans , the femoral artery was identified by its bifurcation and location , and four regions of interest were analysed : ( 1 ) femoral artery , ( 2 ) distal femoral artery , ( 3 ) the zone cranial to the artery ( adjacent to the fracture ) and considered to be the tissue most probably involved in neovascularisation of the fracture site [ 13 , 23 , 29 ] and ( 4 ) the zone caudal to the artery ( fig . 2 ) .
proprietary software ( moor ldi v5.1 , moor instruments plc , axminster , england ) computed mean and maximum perfusion in each region ; the mean values were used in subsequent analysis . daily variations in perfusion due to depth of anaesthesia or body temperature were determined by calculating the change of perfusion in each region in the contralateral leg relative to the pre - operative value in that region ( dailyvar ) . we divided the perfusion values of the fractured leg by this parameter to account for fluctuations not associated with the neovascularisation process . mean perfusion measures ( corrected for daily variations ) were then expressed as a percentage change from the pre - operative value for the fractured leg . inter - operator repeatability of the laser doppler analysis was determined by three separate investigators analysing 20 separate scans independently . intra - operator repeatability was determined by one operator repeating the analysis on 20 scans four times . fig . 2 : laser doppler scan with four regions of interest identified : 1 femoral artery , 2 distal femoral artery , 3 cranial region overlying the fracture zone and 4 caudal region . micro - ct scanning of the limb - fixator constructs and intact contralateral femora was performed at 180 kv and 133 µa and a resolution of 21 microns ( hmx st225 , x - tek systems ltd . ) . the excised femora were typically 30 mm in length and thus consisted of about 1,450 slices . the ct scans of both limbs were reconstructed to show the bone and the vessels ( fig . 3 ) . the steel fixator pins caused some artefactual whiteout of the scans at their level , masking some of the vascular regions on those sections , but the fracture zone ( which is the region of interest ) was between the middle pins . scans of fractured limbs were therefore analysed just between the pins on either side of the fracture site ( proximally and distally , pins number 2 and 3 ) , a length of about 8 mm . intact limbs were analysed over the mid - third of the femur , which corresponded to the same level as the fracture region on the contralateral limb . the image stack of the scan was then thresholded automatically ( trainable segmentation , fiji image manipulation software , http://fiji.sc/wiki/index.php/fiji ) to identify four regions : bone , vessel , tissue and air ; the contrast medium - filled vessels displayed ct density midway between bone and soft tissue . the scans were binarised and manipulated to smooth and cohere the vessel regions and minimise all other regions . an automated particle counter ( fiji ) was then used to count the vessels on each cross - section and the total cross - sectional ( vessel ) area and area fraction for each section , and the size and number of vessels in each slice were output to a spreadsheet . the particle counter also reported the size of every particle counted ( the cross - sectional area of each vessel at each slice ) in voxels , which was recorded , and the distribution of sizes was characterised in decades ( 1 , 2 , 5 , 10 , 50 , 100 , 500 , 1,000 and 5,000 voxels , corresponding to vessels of cross - sectional size of less than 20 , then approximately 30 , 50 , 70 , 150 , 200 , 500 and 1,000 µm ) .
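as a small illustration of the daily - variation correction described above , the python sketch below applies the dailyvar normalisation and expresses the corrected perfusion as a percentage change from the pre - operative value ; the function name and the example numbers are illustrative assumptions and are not data or code from the study .

def corrected_perfusion_percent(fractured, fractured_preop, intact, intact_preop):
    """normalise a fractured-leg perfusion reading by the day-to-day variation
    seen in the contralateral (intact) leg, then express the result as a
    percentage change from the pre-operative value of the fractured leg."""
    dailyvar = intact / intact_preop      # daily variation factor from the intact leg
    corrected = fractured / dailyvar      # remove fluctuations unrelated to healing
    return 100.0 * (corrected - fractured_preop) / fractured_preop

# illustrative numbers only: a raw reading of 260 perfusion units on a day when
# the intact leg runs 10 % above its own pre-operative baseline
print(corrected_perfusion_percent(260.0, 200.0, 330.0, 300.0))   # about +18 % over baseline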
the proportion of small vessels was then calculated as the number of vessels whose cross - section was two voxels or less , summed for all slices in the fracture region , divided by the total number of vessels in that region . fig . 3 : three - dimensional reconstruction of micro - ct of bone 6 weeks after fracture ( medial view ) . pins were left in during scanning to preserve fracture morphology , resulting in artefacts at the level of the pins ( gaps in the scan ) . a whole bone without vessels ( the united fracture site can be seen mid - shaft , and the pin locations are indicated by dotted lines ) . b whole bone and vessels ( the femoral artery is clearly seen running across the medial aspect of the central zone of the femur ) . statistical analysis of the perfusion data was performed using one - way analysis of variance ( anova ) and also a multilevel ( growth ) model in spss ( ibm spss statistics , version 20 , 2011 , ibm corp . ) . the growth model was used to analyse the raw perfusion measures from the regional analysis , considering the effects of time under a linear and a quadratic model and using the corresponding measures in the intact leg ( intact ) , the cranial region ( cranial ) , the cranial x intact interaction and body temperature measures as covariates . anova was used to test for differences between the fractured side and intact side measures . vessel density ( vessel count ) and vessel size in the study ( fractured ) limb were compared to those in the contralateral ( intact ) limb for the same region and at the same time point using paired student s t tests with the bonferroni correction . one cannulation failed , preventing contrast infusion , and two animals ' scans were compromised by wound difficulties ( although they were still available for analysis ) . one longer term animal developed respiratory distress and was killed 1 week early ( at 5 weeks ) . therefore , 31 valid animals were micro - ct analysed at 2 days ( n = 4 ) , 4 days ( n = 5 ) , 6 days ( n = 1 ) , 1 week ( n = 5 ) , 2 weeks ( n = 6 ) , 4 weeks ( n = 5 ) , 5 weeks ( n = 1 ) and 6 weeks ( n = 4 ) . all 6-week animals appeared successfully healed on radiographic projection ( fig . 1 ) . inter - operator repeatability studies of the laser doppler analysis gave a spearman s correlation coefficient of r = 0.88 for femoral artery regions and 0.83 for the cranial region . spearman s correlation coefficients for intra - operator repeatability were 0.96 and 0.94 for the femoral artery and cranial region , respectively . blood perfusion in the femoral artery ( region 1 ) and the cranial region ( region 3 , which is adjacent to the fracture region ) dropped immediately after operation , increased greatly from 4 days post - operatively to 2 weeks and then declined . the two other regions ( 2 and 4 ) showed similar trends to a lesser degree ( fig . 4 ) . the mean regional perfusion values over all animals also demonstrated this trend ( fig . 5 ) . fig . 4 : laser doppler scans at successive time points in a typical subject ( above ) and the corresponding plot of mean perfusion by zones .
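the small - vessel fraction and the vessel size binning described in this section can be sketched in a few lines of python ; the per - slice size lists , the helper names and the cumulative reading of the size decades are illustrative assumptions rather than the study 's actual analysis script .

from collections import Counter

# hypothetical per-slice output of the particle counter: one list per slice,
# holding the cross-sectional size (in voxels) of every vessel found there
slices = [
    [1, 2, 7, 120],
    [1, 1, 3, 45, 600],
    [2, 9, 15],
]

SIZE_BINS = [1, 2, 5, 10, 50, 100, 500, 1000, 5000]   # the size "decades" used in the text

def small_vessel_fraction(per_slice_sizes, small_max_voxels=2):
    """fraction of vessels whose cross-section is small_max_voxels or less,
    pooled over all slices in the analysed region."""
    sizes = [s for one_slice in per_slice_sizes for s in one_slice]
    small = sum(1 for s in sizes if s <= small_max_voxels)
    return small / len(sizes)

def size_distribution(per_slice_sizes, bins=SIZE_BINS):
    """assign each vessel to the smallest decade bin that contains it."""
    sizes = [s for one_slice in per_slice_sizes for s in one_slice]
    return Counter(min(b for b in bins if s <= b) for s in sizes)

print(small_vessel_fraction(slices))    # 5 of 12 vessels are <= 2 voxels, about 0.42
print(size_distribution(slices))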
fig . 5 : perfusion ( relative to pre - operative measures ) in the femoral artery region and the cranial region ( adjacent to fracture ) ; pooled data for all animals and all time points . statistical analysis confirmed these observations ; multilevel statistical analysis showed that time after fracture significantly predicted perfusion in the femoral zone of the fractured leg , f(1,169.8 ) = 4.042 , p = 0.046 , and the corresponding measures on the intact limb also significantly predicted perfusion [ f(1,181.7 ) = 8.83 , p = 0.003 ] . the dependence on time was significantly better represented by the quadratic model than by a linear growth model [ χ2 ( 1 ) = 9.858 , p < 0.05 ] , and inclusion of the intact measure was highly significant in improving the model [ χ2 ( 5 ) = 21.213 , p < 0.001 ] . body temperature did not significantly predict perfusion [ f(1,187.92 ) = 0.167 , p = 0.664 ] nor was there significant interaction between time and the perfusion measures [ f(1,137.6 ) = 1.165 , p = 0.282 ] . anova demonstrated a highly significant difference between the femoral ( fractured limb ) perfusion measure and the intact side ( f = 24.5 , p < 0.001 ) , with post hoc comparisons using the least - significant difference ( lsd ) technique indicating significant differences between pre - operative perfusion and post - fracture measures at days 2 - 14 : immediate post - fracture perfusion was highly significantly different ( p < 0.001 ) from all time points except 35 days ( p = 0.018 from pre - fracture perfusion ) . perfusion at day one was highly significantly different from post - fracture perfusion , but not from the pre - fracture measure ( p = 0.109 ) . micro - ct scans indicated no clear changes in the spatial distribution of the vessels over the time of healing , with an even distribution of vessel densities throughout the region surveyed . fractured limbs at all time points displayed highly significantly greater vessel densities ( vessel number ) than their contralateral intact limbs ( p = 0.005 , student s paired t test ) . the number of small vessels for each limb expressed as a fraction of the total number of vessels also showed a significantly greater proportion of smaller vessels on the fractured side ( fig . 6 ) . this proportion also showed an increase from 2 days post - operatively to 14 days post - operatively , and then the difference from the intact side diminished . fig . 6 : small blood vessel fraction ( number of vessels < 3 voxels / total number of vessels ) in the fractured limb and contralateral intact limb ( p = 0.021 , student s paired t test ) . this study is the first to use two - dimensional laser doppler scanning longitudinally throughout the healing period , to show an immediate post - operative decrease in perfusion adjacent to the fracture site , with a subsequent increase , especially throughout the first 2 weeks of healing . by combining this with micro - ct scanning , we have shown that this was achieved by an increase in the number and proportion of small vessels adjacent to the fracture zone .
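the multilevel ( growth ) model quoted above was fitted in spss ; for readers who prefer an open - source route , a rough equivalent can be sketched with the mixed - effects formula api of statsmodels , as below . the file name , column names and the exact covariate set are assumptions for illustration , and this is not the authors ' analysis script .

import pandas as pd
import statsmodels.formula.api as smf

# hypothetical long-format table: one row per animal per scan day
df = pd.read_csv("perfusion_long.csv")   # assumed columns: animal, day, perfusion, intact, temperature

# linear growth model and a quadratic alternative, with the intact-leg measure and
# body temperature as covariates; animal is the random grouping factor
linear = smf.mixedlm("perfusion ~ day + intact + temperature",
                     df, groups=df["animal"]).fit(reml=False)
quadratic = smf.mixedlm("perfusion ~ day + I(day**2) + intact + temperature",
                        df, groups=df["animal"]).fit(reml=False)

# likelihood-ratio comparison of the nested models, analogous to the chi-square
# comparison of the linear and quadratic growth models reported in the text
lr_statistic = 2 * (quadratic.llf - linear.llf)
print(quadratic.summary())
print("likelihood-ratio statistic for the quadratic term:", lr_statistic)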
this elaborates the findings of matsumoto et al . , who used laser doppler scanning of the whole hind limbs of mice to identify perfusion changes ( post - fracture and 2 weeks later ) and histology to measure vessel density . others have used single - point measures by laser doppler probe to impute perfusion in whole regions , also reporting increased perfusion after fracture [ 32 , 50 ] . our laser doppler measurements confirm that neovascularisation begins soon ( 2 - 14 days ) after fracture , particularly in the soft tissues surrounding the fracture ( cranial region of interest in laser doppler scans ) . this indicates that neovascularisation of the tissue occurs in a spatially targeted region , not just through general limb perfusion , and is temporally organised to provide a quick supply of nutrients and blood to the fracture site . the micro - ct vessel analyses indicate that this increased perfusion is achieved by more and finer vessels and not just by vasodilation of existing vessels . the vessel distributions around the fracture site ( along the axis of the femur ) were predominantly even and showed no consistent variation or trend towards one side of the fracture line or location . although the pins were placed in the blunt - dissected surgical approach , it is not possible to exclude any vascular response to this or to the cortical fixation of the pins . similar time trends were reported by melnyk et al . , who also found increased perfusion shortly after fracture followed by a decline , in their study of perfusion in fracture healing with soft tissue damage , using a probe - based transcutaneous laser doppler flowmeter measuring single points at the fracture site and one centimetre distally and proximally . perfusion near the fracture site was only clearly greater than preoperatively at 3 and 7 days , and only slightly greater at 14 days . previous studies have shown similar trends in neovascularisation of soft tissue around the fracture callus , dating back to gothman s studies in the 1960s [ 9 - 12 ] . these studies established clinical practice for treatment of displaced fractures by underlining the benefit of soft tissue neovascularisation . however , neovascularisation analyses were limited to ex vivo assessment of 2d angiographs or histology , or single - point measures of perfusion . by combining laser doppler scanning and micro - ct analysis , we were able to provide 3d structural and 2d functional measurements of vascularisation , showing that neovascularisation occurred by angiogenesis ( new vessel formation ) and not only vasculogenesis ( vasodilation of existing vessels ) . micro - ct scanning of undecalcified bones enabled localisation of the vascularity with respect to the fracture gap , although segmentation and scan analysis were thereby made more difficult . our images indicate that angiogenesis ( as evident by the number of small vessels ) was evenly distributed along the fractured bone length . our micro - ct results correlate with the findings of tomlinson et al . , who also used microfil infiltration and micro - ct to identify vessels in the osteogenic response to overloading . using anti - angiogenic treatment , they showed that angiogenesis was significant in the increased vascularity at 3 and 7 days after loading - induced stress fracture - related osteogenesis . the use of microfil to image vessels down to 10 microns was reported by marxen et al . and demonstrated by vasquez et al . , working with similar 20 micron micro - ct resolution .
neovascularisation of the soft tissues is critical as it provides a blood source for the healing fracture callus . in our current study , we were not able to visualise or quantify vessels directly within the fracture callus . the resolution and depth of penetration of the laser doppler scan were insufficient to quantify the angiogenic processes in the callus . though our micro - ct images most probably captured the intra - cortical vessels , the contrast between bone and vessels was insufficient to allow thresholding differentiation and segmentation . further work will use histology and decalcified bones to examine the vessel structure in the bone and fracture callus . though variations in perfusion were large between animals ( large standard deviations in fig . 5 ) , the temporal and spatial patterns of revascularisation were similar for all animals ( statistical analysis , p < 0.05 ) . large variation in blood perfusion measurements is not uncommon due to the numerous internal and external factors that influence perfusion . sample size at each time point was small ( n = 5 ) but comparable to similar studies , and larger sample sizes would probably not change the overall trends seen . our preliminary data indicated that laser doppler scanning can detect flow through 5 mm of muscle tissue , and melnyk et al . report laser penetration of bone to 2 mm depth and skin / muscle to 6 mm . perfusion images are weighted projections of perfusion as a function of depth ( i.e. , deeper vessels appear to have less flow ) . a few animals suffered skin nicks during shaving , which resulted in extremely high perfusion values despite relatively small skin injury . the injury caused by surgery was all on the lateral side of the leg , and scanning was performed on the medial side , minimising the skin effects of the surgical site . reed has shown that revascularisation at early time points is critical for ensuring healing in that rat model . now that we have developed methods for functional and structural characterisation of neovascularisation , future work will examine methods for increasing vascularity of the soft tissue at these early time points . laser doppler scanning is a non - invasive in vivo measurement causing minimal distress to the animal , which can be used for longitudinal studies of treatment effects and responses . micro - ct complements the functional measures provided by laser doppler with 3d structural information of the vessel network . further investigation using these modalities in fracture healing studies may enable correlation with other factors that promote the healing process .
vascularity of the soft tissues around a bone fracture is critical for successful healing , particularly when the vessels in the medullary canal are ruptured . the objective of this work was to use laser doppler and micro - computer tomography ( micro - ct ) scanning to characterise neovascularisation of the soft tissues surrounding the fracture during healing . thirty - two sprague dawley rats underwent mid - shaft osteotomy of the left femur , stabilised with a custom - designed external fixator . five animals were killed at each of 2 , 4 days , 1 , 2 , 4 and 6 weeks post - operatively . femoral blood perfusion in the fractured and intact contralateral limbs was measured using laser doppler scanning pre- and post - operatively and throughout the healing period . at sacrifice , the common iliac artery was cannulated and infused with silicone contrast agent . micro - ct scans of the femur and adjacent soft tissues revealed vessel characteristics and distribution in relation to the fracture zone . blood perfusion dropped immediately after surgery and then recovered to greater than the pre - operative level by proliferation of small vessels around the fracture zone . multi - modal imaging allowed both longitudinal functional and detailed structural analysis of the neovascularisation process .
Introduction Methods Results Discussion
PMC3125725
the rna molecule , once perceived as a passive carrier of genetic material from dna , has long been shown to possess an active role that is reminiscent of proteins . moreover , in the past several years , new discoveries have demonstrated the peculiar possibilities of an rna molecule to control fundamental processes in living cells [ reviews of some of these recent discoveries can be found in ( 1 - 3 ) ] . although the functional roles of rnas are often related to their 3d structure , the rna secondary structure is experimentally accessible and in a variety of systems contains a significant amount of information to shed light on the relationship between structure and function . in general , rna folding is thought to be hierarchical in nature ( 4,5 ) , where a stable secondary structure forms first and subsequently there is a refinement to the tertiary fold . thus , rna secondary - structure prediction as performed in energy minimization software packages ( 6,7 ) is also important for tertiary structure prediction , let alone by itself . for example , in the recently discovered genetic control elements called riboswitches ( 2,3 ) , a mechanism for bacterial gene regulation by rnas was already observed by examining the secondary structure even before any knowledge about tertiary structure became available . on the prediction side , mutational analysis using the program implemented in our rnamute webserver was performed on a tpp - riboswitch , and experimental results were able to verify the predictions of a deleterious and a compensatory mutation on that riboswitch ( 8 ) . this type of prediction , knowing that it could be verified , may offer prospects for rational design in the future . in general , the purpose of the rnamute webserver is as follows . for a given biological system that involves rna , for example an rna virus , a segment of an mrna of interest or any other type of rna sub - sequence on the length order of 100 - 150 nt , there are most probably some rna secondary - structure motifs , like unique stem loops ( 9,10 ) , that are believed to possess some kind of a functional role . oftentimes , there is a motivation to find a mutation that may alter this functional role . a logical step toward this goal is to predict which mutations may exhibit a fold that is significantly different in its secondary structure than that of the wild - type . in principle , when no other knowledge is available on the behavior of mutations in that system and a multiple alignment is not at hand to use an approach that analyzes substitutions ( 11 ) , to perform comparative modeling ( 12 ) or to generate covariance models ( 13 ) , the best that can be done , and could be very useful , is to predict the folding of the wild - type sequence and several mutants by energy minimization using software such as zuker s mfold ( 6 ) or vienna s rnafold ( 7 ) . for performing this type of mutational analysis in a systematic way , a basic approach that can be traced back to preliminary ideas in ( 14 - 16 ) and later was developed into the rnamute program ( 17,18 ) is to order mutations in various tables according to their distance from the wild - type predicted structure . that way , the mutations with the largest distances can be singled out from the rest for further examination . other approaches that use the same energy parameter rules ( 19 ) were also developed , notably rdmas ( 20 ) and rnamutants ( 21,22 ) , and are reviewed in ( 23 ) .
in practice , the most straight - forward application for performing mutational analysis using rnamute is to guide biochemical experiments that directly involve the insertion of mutations , such as site - directed mutagenesis , despite the limitations of the approach that are mentioned in the continuation . in addition , the growing importance of snp detection based on high - throughput sequencing may also present a need for coarse - grained mutational analysis , such as in investigating the structural behavior of synonymous snps . as a consequence , we have now developed the rnamute webserver that can easily be used by practitioners with no prior knowledge and basically performs mutational analyses based on energy minimization predictions in a user - friendly way . the rnamute program uses folding predictions by energy minimization in an efficient way to analyze neighboring mutants ( e.g. single - point , two - point , three - point and more ) relative to a given wild - type rna sequence . it employs routines from the vienna rna package ( 24 ) , including the folding prediction of suboptimal solutions . for convenience with the problem , the vienna way of calculating the suboptimals ( 25 ) was chosen for the core of rnamute ( 18 ) , although the final output of rnamute can be checked by either mfold ( 6 ) with its original way of calculating suboptimal solutions ( 26 ) or the vienna rna secondary - structure server ( 7 ) for verifying the results . this final verification step is recommended after the user has been able to find some interesting mutations by examining the output of rnamute interactively . it should be clarified that the desired number of mutations is made in the rna sequence , not the secondary structure , allowing the researcher to see the effects of point mutations on the overall structure of the rna . after the user supplies an input sequence and the number of mutations to be analyzed , the initial step of rnamute is to calculate all suboptimal solutions of the input sequence using vienna s rnasubopt . next , an appropriate filtering step is applied to reduce the number of suboptimal solutions , after which only the mutations that stabilize the suboptimal solutions and destabilize the optimal one are considered . in the final step , the mutations reached from the previous step are sorted according to their distance from the wild - type predicted structure , starting from mutations that are at zero distance from the wild - type ( mutations that fold into the same structure as that of the wild - type ) and ending with mutations that are at large distances from the wild - type . the latter , most probably some conformational rearranging mutations , are examined by comparing the folding prediction of the wild - type with the folding prediction of the mutants . the information for comparison is available to the user in output screens reached by single - clicks , and this visualization processing continues until the user collects all the desired candidates for deleterious mutations based on the output at hand . more features are available for the user to control which mutations are to be analyzed using the parameter values ; for example , the user can choose to discard the mutations that change an amino acid after translation . for more details on the method employed by rnamute , the reader is referred to ( 18 ) .
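the overall idea of ranking mutants by their structural distance from the wild - type can be illustrated with a short python sketch . the snippet below is a deliberately naive , brute - force version of that idea ( it folds every single - point mutant and sorts by base pair distance ) ; it assumes that the viennarna python bindings ( the RNA module ) are installed , and it does not reproduce the efficient filtering through suboptimal solutions that rnamute actually uses .

import RNA  # viennarna python bindings (assumed to be installed)

def rank_single_point_mutants(wild_type):
    """fold every single-point mutant and sort the mutants by base pair distance
    from the predicted wild-type structure (naive brute-force illustration)."""
    wt_structure, wt_mfe = RNA.fold(wild_type)
    ranked = []
    for i, base in enumerate(wild_type):
        for alt in "ACGU":
            if alt == base:
                continue
            mutant = wild_type[:i] + alt + wild_type[i + 1:]
            mut_structure, mut_mfe = RNA.fold(mutant)
            distance = RNA.bp_distance(wt_structure, mut_structure)
            ranked.append((f"{base}{i + 1}{alt}", distance, mut_mfe, mut_structure))
    # largest structural change first: candidate deleterious (rearranging) mutations
    return sorted(ranked, key=lambda entry: entry[1], reverse=True)

if __name__ == "__main__":
    toy_sequence = "GGGAAACGCUUCGGCGUUUCCC"   # toy sequence, not taken from the paper
    for name, distance, mfe, structure in rank_single_point_mutants(toy_sequence)[:5]:
        print(name, distance, round(mfe, 2), structure)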
the rnamute webserver ( http://www.cs.bgu.ac.il/~xrnamute/xrnamute ) runs on a unix cluster with four types of computation nodes , including : ibm x3550 m3 servers with 2 quad core xeon e5620 2.40 ghz smt processors with 12 m l3 cache and 24 g ram ( max ppn = 16 ) ; intel smp servers with 2 quad core e5335 2.00 ghz processors with 4 m l2 cache and 4 g ram ( max ppn = 8 ) ; intel smp servers with 2 dual core xeon 5140 2.33 ghz processors with 4 m l2 cache and 4 g ram ( max ppn = 4 ) ; and pentium4 2.40 ghz processors with 512 mb ram . the input screen of the rnamute webserver is shown in figure 1 ( containing default parameter values ) . in addition , the number of mutations should be inserted ( a value of 1 corresponds to single - point mutations , a value of 2 corresponds to double - point mutations , and a value of m corresponds to m - point mutations ) . next , the user can choose to select ' do not change amino acids ' , in which case the start of the reading frame should also be supplied in order for the constraint that considers the genetic code to be effective . on the right , the clustering resolution for each of the three tables should be chosen . this controls how the grouping of the mutations will appear in each table , but the exact values are less critical because they can also be updated at a later stage for a convenient examination of the corresponding tables . after selecting the above options , the remaining parameters are dist1 , dist2 , e - range , type of distance and type of method . they are all described in detail in the tutorial page that is accessible by pressing ' help ' at the bottom of the screen , and in the methodology paper for the efficient version of rnamute ( 18 ) . the user can choose between two different types of distance for filtering the suboptimal solutions : hamming distance or base pair distance . hamming distance calculates the number of mismatches between the two dot - brackets being compared , whereas the base pair distance is given by the number of base pairs that have to be opened or closed to transform one structure into the other . the base pair distance has been widely used for comparing between two rna secondary structures , and is a fine choice for being selected by the user , although there are certain special instances in which the hamming distance may be slightly preferred . for example , one may compare two dot - brackets for which the base pair distance is 8 whereas the hamming distance is only 2 , with the latter faithfully reflecting a slight change to the overall structure if this is indeed desired . once the distance type is specified , numerical values should be inserted for dist1 , dist2 and e - range . the two parameters dist1 and dist2 are used for filtering the suboptimal solutions that are close to the optimal and close to each other , respectively . it is recommended that their values be 25% of the sequence length , and this value should be lowered if more solutions are desired . the parameter e - range is the one used in the rnasubopt routine from the vienna rna package ( 7,24 ) . in general , a larger e - range value will provide better results but also take a longer time to compute .
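to make the two distance measures concrete , the short python functions below compute the hamming distance and the base pair distance between two dot - bracket strings and apply them to a hypothetical pair of structures ; the pair is constructed here purely for illustration ( it is not the example from the original text ) , but it does have a base pair distance of 8 and a hamming distance of 2 .

def pairs(dot_bracket):
    """return the set of base pairs encoded by a dot-bracket string."""
    stack, found = [], set()
    for i, ch in enumerate(dot_bracket):
        if ch == "(":
            stack.append(i)
        elif ch == ")":
            found.add((stack.pop(), i))
    return found

def bp_distance(s1, s2):
    """number of base pairs that must be opened or closed to turn s1 into s2."""
    return len(pairs(s1) ^ pairs(s2))

def hamming(s1, s2):
    """number of mismatched characters between two equal-length dot-brackets."""
    return sum(a != b for a, b in zip(s1, s2))

# hypothetical illustration: moving one outer bracket shifts the register of
# every rung of the interrupted helix, so all base pairs change (base pair
# distance 8) while only two characters of the string change (hamming distance 2)
s1 = ".((((....)..)..)..)"
s2 = "((((.....)..)..)..)"
print(bp_distance(s1, s2), hamming(s1, s2))   # prints: 8 2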
our suggestion is that e - range values between 8 and 15 be used for a sequence length of 100 bases . it is advisable to use lower values first , and if the running time is too short , one can always increase the e - range and try another run . for the method type , we provide four different complexity modes for our algorithm : ' fast , only stabilizing ' , ' slow , only stabilizing ' , ' fast , stabilizing and destabilizing ' and ' slow , stabilizing and destabilizing ' . we suggest using initially one of the two ' fast ' options : the first option is the fastest and can be used for the initial trial calculation , providing a sufficient number of solutions to begin with , whereas the third option is slower but provides more solutions compared to the first , offering a refinement . the two ' slow ' options are more exhaustive than the ' fast ' options and they will run even slower . by default , ' fast , only stabilizing ' is selected . finally , the user specifies whether the results should arrive by email , in which case the email address should be specified . when submitting the job interactively , in some cases the results may take several minutes to compute , and patience is advised while following the instructions on the screen . the results are guaranteed to be kept for at least one week after they are generated , in the web link that is provided to the user . in addition to keeping the web link for later use , the user has an option to download the essence of the results as a static file containing textual information . after the example parameters in the input screen of figure 2 are inserted and the form is submitted , the preliminary results screen appearing in figure 3 is obtained . the query rna sequence appears at the top , and below it are three tables for ordering mutations using tree - edit distance , base pair distance and hamming distance . it should be noted that the more expensive tree - edit distance was not considered during the stage of filtering suboptimal solutions ( the choice was between base pair distance and hamming distance ) , but it is used together with the other two for sorting mutations according to their distance from the wild - type predicted structure . each row in the tables contains some distance range and the number of mutations that are within this distance range . the clustering resolution , which is a technical feature that is used to control the amount of resolution in each table being displayed for convenience to the user , can be updated for each table separately . figure 4 illustrates how the changes in the tables of figure 3 occurred as a consequence of fine tuning the clustering resolution parameter . when the clustering resolution was manually changed and updated to a value of 1 in the base pair distance table , all the mutations in the 8 - 26 group were re - distributed to subgroups where there is a difference of only 1 between the upper value in the distance range of a particular group and the lower value in the distance range of the next group , exclusive of the group 66 that contains only one mutation . next , the user can click on each distance range table entry to obtain the list of mutations belonging to that group . figure 5 displays the mutation group list screen as a result of clicking on the 22 - 26 hamming distance range entry in the hamming distance table of figure 4 .
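the grouping of mutations into distance - range rows , governed by the clustering resolution , can be sketched as follows ; the mutation list , the resolution value and the half - open binning rule are hypothetical choices made for illustration and do not reproduce the server 's exact grouping code .

from collections import OrderedDict

# hypothetical (mutation name, distance from the wild-type structure) pairs
mutations = [("g7c-a9u", 24), ("c12g", 3), ("a5u", 0), ("g7c", 9),
             ("u3a-c12g", 22), ("a9u", 11), ("c2g", 26)]

def group_by_distance(mutations, resolution):
    """bucket mutations into distance ranges of width `resolution`,
    mimicking one table row per distance range."""
    groups = OrderedDict()
    for name, distance in sorted(mutations, key=lambda m: m[1]):
        low = (distance // resolution) * resolution
        groups.setdefault((low, low + resolution - 1), []).append(name)
    return groups

for (low, high), names in group_by_distance(mutations, resolution=5).items():
    print(f"{low}-{high}: {len(names)} mutation(s) -> {names}")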
in the mutations table appearing in figure 5 , each row in the table contains the mutation name , the corresponding distance from the wild - type , the minimum free energy of the mutant predicted fold in units of kcal / mol , and the dot - bracket representation of the mutant predicted fold . finally , by pressing on each mutation name , a corresponding new page appears with detailed structure and energy information for the mutation . figure 6 shows the output screen that corresponds to mutation g7c - a9u available in figure 5 . it contains secondary - structure drawings of the wild - type and the mutant that facilitate examination of the structural change . the sequences of the wild - type and mutant predicted structures , with the mutated bases in the mutated sequence and structure painted in red , appear below the secondary - structure drawings . detailed information about the free energies , dot - bracket representations and the various distances of the mutant predicted structure from the wild - type predicted structure is given at the bottom of the page . this way the user can scan several rearranging mutations by clicking on promising candidates that are available in figure 5 , until a desired mutation for a specific task is reached . figure 2 : the input screen of the rnamute webserver with the example parameters inserted ; in the example , the number of mutations is set to 2 and a more time consuming method is employed relative to the default one . figure 3 : the preliminary results screen of the rnamute webserver , ordering mutations in tables according to their distances from the wild - type predicted structure . figure 4 : the preliminary results screen of the rnamute webserver after fine tuning the clustering resolution parameter in some of the tables . figure 5 : the mutation group list screen as a result of running rnamute for the case of two - point mutations for the example sequence . figure 6 : output screen of a rearranging mutation in the example sequence as a result of running rnamute for the case of two - point mutations and single clicking in the mutation group list screen shown in figure 5 on the highlighted mutation g7c - a9u ; the secondary - structure drawings for the wild - type and the mutant are plotted .
in the mutations table appearing in figure 5 , each row contains the mutation name , the corresponding distance from the wild - type , the minimum free energy of the mutant predicted fold in units of kcal / mol , and the dot - bracket representation of the mutant predicted fold . finally , by pressing on each mutation name , a corresponding new page appears with detailed structure and energy information for that mutation . figure 6 shows the output screen that corresponds to mutation g7c - a9u available in figure 5 . it contains secondary - structure drawings of the wild - type and the mutant that facilitate examination of the structural change . the sequences of the wild - type and mutant predicted structures , with the mutated bases in the mutated sequence and structure painted in red , appear below the secondary - structure drawings . detailed information about the free energies , dot - bracket representations and the various distances of the mutant predicted structure from the wild - type predicted structure is given at the bottom of the page . in this way the user can scan several rearranging mutations by clicking on promising candidates that are available in figure 5 , until a desired mutation for a specific task is reached . figure 2.the input screen of the rnamute webserver with the example parameters inserted . in the example , the number of mutations is set to 2 and a more time consuming method is employed relative to the default one . figure 3.the preliminary results screen of the rnamute webserver , ordering mutations in tables according to their distances from the wild - type predicted structure . figure 4.the preliminary results screen of the rnamute webserver after fine tuning the clustering resolution parameter in some of the tables . figure 5.mutation group list screen as a result of running rnamute for the case of two - point mutations for the example sequence . figure 6.output screen of a rearranging mutation in the example sequence as a result of running rnamute for the case of two - point mutations and clicking in the mutation group list screen shown in figure 5 on the highlighted mutation g7c - a9u . the secondary - structure drawings for the wild - type and the mutant are plotted . recent discoveries of functional rna secondary - structure motifs in a variety of non - coding rnas and other contexts , such as viruses , have boosted the interest in analyzing the effect of mutations on structure . these discoveries have led to an increasing number of site - directed mutagenesis experiments that affect such motifs .
whether the purpose is to study the structural properties of these functional motifs or to perform smart modifications for rational design purposes , there is a clear motivation to develop a computational framework for the mutational analysis of rna secondary structures . when no rna alignments are available , only a single rna sequence , one relies at present on thermodynamic parameters as the main framework ( as was done in the development of rnamute , rdmas and rnamutants , see ( 23 ) for their descriptions and comparison ) . toward this end , rna secondary - structure predictions by energy minimization are performed on rna wild - type and mutant sequences . thus , sequences that have been shown to fold correctly by experimental structure determination techniques to their energy minimization predicted structure are the best to work with as inputs to these programs in order to achieve reliable results . though exceptional cases exist , in general the upper range estimate for the sequence length that these programs are useful for is 150 nt ; therefore , the rnamute webserver supports sequences of up to 200 nt long . for example , rna functional motifs of up to 150 nt that form stable stem loop structures and are taken from utrs or orfs of viruses may constitute favorable candidates for their analysis with the rnamute webserver although this is by no means inclusive . the goal of the methodology behind the webserver is to process a large number of mutations efficiently . the analysis of multiple point mutations without any efficient strategy is highly expensive since the running time is o(n ) for a sequence of length n with m - point mutations . the rnamute method that is now implemented in a webserver was developed to meet this challenge . by calculating in the initial stage all suboptimal solutions , after which only the mutations that stabilize the suboptimal solutions and destabilize the optimal one are considered as candidates for being deleterious , the method employed reduces the running time from several hours to several minutes as was described in ( 18 ) . thus , the methodology behind the webserver enables its practical use for the analysis of multiple - point mutations . the rnamute webserver was developed with the goal of making the efficient method for the mutational analysis of rna secondary structures available for the entire biological community . the webserver is user - friendly and accessible to practitioners , both in terms of ease of use and simplification of the output . we believe that it will serve experimental groups for improving their capability to perform rna mutational analysis . the lynne and william frankel center for computer sciences , ben - gurion university ; united states israel binational science foundation ( 2003291 ) . funding for open access charge : the lynne and william frankel center for computer sciences , ben - gurion university .
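as a footnote to the efficiency discussion in the conclusions above , the combinatorial growth of exhaustive multiple - point analysis is easy to quantify : an n - base sequence has c(n , m) * 3^m distinct m - point mutants , each requiring its own folding prediction in a naive approach . the short sketch below ( illustrative only ) tabulates these counts for n = 100 .

```python
from math import comb

def num_point_mutants(n, m):
    """Number of distinct m-point mutants of an n-base sequence:
    choose m positions, with 3 alternative bases at each."""
    return comb(n, m) * 3 ** m

n = 100
for m in (1, 2, 3):
    print(f"{m}-point mutants of a {n}-base RNA: {num_point_mutants(n, m):,}")
# 1-point: 300; 2-point: 44,550; 3-point: 4,365,900 -- hence the need for the
# suboptimal-structure prefiltering strategy used for multiple-point mutations.
```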
rna mutational analysis at the secondary - structure level can be useful for a wide range of biological applications . it can be used to predict an optimal site for performing a nucleotide mutation at the single - molecule level , as well as to analyze basic phenomena at the systems level . for the former , as more sequence modification experiments are performed that include site - directed mutagenesis to find and explore functional motifs in rnas , a pre - processing step that helps guide the planning of the experiment becomes vital . for the latter , mutations are generally accepted as a central mechanism by which evolution occurs , and mutational analysis relating to structure should provide a better understanding of system functionality and evolution . in the past several years , the structure - based program rnamute , which relies on rna secondary - structure prediction , has been developed for assisting in rna mutational analysis . it has been extended from single - point mutations to treat multiple - point mutations efficiently by initially calculating all suboptimal solutions , after which only the mutations that stabilize the suboptimal solutions and destabilize the optimal one are considered as candidates for being deleterious . the rnamute web server for mutational analysis is available at http://www.cs.bgu.ac.il/~xrnamute/xrnamute .
INTRODUCTION THE RNAMUTE METHOD RNAMUTE WEBSERVER Input Output CONCLUSIONS FUNDING
PMC4618553
skeletal muscles are innervated by sensory neurons that provide the central nervous system with important information about the muscle environment . muscle spindles are highly specialized encapsulated structures composed of intrafusal muscle fibers that are located in parallel with extrafusal muscle fibers . spindles are innervated by group ia and ii afferents that encode changes in muscle length , and intrafusal fiber tone is dynamically regulated by innervating gamma motor neurons . group ib afferents are positioned between muscle fibers and their tendon insertions and are sensitive to changes in muscle force , for instance during muscle contraction . the group ia , ib and ii afferents provide proprioceptive feedback that aids in appropriate motor control . additional populations of muscle afferents located throughout the muscle ( group iii and iv ) serve to signal metabolite buildup , the presence of nociceptive stimuli , and muscle temperature . this muscle sensory information is critical to maintaining homeostasis , preventing muscle damage and modulating movement . activation of nociceptors can lead to the induction of chronic pain states , and aberrant signaling from the muscle proprioceptors can lead to problems with balance or movement . we use an isolated muscle - nerve in vitro preparation to study the response of muscle sensory neuron receptor endings from adult mice of both sexes ( shown are responses from 2 - 4 month old c57bl/6 mice ) . this preparation requires the isolation of the deep peroneal branch of the sciatic nerve and the extensor digitorum longus ( edl ) muscle , a fast twitch muscle of the peroneal group found in the lateral part of the lower leg . the edl is often used to study muscle contractile properties , has tendons that are easy to isolate and is small enough to allow adequate diffusive oxygen supply at rest and with reasonable contraction duty cycles . other muscles in this area ( for instance soleus and tibialis anterior ) are of similar size , and this preparation could be easily modified to record from afferents from these muscles . a suction electrode is placed onto the cut end of the nerve to record muscle sensory afferent firing . individual neurons can be identified and analyzed based on their spike shape using spike sorting software . stimulating electrodes in the bath or on the nerve can be used to evoke muscle contraction . muscle length and force are controlled and measured with a force and length controller . similar in vitro preparations have been used in rodents to study group iii and iv muscle afferents in the rat edl , group ia and ii spindle afferents in the rat fourth lumbrical toe muscle and group iii and iv muscle afferents in the mouse plantar muscle . an in vitro system has the advantages of pharmacological accessibility and direct control of perfusate and physiochemical variables like temperature and ph . an in vitro approach also eliminates the potential in vivo confounds of anesthesia and muscle perfusion status . however , while the muscle - nerve preparation allows for the study of the direct response of the afferents to a perturbation , the ability to study gamma efferent modulation of spindle sensitivity and other integrated responses that is possible with in vivo or ex vivo preparations is lost . this preparation was previously used to characterize the response of mouse muscle spindle afferents to a battery of ramp and hold stretches and vibrations , and it was determined that mouse spindle afferent responses were similar to those reported in other species such as rats , cats , and humans .
mouse spindle afferent responses were found to be similar at both 24 c and 34 c bath temperatures , although at 34 c absolute firing rates were faster and the afferents were more able to respond to faster length changes . we describe below how this preparation can be used to identify and study the muscle spindle afferents . moreover , this preparation can easily be modified to study the response of other muscle afferent subtypes , to compare the properties of sensory afferents in response to a drug or disease state or assorted other variables ( e.g. , age , sex , gene knockout ) . appropriate national and institutional ethics should be obtained before performing animal experiments . weigh and deeply anesthetize an adult mouse with inhaled isofluorane using a vaporizer with 5% isofluorane plus a 1.5 l / min oxygen flow rate or a bell jar with isofluorane soaked cotton on the bottom . ensure that the mouse is deeply anesthetized and does not respond to a toe pinch . make a ventral midline cut through the ribs using scissors and remove the internal organs . skin the animal by grasping the skin from the neck area and pulling it past the feet . remove the legs by cutting above the hips and place the skinned legs into a dish with chilled ( 4 c ) , carbogenated ( 95% o2 , 5% co2 ) low calcium , high magnesium bicarbonate buffered saline solution containing in mm : 128 nacl , 1.9 kcl , 1.2 kh2po4 , 26 nahco3 , 0.85 cacl2 , 6.5 mgso4 , and 10 glucose ( ph of 7.40.05 ) . place the legs dorsal side up in the dish and pin the legs and hips down using needles or insect pins so that the knee and ankle joints are at a 90 angle . place a pin on each end of the feet and one pin on each of the anterior and posterior thighs to hold the tissue in place . using castroviejo spring scissors , lift up the top layer of muscle on the thighs and make a midline cut to expose the sciatic nerve directly below . the sciatic nerve is located just below the top muscle layer and runs above the femur from its exit near the hips until it branches into the common peroneal and tibial nerve just before the knee joint . remove the muscle above the sciatic nerve until the point at which the deep peroneal nerve branch dives into the flexor hallucis longus ( fhl ) muscle is visible . using # 55 forceps , dissect the connective tissue around the peroneal branch to free it from the gastrocnemius and soleus muscles , which are on the medial side of the tibia . cut the tendons of the gastrocnemius and soleus muscles and carefully remove them from under the nerve . remove the superficial muscle on the lateral side of the tibia to expose three distinct tendon bundles at the ankle joint from medial to lateral : fhl , edl , and tibialis anterior ( ta ) , with the edl hidden underneath the ta . cut the ta tendon at the ankle joint , lift the ta up and away from the edl , and cut the muscle near the knee to remove it . cut the fhl tendon at the ankle and lift it back to reach the area where the nerve enters the fhl . cut just below that point and remove ~2/3 of the fhl muscle . cut the sciatic nerve as close to the hip joint as possible and gently strip away all nerve branches except for the deep peroneal branch . cut the edl tendons at both the ankle and knee joints using large spring scissors . using sharp scissors , remove the edl , the remaining fhl and nerve from the surrounding tissue by cutting through the tibia bone at the knee and midway through the thigh . 
cut away the remaining tibia bone so that just the edl , part of the fhl and nerve remain . before the start of the experiment , constantly perfuse the bath with oxygenated ( 100% o2 ) synthetic interstitial fluid ( sif ) containing ( in mm ) 123 nacl , 3.5 kcl , 0.7 mgso4 , 1.7 nah2po4 , 2.0 cacl2 , 9.5 nac6h11o ( sodium gluconate ) , 5.5 glucose , 7.5 sucrose , and 10 n-2-hydroxyethylpiperazine - n-2-ethanesulfonic acid ( hepes ) ; ph 7.40.05 . a flow rate of 15 - 30 shown is a commercially available bath of 25 ml capacity with two stimulating electrodes fixed to the bottom of the bath , a mounted tissue post and a mount for a force and length controller ( approximate bath dimensions 8.5 cm x 3 cm x 1 cm ; for specifications see table of materials / equipment ) . use the remaining fhl tissue to handle the isolated muscle - nerve and place it into the tissue bath . place a small piece of sylgard on the bottom of the dish and use 6 - 0 silk sutures to tie both tendons and affix one end to the tissue post and the other to the lever arm of the force and length controller ( see table of materials / equipment for specifications ) . use the smallest suture length that is practical . note : to facilitate easy connection to the lever arm a small piece of wire can be bent into a the suture can then be tied to the wire instead of threaded into the small hole on the lever arm . make suction electrodes from sa 16 glass by first using a glass micropipette puller ( heat = 286 , pull = 0 , velocity = 150 , time = 200 ) ; break the tip back and manually grind it on a sharpening stone until there is about a 3 mm taper . melt the tip using a microforge to the desired tip inner diameter of between 10 - 100 m depending on the area of nerve one wishes to sample from ( see figures 1b-1c for electrode schematic and table of materials / equipment for product information ) . fill a premade glass suction electrode to the inner silver wire with sif . suction the cut end of the nerve into the electrode and connect to the positive port of a differential amplifier . wrap the electrode with a chlorided silver wire that connects to the negative port of the headstage . ground the sif bath by running a second chlorided silver wire from the bath to the headstage s ground port . also ground the perfusion tubing to the faraday cage at multiple points to mitigate electrical noise introduced via the perfusion pumps . stimulate the muscle via the electrodes mounted on either side of the muscle in the tissue bath to induce a twitch contraction . alternatively place a stimulating electrode on the cut end of the nerve . increase the stimulating voltage until a peak contractile force is observed and then increase the voltage by an additional 15% to reach supramaximal voltage ( 0.5 msec pulse width ) . continue the twitch contractions at the supramaximal voltage , with a 10 sec rest in - between , but vary the length of the muscle until a peak contractile force is reached to find the optimal length ( lo ) of the muscle . all length ramps and vibrations will start with the muscle at this length . allow the muscle - nerve preparation to remain in the bath for at least 1 hr before subsequent data collection to allow the tissue to reach the bath temperature and for normal synaptic transmission to recover following dissection in a low calcium solution . to collect data at a temperature other than room temperature , place a temperature probe into the tissue bath near the muscle . 
slowly bring the bath up to temperature by pumping heated water through the tissue bath base plate . wrap clay microwavable heating pads around the sif reservoir to help maintain a steady temperature . to identify an afferent as a spindle afferent , record the neuronal activity during repeated twitch contractions produced by a 0.5 msec supramaximal voltage stimulus delivered once every second . note : muscle spindle afferents should pause during the twitch contraction ( figure 2 ) . use data acquisition software to apply length changes at different speeds and to different lengths . use a custom script to automate this task ( see supplemental information for a screenshot of the script and directions on how to customize the stretches given ) . apply ramp - and - hold stretches of 4 sec at stretch lengths of 2.5% , 5% , and 7.5% lo and stretch speeds of 20 , 40 , or 60% lo / sec . note : see user manual for specific force and length controller for necessary voltage - to - millimeter conversion factor . at the end of the experiment , determine muscle health at 24 c using maximal isometric tetanic contractions ( 500 msec train , 120 hz train frequency , 0.5 msec pulse width , supramaximal voltage ) . compare the peak contractile force to previously reported values ( ~24 n / cm ) . the response of muscle afferents can be recorded following a variety of perturbations , depending on which afferent subtype is being studied . representative responses of muscle spindle afferents to muscle contraction and ramp and hold stretch are shown here . to identify an afferent as a spindle afferent , twitch contractions are given once every second ( 0.5 msec pulse width ) to see if there is a pause in firing during contraction . figure 2 shows a representative trace of neuronal activity and muscle tension during these twitch contractions . if the recorded afferent was a group ib golgi tendon organ afferent , an increase in firing rate during the contraction would be expected . under control conditions most afferents with a regular firing pattern ( ~12 impulses / sec at 24 c and ~32 impulses / sec at 34 c ) are muscle spindle afferents . a subset of spindle afferents will only fire during stretch ( in our hands ~11% of spindle afferents ) . figure 3a shows a representative raw trace of two muscle spindle afferents responding to a ramp and hold stretch produced by the force and length controller . the spike histogram feature of labchart is used to identify and analyze the instantaneous firing frequency of the two individual neurons separately ( figures 3b-3c ) . isolated muscle preparation a ) the extensor digitorum longus ( edl ) and innervating sciatic nerve are mounted in an isolated tissue bath perfused with oxygenated synthetic interstitial fluid ( sif ) .
an extracellular amplifier connected to a suction electrode records neural activity . a dual force and length controller with appropriate software controls and measures muscle force and length . c ) magnified depiction of the ideal tip shape for the suction electrode that is produced using a microforge neuronal activity ( top trace ) and muscle tension ( bottom trace ) from 30 twitch contractions are superimposed with top overlapping trace in black . afferent activity pauses during contraction - induced tension increase , which is a characteristic response of muscle spindle afferents . bracket and arrow above the top trace denote the time during contraction when activity is paused . spindle afferent response to ramp and hold stretch a ) raw neural activity of two afferents ( top trace ) during a ramp and hold stretch applied to the edl ( muscle length shown in bottom trace ) . fr . , impulses per sec ) of the unit exhibiting activity at lo from a ( shown in blue ) . c ) instantaneous firing frequency of the smaller unit that only fires during the stretch ( shown in orange ) . both units exhibit the spike frequency adaptation during stretch that is characteristic of muscle spindle afferents . the goal of this article was to describe a method for recording from muscle spindle afferents in an isolated mouse muscle - nerve preparation . we have found that mouse spindle afferents respond similarly to stretch as those from rats , cats and humans , and other laboratories have used mice as model organisms to study sensory neurons in both the muscle and the skin in vitro ( for example ) . muscle sensory afferents can be recorded for at least 6 - 8 hr at both 24 c and 34 c . to minimize handling the edl and nerve , use the remaining portion of the fhl to handle the tissue whenever possible . following the dissection , wait at least 1 hour to start data collection to allow the tissue to equilibrate to bath temperature and for normal synaptic transmission to recover . previously , muscle health was verified on a subset of muscles used in this preparation by determining that the maximal tetanic contractile force generated by the muscles is similar to that reported by others at the beginning and end of the experiment . muscle spindle afferents were identified functionally in this preparation by looking for a characteristic pause in firing in response to twitch contraction ( figure 2 ) as well as the expected instantaneous frequency increases in response to length changes ( figure 3 ) . in our experience , the length of the nerve retained in this preparation ( ~7 mm maximum ) is not long enough to use afferent conduction velocity differences to identify muscle afferent subtypes . additionally , unlike in cats and humans , measures of dynamic sensitivity were not able to clearly differentiate group ia and ii afferents in mice ( for further discussion see wilkinson et al . ) . other subtypes of afferents ( i.e. , group iii and iv ) can be identified using additional functional tests in this preparation , for instance by adding substances like capsaicin , atp , or bradykinin , decreasing bath ph , exposing the muscle to ischemia , etc . using suction electrodes allows the activity from multiple sensory neurons to be recorded at once , which increases the amount of data that can be collected from a single muscle . this preparation can be used to gauge the overall effect of a perturbation on sensory neuron population responses or the responses of identified afferents . 
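as an illustration of the off - line analysis described above , the python sketch below computes instantaneous firing frequency from sorted spike times and applies a simple pause test of the kind used to classify an afferent as a muscle spindle afferent . it is not the labchart / spike2 workflow itself , and the spike times and stimulus window are hypothetical .

```python
import numpy as np

def instantaneous_frequency(spike_times):
    """Instantaneous firing frequency (impulses/sec) as 1/ISI,
    reported at the time of the later spike of each interval."""
    spike_times = np.asarray(spike_times, dtype=float)
    isi = np.diff(spike_times)
    return spike_times[1:], 1.0 / isi

def pauses_during(spike_times, window_start, window_stop):
    """True if no spikes occur inside the stimulus window (e.g., during a twitch
    contraction), the spindle-like behavior; a Golgi tendon organ afferent
    would instead fire during the window."""
    spike_times = np.asarray(spike_times, dtype=float)
    in_window = (spike_times >= window_start) & (spike_times <= window_stop)
    return not np.any(in_window)

# Hypothetical spike train (sec): regular ~12 impulses/sec firing that
# stops during a 0.1 s twitch contraction starting at t = 0.50 s.
spikes = [0.10, 0.18, 0.26, 0.34, 0.42, 0.65, 0.73, 0.81]
t, freq = instantaneous_frequency(spikes)
print(np.round(freq, 1))                  # ~12.5 impulses/sec away from the twitch
print(pauses_during(spikes, 0.50, 0.60))  # True -> consistent with a spindle afferent
```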
if the spike shapes are unique enough , up to 4 sensory neurons can be discriminated by software ( both spike2 ( cambridge electronic design ) and labchart pro ( ad instruments ) have performed similarly ) . in cases where neurons can not be discriminated , changes in electrode placement or tip diameter can easily be implemented . in summary , the mouse muscle - nerve in vitro preparation is a simple experimental approach that can be used to investigate the response properties of muscle sensory afferents to various physicochemical perturbations , injury and disease models . additionally , this preparation is ideally suited to take advantage of the powerful genetic tools available in mice , including transgenic animals and optogenetic tools . the open access charges for this manuscript have been paid by aurora scientific , inc . , the company that makes the force and length controller and in vitro bath plate used in this manuscript .
muscle sensory neurons innervating muscle spindles and golgi tendon organs encode length and force changes essential to proprioception . additional afferent fibers monitor other characteristics of the muscle environment , including metabolite buildup , temperature , and nociceptive stimuli . overall , abnormal activation of sensory neurons can lead to movement disorders or chronic pain syndromes . we describe the isolation of the extensor digitorum longus ( edl ) muscle and nerve for in vitro study of stretch - evoked afferent responses in the adult mouse . sensory activity is recorded from the nerve with a suction electrode and individual afferents can be analyzed using spike sorting software . in vitro preparations allow for well controlled studies on sensory afferents without the potential confounds of anesthesia or altered muscle perfusion . here we describe a protocol to identify and test the response of muscle spindle afferents to stretch . importantly , this preparation also supports the study of other subtypes of muscle afferents , response properties following drug application and the incorporation of powerful genetic approaches and disease models in mice .
Introduction Protocol 1. Removal of EDL Muscle and Nerve 2. Mounting of the EDL Muscle and Nerve into the Tissue Bath (Figure 1A) 3. Data Collection Representative Results Discussion Disclosures
PMC3114462
single excitation configuration interaction ( cis),(1 ) time - dependent hartree fock ( tdhf ) , and linear response time - dependent density functional theory ( tddft ) are widely used for ab initio calculations of electronic excited states of large molecules ( more than 50 atoms , thousands of basis functions ) because these single - reference methods are computationally efficient and straightforward to apply . although highly correlated and/or multireference methods , such as multireference configuration interaction ( mrci),(10 ) multireference perturbation theory ( mrmp(11 ) and caspt2 ) , ( 12)and equation - of - motion coupled cluster methods ( sac - ci(13 ) and eom - cc ) , allow for more reliably accurate treatment of excited states , including those with double excitation character , these are generally too computationally demanding for large molecules . cis / tdhf is essentially the excited - state corollary of the ground - state hartree fock ( hf ) method and thus similarly suffers from a lack of electron correlation . because of this , cis / tdhf excitation energies are consistently overestimated , often by 1 ev.(8 ) the tddft method includes dynamic correlation through the exchange correlation functional , but standard nonhybrid tddft exchange correlation functionals generally underestimate excitation energies , particularly for rydberg and charge - transfer states.(5 ) the problem in charge - transfer excitation energies is due to the lack of the correct 1/r coulombic attraction between the separated charges of the excited electron and hole.(16 ) charge - transfer excitation energies are generally improved with hybrid functionals and also with range separated functionals that separate the exchange portion of the dft functional into long- and short - range contributions . neither cis nor tddft ( with present - day functionals ) properly includes the effects of dispersion but promising results have been obtained with an empirical correction to standard dft functionals , and there are continued efforts to include dispersion directly in the exchange correlation functional . both the cis and tddft(26 ) single reference methods lack double excitations and are unable to model conical intersections or excitations in molecules that have multireference character . in spite of these limitations , the cis and tddft methods can be generally expected to reproduce trends for one - electron valence excitations , which are a majority of the transitions of photochemical interest . tddft using hybrid density functionals , which include some percentage of hf exact exchange , has been particularly successful in modeling the optical absorption of large molecules . furthermore , the development of new dft functionals and methods is an avid area of research , with many new functionals introduced each year . thus it is a virtual certainty that the quality of the results available from tddft will continue to increase . a summary of the accuracy currently available for vertical excitation energies is available in a recent study by jacquemin et al . which compares tddft results using 29 functionals for 500 molecules.(29 ) although cis and tddft are the most tractable methods for excited states of large molecules , their computational cost still prevents application to many systems of photochemical interest . thus , there is considerable interest in extending the capabilities of cis / tddft to even larger molecules , beyond hundreds of atoms . 
quantum mechanics / molecular mechanics ( qm / mm ) schemes provide a way to model the environment of a photophysically interesting molecule by treating the molecule with qm and the surrounding environment with mm force fields . however , it is difficult to know when the mm approximations break down and when a fully qm approach is necessary . with fast , large - scale cis / tddft calculations , all residues of a photoactive protein could be treated quantum mechanically to explore the origin of spectral tuning , for example . explicit effects of solvent chromophore interactions , including hydrogen bonding , charge transfer , and polarization , could be fully included at the ab initio level in order to model solvatochromic shifts . one potential route to large scale cis and tddft calculations is through exploitation of the stream processing architectures(35 ) now widely available in the form of graphical processing units ( gpus ) . the introduction of the compute unified device architecture(36 ) ( cuda ) as an extension to the c language has greatly simplified gpu programming , making it more easily accessible for scientific programming . gpus have already been applied to achieve speed - ups of orders of magnitude in ground - state electronic structure , ab initio molecular dynamics(41 ) and empirical force field - based molecular dynamics calculations . in this article we extend our implementation of gpu quantum chemistry in the newly developed terachem program(46 ) beyond our previous two - electron integral evaluation(47 ) and ground - state self - consistent field , geometry optimization , and dynamics calculations(41 ) to also include the calculation of excited electronic states . we use gpus to accelerate both the matrix multiplications within the cis / tddft procedure and also the integral evaluation ( these steps comprise most of the effort in the calculation ) . the computational efficiency that arises from the use of redesigned quantum chemistry algorithms on gpu hardware to evaluate electron repulsion integrals ( eris ) allows full qm treatment of the excited states of very large systems : both large chromophores and chromophores in which the environment plays a critical role and should be treated with qm . here we implement cis and tddft within the tamm dancoff approximation using gpus to drastically speed up the bottleneck two - electron integral evaluation , density functional quadrature , and matrix multiplication steps . this results in cis calculations over 400 times faster than those achieved running on a comparable cpu platform . the linear response formalism of tdhf and tddft has been thoroughly presented in review articles . only the equations relevant for this work are presented here , and real orbitals are assumed throughout . the tdhf / tddft working equation for determining the excitation energies and corresponding x and y transition amplitudes is

$$\begin{pmatrix} \mathbf{A} & \mathbf{B} \\ \mathbf{B} & \mathbf{A} \end{pmatrix} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix} = \omega \begin{pmatrix} \mathbf{1} & \mathbf{0} \\ \mathbf{0} & -\mathbf{1} \end{pmatrix} \begin{pmatrix} \mathbf{X} \\ \mathbf{Y} \end{pmatrix}$$

where for tdhf ( neglecting spin indices for simplicity )

$$A_{ia,jb} = \delta_{ij}\delta_{ab}(\epsilon_a - \epsilon_i) + (ia|jb) - (ij|ab) , \qquad B_{ia,jb} = (ia|jb) - (ib|ja)$$

and for tddft

$$A_{ia,jb} = \delta_{ij}\delta_{ab}(\epsilon_a - \epsilon_i) + (ia|jb) + (ia|f_{xc}|jb) , \qquad B_{ia,jb} = (ia|jb) + (ia|f_{xc}|jb) .$$

the two electron integrals ( eris ) are defined as

$$(pq|rs) = \iint \phi_p(\mathbf{r}_1)\phi_q(\mathbf{r}_1)\,\frac{1}{|\mathbf{r}_1-\mathbf{r}_2|}\,\phi_r(\mathbf{r}_2)\phi_s(\mathbf{r}_2)\,d\mathbf{r}_1\,d\mathbf{r}_2$$

and within the adiabatic approximation of density functional theory , in which the explicit time dependence of the exchange correlation functional is neglected ,

$$(ia|f_{xc}|jb) = \iint \phi_i(\mathbf{r}_1)\phi_a(\mathbf{r}_1)\,\frac{\delta^2 E_{xc}}{\delta\rho(\mathbf{r}_1)\,\delta\rho(\mathbf{r}_2)}\,\phi_j(\mathbf{r}_2)\phi_b(\mathbf{r}_2)\,d\mathbf{r}_1\,d\mathbf{r}_2 .$$

the i , j and a , b indices represent occupied and virtual molecular orbitals ( mos ) , respectively , in the hf / kohn sham ( ks ) ground - state determinant .
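for orientation , the following sketch builds and diagonalizes the cis / tda matrix explicitly for a toy problem , following the definitions above . it is a pedagogical numpy illustration rather than the terachem implementation ; the orbital energies and mo - basis integrals are assumed to be available from a preceding ground - state calculation .

```python
import numpy as np

def cis_excitations(eps_occ, eps_vir, eri_mo):
    """Build the CIS (TDA applied to TDHF) matrix
        A[ia,jb] = d_ij d_ab (e_a - e_i) + (ia|jb) - (ij|ab)
    and return its eigenvalues (excitation energies) and eigenvectors.
    eri_mo[p,q,r,s] holds MO-basis integrals (pq|rs) in Mulliken notation,
    with occupied orbitals first and virtual orbitals after them."""
    no, nv = len(eps_occ), len(eps_vir)
    A = np.zeros((no * nv, no * nv))
    for i in range(no):
        for a in range(nv):
            for j in range(no):
                for b in range(nv):
                    val = eri_mo[i, no + a, j, no + b] - eri_mo[i, j, no + a, no + b]
                    if i == j and a == b:
                        val += eps_vir[a] - eps_occ[i]
                    A[i * nv + a, j * nv + b] = val
    omega, X = np.linalg.eigh(A)   # A is symmetric for real orbitals
    return omega, X

# usage (hypothetical inputs): omega, X = cis_excitations(eps_occ, eps_vir, eri_mo)
# the explicit O((n_occ*n_vir)^2) build is only feasible for very small systems,
# which is why large molecules require the iterative, integral-direct approach below.
```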
setting the b matrix to zero within tdhf results in the cis equation , while in tddft this same neglect yields the equation known as the tamm dancoff approximation ( tda ) ,

$$\mathbf{A}\mathbf{X} = \omega\mathbf{X} .$$

in part because dft virtual orbitals provide a better starting approximation to the excited state than hf orbitals , the tda generally gives results that are very close to the full linear response tddft results for nonhybrid dft functionals at equilibrium geometries . furthermore , previous work has shown that a large contribution from the b matrix in tddft ( and to a lesser extent also in tdhf ) is often indicative of a poor description of the ground state , either due to singlet triplet instabilities or multireference character.(52 ) casida and co - workers have examined the breakdown of tddft in modeling photochemical pathways(52 ) and have come to the conclusion that the tda gives better results than does conventional tddft when it comes to excited - state potential energy surfaces in situations where bond breaking occurs . thus , if there is substantial deviation between the full tddft and tda - tddft excitation energies , then the tda results will often be more accurate . a standard iterative davidson algorithm(53 ) has been implemented to solve the cis / tda - tddft equations . as each ax matrix vector product is formed , the required two - electron integrals are calculated over primitive basis functions within the atomic orbital ( ao ) basis directly on the gpu . within cis , the ax matrix vector product is calculated as

$$T_{\lambda\sigma} = \sum_{jb} C_{\lambda j} X_{jb} C_{\sigma b} \qquad (10)$$

$$F_{\mu\nu} = \sum_{\lambda\sigma} \left[ (\mu\nu|\lambda\sigma) - (\mu\lambda|\nu\sigma) \right] T_{\lambda\sigma} \qquad (11)$$

$$(\mathbf{A}\mathbf{X})_{ia} = (\epsilon_a - \epsilon_i) X_{ia} + \sum_{\mu\nu} C_{\mu i} F_{\mu\nu} C_{\nu a} \qquad (12)$$

here greek indices represent ao basis functions , c is the matrix of ground - state mo coefficients of the hf / ks determinant , and t is a nonsymmetric transition density matrix . for very small matrices , there is no time savings with gpu computation of the matrix multiplication steps in eqs 10 and 12 . for matrices of dimension less than 300 the linear algebra is therefore performed on the cpu , while for larger matrices it is performed on the gpu using calls to the nvidia cublas library , a cuda implementation of the blas library.(54 ) in quantum chemistry the ao basis functions are generally a linear combination of primitive atom - centered gaussian basis functions . for a linear combination of m primitive basis functions centered at a nucleus , the contracted ao basis function is

$$\phi_{\mu}(\mathbf{r}) = \sum_{p=1}^{m} c_{p}\,\chi_{p}(\mathbf{r}) .$$

thus the two - electron integrals in the contracted ao basis that need to be evaluated for eq 11 above are given by

$$(\mu\nu|\lambda\sigma) = \sum_{pqrs} c_{p} c_{q} c_{r} c_{s}\,[pq|rs] ,$$

where parentheses indicate integrals over contracted basis functions and square brackets indicate integrals over primitive functions . while transfer of matrix multiplication from the cpu to the gpu provides some speedup , the gpu acceleration of the computation of the eris delivers a much more significant reduction in computer time . details of our gpu algorithms for two - electron integrals in the j and k matrices ( coulomb and exchange operators , respectively ) have been previously published , so we only briefly highlight information relevant to our excited - state implementation , which uses these algorithms . the gpu evaluates the j and k matrices over primitives , and these are contracted on the cpu . initially pairs of primitive atomic orbital functions are combined using the gaussian product rule into a set of bra- and ket- pairs . a prescreening threshold is used to remove negligible pairs , and the remaining pairs are sorted by angular momentum class and by their [ bra| or |ket ] contribution to the total [ bra|ket ] schwarz bounds , respectively.(55 ) all data needed to calculate the [ bra|ket ] integrals ( e.g.
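a compact numpy sketch of the ao - direct matrix vector product of eqs 10 - 12 is given below . it is illustrative only ( dense integrals , no screening , no gpu code ) ; in practice the ao integrals are never stored but generated on the fly on the gpu .

```python
import numpy as np

def cis_sigma_ao(X, C_occ, C_vir, eps_occ, eps_vir, eri_ao):
    """One CIS matrix-vector product sigma = A @ x built in the AO basis:
      (eq 10)  T[l,s]    = sum_jb C_occ[l,j] X[j,b] C_vir[s,b]
      (eq 11)  F[m,n]    = sum_ls [ (mn|ls) - (ml|ns) ] T[l,s]
      (eq 12)  sigma[i,a] = (e_a - e_i) X[i,a] + sum_mn C_occ[m,i] F[m,n] C_vir[n,a]
    eri_ao[m,n,l,s] holds AO integrals (mn|ls); all arrays are hypothetical inputs."""
    T = C_occ @ X @ C_vir.T                            # AO transition density (eq 10)
    J = np.einsum('mnls,ls->mn', eri_ao, T)            # Coulomb-like contribution
    K = np.einsum('mlns,ls->mn', eri_ao, T)            # exchange-like contribution
    F = J - K                                          # eq 11
    sigma = (eps_vir[None, :] - eps_occ[:, None]) * X  # orbital-energy term
    sigma += C_occ.T @ F @ C_vir                       # back-transform to MO basis (eq 12)
    return sigma

# a Davidson solver only ever needs this product, so the full A matrix is never formed.
```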
, exponents , contraction coefficients , atomic coordinates , etc . ) are then distributed among the gpus . the coulomb j matrix and exchange k matrix are calculated separately , with both algorithms designed to minimize interthread communication by ensuring that each gpu has all necessary data for its share of integrals . the [ bra| and |ket ] pairs are processed in order of decreasing bound , and execution is terminated once the combined [ bra|ket ] bound falls below a predetermined threshold . because the ground - state density matrix is symmetric , both the ground - state j and k matrices are also symmetric , and thus only the upper triangle of each needs to be computed . the coulomb j matrix elements are given by

$$J_{\mu\nu} = \sum_{\lambda\sigma} P_{\lambda\sigma}\,(\mu\nu|\lambda\sigma) . \qquad (15)$$

within our j matrix algorithm , one gpu thread evaluates one primitive two - electron integral using the hermite gaussian formulation as in the mcmurchie davidson algorithm , which then must be contracted into the final j matrix element as given in eq 15 . j matrix computation uses the $\mu \leftrightarrow \nu$ and $\lambda \leftrightarrow \sigma$ symmetry and eliminates duplicates within the bra and ket primitive hermite product lists , reducing the number of integrals calculated from $O(N^4)$ to $O(N^4/4)$ . a different gpu subroutine ( or gpu kernel ) is called for each angular momentum class , leading to nine gpu kernel calls for all s- and p- combinations : [ ss|ss ] , [ ss|sp ] , [ ss|pp ] , [ sp|ss ] , [ sp|sp ] , [ sp|pp ] , [ pp|ss ] , [ pp|sp ] , and [ pp|pp ] . many integrals are calculated twice because [ bra|ket ] $\leftrightarrow$ [ ket|bra ] symmetry is not taken into account . this is intentional : it is often faster to carry out more ( but simpler ) computations on the gpu ( compared to an algorithm that minimizes the number of floating point operations ) in order to avoid bookkeeping overhead and/or memory access bottlenecks . this may be viewed as a continuation of a trend that began already on cpus and has been discussed in that context previously.(58 ) the maximum density matrix element of all angular momentum components weights the ket contribution to the schwarz upper bound . this allows the j matrix algorithm to take complete advantage of sparsity in the density matrix , since there is a one - to - one mapping between ket pairs and density matrix elements . also , because density matrix elements are packed together with the j matrix ket integral data , its memory access pattern is contiguous , i.e. , neighboring threads access neighboring memory addresses . in general , the exchange k matrix elements are given by

$$K_{\mu\nu} = \sum_{\lambda\sigma} P_{\lambda\sigma}\,(\mu\lambda|\nu\sigma) .$$

within our k matrix algorithm , one block of gpu threads evaluates one k matrix element and thereby avoids any communication with other thread blocks . because integrals that differ only by a swap of indices within the [ bra| or |ket ] pair are paired with different density matrix elements , the k matrix algorithm does not take into account the $\mu \leftrightarrow \nu$ and $\lambda \leftrightarrow \sigma$ symmetry . on the other hand , [ bra|ket ] $\leftrightarrow$ [ ket|bra ] symmetry is used , leading to $O(N^4/2)$ integrals computed to form the final k matrix . in addition to having to compute more integrals than is required for the j matrix computation , the k matrix computation is slowed relative to j matrix computation by two additional issues . the first is that unlike the j matrix gpu implementation , the k matrix algorithm can not map the density matrix elements onto the ket integral data , since the density index now spans both bra and ket indices .
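the schwarz - bound screening and density weighting described above can be summarized in a few lines of cpu - side python ; this is purely illustrative of the skip logic , not the gpu kernels themselves , and the numerical threshold is an arbitrary example value .

```python
import numpy as np

def build_j_with_screening(eri, P, thresh=1e-11):
    """Coulomb matrix J[m,n] = sum_ls P[l,s] (mn|ls), skipping contributions whose
    Schwarz bound times the density element falls below `thresh`.
    Q[m,n] = sqrt((mn|mn)) gives the Cauchy-Schwarz bound |(mn|ls)| <= Q[m,n]*Q[l,s].
    The dense `eri` array is for illustration only; production codes never store it."""
    n = P.shape[0]
    Q = np.sqrt(np.abs(np.einsum('mnmn->mn', eri)))   # diagonal (mn|mn) elements
    J = np.zeros_like(P)
    for m in range(n):
        for nu in range(m + 1):                       # mu <-> nu symmetry: upper triangle only
            acc = 0.0
            for l in range(n):
                for s in range(n):
                    if Q[m, nu] * Q[l, s] * abs(P[l, s]) < thresh:
                        continue                      # screened out, integral never evaluated
                    acc += P[l, s] * eri[m, nu, l, s]
            J[m, nu] = J[nu, m] = acc
    return J
```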
the second issue facing k matrix computation is that because the sparsity of the density can not be included in the presorting of ket pairs , the sorted integral bounds can not be guaranteed to be strictly decreasing , and a more stringent cutoff threshold ( still based on the product of the density matrix element and the schwarz upper bound ) must be applied for k kernels , meaning that k computation does not take as much advantage of density matrix sparsity as j computation . as a result of these drawbacks , the exchange matrix takes longer to calculate than its coulomb counterpart . based solely on the number of integrals required , the k / j timing ratio for ground - state scf calculations should be 2 . in practice , with the memory access and the thresholding issues , values of 35 are more common . in our excited - state calculations , we use the same j and k matrix gpu algorithms , adjusted for the fact that the nonsymmetric transition density matrix t replaces the symmetric ground - state density matrix p. the portion of the f matrix from the product of t with the first integral in eq 11 is computed with the j matrix algorithm . the portion of the f matrix from the product of t with the second integral in eq 11 is computed with the k matrix algorithm . while the j matrix remains symmetric we must thus calculate both the upper and lower triangle contributions for the cis / tddft k matrix , resulting in two calls to the k matrix algorithm and computation of up to o(n ) integrals . in addition to an increased number of integrals in the excited state , the k / j timing discrepancy ( comparing cis / tddft to ground - state scf calculations ) is also increased due to the sparseness of the transition density compared to the ground - state density . evaluation of the exchange correlation functional contribution from eq 7 needed for tddft excited states(7 ) is performed using numerical quadrature on a three - dimensional grid , which maps efficiently onto massively parallel architectures , such as the gpu . this was recently demonstrated for ground - state dft , for both gpu and related(59 ) architectures . the expensive steps are evaluating the electron density / gradient at the grid quadrature points to numerically determine the necessary functional derivatives and summing the values on the grid to assemble the matrix elements of eq 7 . we use a becke - type quadrature scheme(60 ) with lebedev angular(61 ) and euler , we generate the second functional derivative of the exchange correlation functional only once , saving its value at each quadrature point in memory . then , for each davidson iteration , the appropriate integrals are evaluated , paired with the saved functional derivative values , and summed into matrix elements . we evaluate the performance of our gpu - based cis / tddft algorithm on a variety of test systems : 6,6-bis(2-(1-triphenyl)-4-phenylquinoline ( b3ppq ) , an oligoquinoline recently synthesized and characterized by the jenekhe group for use in oled devices(63 ) and characterized theoretically by tao and tretiak;(64 ) four generations of oligothiophene dendrimers that are being studied for their interesting photophysical properties ; the entire photoactive yellow protein ( pyp)(68 ) solvated by tip3p(69 ) water molecules ; and deprotonated trans - thiophenyl - p - coumarate , an analogue of the pyp chromophore(70 ) that takes into account the covalent cysteine linkage , solvated with an increasing number of qm waters . 
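the quadrature step described above amounts to a weighted contraction over grid points . the numpy sketch below shows the structure of that contraction for an lda - like kernel ; the basis values , weights , and stored second functional derivative are hypothetical inputs , and gradient - dependent gga terms are omitted for brevity .

```python
import numpy as np

def xc_kernel_contribution(phi, w, fxc, T):
    """AO-basis matrix elements of the adiabatic XC kernel contracted with a
    transition density T:
        rho_t(g)  = sum_ls phi[g,l] phi[g,s] T[l,s]
        F_xc[m,n] = sum_g w[g] fxc[g] rho_t(g) phi[g,m] phi[g,n]
    phi: (n_grid, n_ao) basis-function values on the grid,
    w:   (n_grid,) quadrature weights,
    fxc: (n_grid,) second derivative of E_xc w.r.t. the density at each point."""
    rho_t = np.einsum('gl,gs,ls->g', phi, phi, T)   # transition density on the grid
    weighted = w * fxc * rho_t                      # per-point kernel factor
    return np.einsum('g,gm,gn->mn', weighted, phi, phi)

# because fxc depends only on the ground-state density, it is evaluated once,
# stored on the grid, and re-used in every Davidson iteration, as described above.
```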
we use the 6 - 31 g basis set for all computations , since we do not yet have gpu integral routines implemented for d - functions . this limits the quality of the excited - state energies , as polarization functions can give improved accuracy relative to experimental values and are often necessary for metals and hypervalent atoms , such as sulfur and phosphorus . benchmark structures are shown in figures 1 and 2 along with the number of atoms and basis functions for a 6 - 31 g basis set . for the solvated pyp chromophore , only three structures are shown in figure 2 , but benchmark calculations are presented for 15 systems with increasing solvation , starting from the chromophore in vacuum and adding water molecules up to a 16 solvation shell , which corresponds to 900 water molecules . structures , number of atoms , and basis functions ( fns ) using the 6 - 31 g basis set for four generations of oligothiophene dendrimers , s1s4 . structures , number of atoms , and basis functions ( fns ) for the 6 - 31 g basis for benchmark systems photoactive yellow protein ( pyp ) , the solvated pyp chromophore , and oligoquinoline b3ppq . for pyp , carbon , nitrogen , oxygen , and sulfur atoms are green , blue , red , and yellow , respectively . for the other molecules , atom coloration is as given in figure 1 , with additional red and blue coloration for oxygen and nitrogen atoms , respectively . for our benchmark tddft calculations , we use the generalized gradient approximation with becke s exchange functional(71 ) combined with the lee , yang , and parr correlation functional(72 ) ( blyp ) , as well as the hybrid b3lyp functional . during the scf procedure for the ground - state wave function , we use two different dft grids . a sparse grid of 1000 grid points / atom is used to converge the wave function until the diis error reaches a value of 0.01 , followed by a more dense grid of 3000 grid points / atom until the ground - state wave function is fully converged . this denser grid is also used for the excited - state tddft timings reported herein , unless otherwise noted . an integral screening threshold value of 1 10 atomic units is used by default unless otherwise noted . within terachem , this means that coulomb integrals with products of the density element and schwarz bound below the integral screening threshold are not computed , and exchange integrals with products of the density element and schwarz bound below the threshold value times a guard factor of 0.001 are not computed . the initial n pair quantities list is also pruned , with a default pruning value of 10 for removing pairs from integral computation . the pair quantity pruning value is set to the smaller of 10 and 0.01 the integral screening threshold . the timings reported herein were obtained on a desktop workstation using dual quad - core intel xeon x5570 cpus , 72 gb ram , and 8 tesla c1060 gpus . all cpu operations are performed in full double precision arithmetic , including one - electron integral evaluation , integral postprocessing and contraction , and diagonalization of the subspace matrix of a. to minimize numerical error , integral accumulation also uses double precision . calculations carried out on the gpu ( coulomb and exchange operator construction and dft quadrature ) use mixed precision unless otherwise noted . the mixed precision integral evaluation is a hybrid of 32- and 64-bit arithmetic . 
in this case , integrals with schwarz bounds larger than 0.001 au are computed in full double precision , and all others are computed in single precision . the potential advantages of mixed precision arithmetic in quantum chemistry have been discussed in the context of gpu architectures by several groups and stem in part from the fact that there are often fewer double precision floating point units on a gpu than single precision floating point units . to study the effects of using single precision on excited - state calculations , we have run the same cis calculations using both single and double precision integral evaluation for many of our benchmark systems . in general we find that mixed ( and often even single ) precision arithmetic on the gpu is more than adequate for cis / tddft . in most cases we find that the convergence behavior is nearly identical for single and double precision until the residual vector is quite small . figure 3 shows the typical single and double precision convergence behavior as represented by the cis residual vector norm convergence for b3ppq , the first and third generations of oligothiophene dendrimers s1 and s3 , and a snapshot of the pyp chromophore surrounded by 14 waters . the convergence criterion of the residual norm , which is 10 au , is shown with a straight black line . note that for the examples in figure 3 , we are not using mixed precision all two - electron integrals on the gpu are done in single precision ( with double precision accumulation as described previously).(39 ) this is therefore an extreme example ( other calculations detailed in this paper used mixed precision where large integrals and quadrature contributions are calculated in double precision ) and serves to show that cis and tddft are generally quite robust , irrespective of the precision used in the calculation . nevertheless , a few problematic cases have been found in which single precision integral evaluation is not adequate and where double precision is needed to achieve convergence.(75 ) during the course of hundreds of cis calculations performed on snapshots of the dynamics of the pyp chromophore solvated by various numbers of water molecules , a small number ( < 1% ) of cases yield ill - conditioned davidson convergence when single precision is used for the gpu - computed eris and quadrature contributions . for illustration , the single and double precision convergence behavior for one of these rare cases , here the pyp chromophore with 94 waters , is shown in figure 3 . in practice , this is not a problem since one can always switch to double precision , and this can be done automatically when convergence problems are detected . recent work in our group(76 ) shows a speedup of 24 times for an rhf ground - state calculation in going from full double precision to mixed or single precision for our gpu eri algorithms . plot of single and double precision ( sp and dp ) convergence behavior for the first cis/6 - 31 g excited state of five of the benchmark systems . the convergence threshold of 10 ( norm of residual vector ) is indicated with a straight black line . in most cases , convergence behavior is identical for single and double precision integration until very small residual values well below the convergence threshold . one such example is shown here for a snapshot of the pyp chromophore ( pypc ) surrounded by 94 waters . 
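the precision policy described above ( integrals with schwarz bounds above 0.001 au routed to double precision , the rest to single precision , with a fall - back to full double precision in the rare cases where the davidson iterations stall ) can be sketched as follows ; the stall test and the solve driver are placeholders , not terachem code .

```python
import numpy as np

def choose_precision(schwarz_bound, dp_cutoff=1e-3):
    """Route an integral batch to double or single precision by its Schwarz bound
    (0.001 au cutoff as described in the text)."""
    return np.float64 if schwarz_bound >= dp_cutoff else np.float32

def run_with_fallback(solve, history_len=3):
    """Run a hypothetical `solve(precision)` driver in mixed precision first;
    if it reports stalled residual norms, rerun everything in double precision."""
    converged, residuals = solve(precision="mixed")
    stalled = (not converged) and len(residuals) >= history_len and \
              residuals[-1] >= min(residuals[:-1])
    if stalled:
        converged, residuals = solve(precision="double")   # rare (<1%) fall-back path
    return converged, residuals
```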
timings and cis excitation energies ( from the ground - state s0 to the lowest singlet excited state s1 ) for some of the test systems are given in table 1 and compared to the gamess quantum chemistry package version 12 jan 2009 ( r3 ) . the gamess timings are obtained using the same intel xeon x5570 eight - core machine as for the gpu calculations ( where gamess is running in parallel over all eight cores ) . we compare to gamess because it is a freely available and mature quantum chemistry code and provides a reasonable benchmark of the expected speed of the algorithms on a cpu . gamess may not represent the absolute best performance that can be achieved using the implemented algorithms on a cpu.(40 ) coordinates of all the geometries used in the tests are provided in the supporting information , so the interested reader can determine timings for other codes and architectures if further comparisons are desired . unfortunately , it is not possible to compare our own code against itself , running on the cpu or gpu , since there does not presently exist a compiler that can generate a cpu executable from cuda code . calculations were performed on a dual intel xeon x5570 ( 8 cpu cores ) with 72 gb ram , and gpu calculations use 8 tesla c1060 gpu cards . comparing the values for the cis first excited - state energy ( e s0/s1 ) given in table 1 , we find that the numerical accuracy of the excitation energies for mixed precision gpu integral evaluation is excellent for all systems studied . the largest discrepancy in the reported excitation energies between gamess and our gpu implementation in terachem is less than 0.00004 ev . we also report the cis times and speedups for gamess and gpu - accelerated cis in terachem ( note that the times reported refer to the entire cis calculation from the completion of the ground - state scf to the end of program execution ) . since cis is necessarily preceded by a ground - state scf calculation , we also report the scf speedups to give a complete picture . we leave out the absolute scf times , since the efficiency of the gpu - based scf algorithm has been discussed for other test molecules previously . we find that a large increase in performance is obtained using the gpu for both ground- and excited - state methods . the speedups increase as system size increases , with scf speedups outperforming cis speedups . the largest system compared with gamess is the 29 atom chromophore of pyp surrounded by 487 qm water molecules . some possible reasons for the differing speedups in ground- and excited - state calculations are discussed below . in the supporting information , we also include a table giving the absolute terachem scf and cis times for four of the test systems , along with the corresponding scf and cis energies , for both mixed and double precision computation and for three different integral screening threshold values . while the timings increase considerably in switching from mixed precision to double precision and in tightening the integral screening thresholds , the cis excitation energies remain nearly identical , suggesting that the cis algorithm is quite robust with respect to thresholding . the dominant computational parts of building the cis / tddft ax vector can be divided into coulomb j matrix , exchange k matrix , and dft quadrature contributions .
figure 4 plots the cpu + gpu time consumed by each of these three contributions ( both cpu and gpu times are included here , although the cpu time is a very small fraction of the total ) , in which the j and k timings are taken from an average of the 10 initial guess ax builds for a cis calculation , and the dft timings are from an average of the initial guess ax builds for a td - blyp calculation . the initial guess transition densities are very sparse , and thus this test highlights the differing efficiency of screening and thresholding in these three contributions . the j timings for cis and blyp are similar , and only those for cis are reported . power law fits are shown as solid lines and demonstrate near - linear scaling behavior of all three contributions to the ax build . the j matrix and dft quadrature steps are closest to linear scaling in the number of basis functions n , while the k matrix contribution scales with a somewhat higher power of n because it is least able to exploit the sparsity of the transition density matrix . these empirical scaling data demonstrate that with proper sorting and integral screening , the ax build in cis and tddft scales much better than quadratic , with no loss of accuracy in excitation energies . contributions to the time for building an initial ax vector in cis and td - blyp . ten initial x vectors are created based on the mo energy gap , and the timing reported is the average time for building ax for those 10 vectors . the timings are obtained on a dual intel xeon x5570 platform with 72 gb ram using 8 tesla c1060 gpus . data ( symbols ) are fit to a power law ( solid line , fitting parameters in inset ) . fewer points are included for the td - blyp timings because the scf procedure does not converge for the solvated pyp chromophore with a large number of waters or for the full pyp protein . of the three integral contributions ( j , k , and dft quadrature ) , the computation of the k matrix is clearly the bottleneck . this is due to the three issues with exchange computation previously discussed : ( 1 ) the j matrix takes full advantage of density sparsity because of efficient density screening that is not possible for our k matrix implementation , ( 2 ) exchange kernels access the density in memory noncontiguously , and ( 3 ) exchange requires the evaluation of 4 times more integrals than j , both because it lacks an index - pair symmetry that the j build can exploit and because it needs to be called twice to account for the nonsymmetric excited - state transition density matrix . it is useful to compare the time required to calculate the k matrix contribution in the first ground - state scf iteration ( which is the most expensive iteration due to the use of fock matrix updating(77 ) ) with that in the ax vector build for cis ( or td - b3lyp ) . we find that for the systems studied herein the k matrix contribution is on average almost 2 times faster in cis compared to the first iteration of the ground - state scf . one might have expected the excited - state computation to be 2 times slower because of the two k matrix calls , but the algorithm efficiently exploits the greater sparsity of the transition density matrix ( compared to the ground - state density matrix ) .
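the empirical scaling exponents quoted above can be obtained from the timing data by a simple log - log fit . the python sketch below shows one way to do this ; the timing numbers are made up purely for illustration and do not correspond to the measured j , k , and dft quadrature times .

```python
import numpy as np

def fit_power_law(n_basis, seconds):
    """Fit t = a * N**b by linear regression in log-log space; b is the
    empirical scaling exponent with the number of basis functions N."""
    logn = np.log(np.asarray(n_basis, dtype=float))
    logt = np.log(np.asarray(seconds, dtype=float))
    b, log_a = np.polyfit(logn, logt, 1)
    return float(np.exp(log_a)), float(b)

# illustrative (invented) Ax-build timings in seconds
n_basis = [1000, 2000, 4000, 8000, 16000]
timings = {
    "J":   [0.4, 0.9, 2.0, 4.3, 9.5],
    "K":   [0.8, 2.3, 6.9, 20.0, 58.0],
    "DFT": [1.0, 2.2, 4.8, 10.5, 23.0],
}
for label, t in timings.items():
    a, b = fit_power_law(n_basis, t)
    print(f"{label}: t ~ {a:.2e} * N^{b:.2f}")
```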
due to efficient prescreening of the density and integral contributions to the schwarz bound before the gpu coulomb kernels are launched , the j matrix computation also exploits the greater sparseness of the transition density and therefore is 3.5 times faster than the ground - state first iteration j matrix computation . since the j matrix computation profits more from transition density sparsity than the k matrix computation , the current implementation of the j matrix computation scales better with system size than the implementation of the k matrix computation ( a lower observed power of n for the excited - state benchmarks presented here ) . as can be seen in figure 4,(78 ) the dft integration usually takes more time than the j matrix contribution . this is because of the larger prefactor for dft integration , which is related to the density of the quadrature grids used . it has previously been noted(79 ) that very sparse grids can be more than adequate for tddft . we further support this claim with the data presented in table 2 , where we compare the lowest excitation energies and the average td - blyp integration times for the initial guess vectors for six different grids on two of the test systems . for both molecules , the excitation energies from the sparsest grid agree well with those of the denser grids but with a substantial reduction in integration time , suggesting that a change to an ultra sparse grid for the tddft portion of the calculation could result in considerable time savings with little to no loss of accuracy . the small ( < 0.0002 ev ) differences in excitation energies between our gpu - based td - blyp and the cpu - based nwchem are likely due to slightly differing ground - state densities , which differ in energy by 7 microhartrees for the chromophore and 1.9 millihartrees for the s2 dendrimer . td - blyp timings are reported as the average time for the dft quadrature in one ax build for the initial 10 ax vectors . for comparison , the number of points / atom refers to the pruned grid for terachem and the unpruned grid for nwchem ; nwchem was run on a different architecture , so timings are not comparable . while successive ground - state scf iterations take less computation time than the first ( because of the use of fock matrix updating ) , all iterations in the excited - state calculations take roughly the same amount of time . this is the dominant reason for the discrepancy in the speedups for ground - state scf and excited - state cis shown in table 1 . an additional reason that the scf speedup is greater than the cis speedup is decreased parallel efficiency in the excited - state calculation , because the transition density is sparser than the ground - state density and therefore offers less work to distribute ( all of the reported calculations are running on eight gpu cards in parallel ) . gpu - accelerated cis and tddft computation provides access to the excited states of much larger compounds than could previously be studied with ab initio methods . for the well - behaved valence transitions in the pyp systems , cis convergence requires very few davidson iterations . the total wall time ( scf + cis ) required to calculate the first cis/6 - 31 g excited state of the entire pyp protein ( 10869 basis functions ) is less than 6 h , with 4.7 h devoted to the scf procedure and 1.2 h to the cis procedure . we can thus treat the protein with full qm and study how mutations within pyp will affect the absorbance . for any meaningful comparison with the experimental absorption energy of pyp at 2.78 ev,(70 ) many geometrical configurations need to be taken into account .
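the fock matrix updating mentioned above works because the two - electron contribution is linear in the density , so after the first iteration only the ( small and increasingly screenable ) difference density needs to be contracted with the integrals . the python sketch below checks that identity on a toy in - memory integral tensor , which is only an illustrative stand - in for the gpu integral kernels .

```python
import numpy as np

def two_electron_contribution(D, eri):
    """Toy closed-shell J - K/2 contraction of a full ERI tensor (chemists'
    notation eri[p, q, r, s] = (pq|rs)) with a density matrix D."""
    J = np.einsum("pqrs,rs->pq", eri, D)
    K = np.einsum("prqs,rs->pq", eri, D)
    return J - 0.5 * K

rng = np.random.default_rng(2)
n = 6
eri = rng.random((n, n, n, n))                 # fake integrals, just for the identity check
D_old = rng.random((n, n)); D_old = 0.5 * (D_old + D_old.T)
D_new = rng.random((n, n)); D_new = 0.5 * (D_new + D_new.T)

full_build = two_electron_contribution(D_new, eri)
incremental = two_electron_contribution(D_old, eri) + \
              two_electron_contribution(D_new - D_old, eri)
print(np.allclose(full_build, incremental))    # True: the incremental update is exact
```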
for this single configuration , the cis excitation energy of 3.69 ev is much higher than the experimental value , as expected with cis . the td - b3lyp bright state ( s5 ) is closer to the experimental value but still too high at 3.33 ev . solvatochromic studies in explicit water are problematic for standard dft methods , including hybrid functionals , due to the well - known difficulty in treating charge - transfer excitations . in calculating the timings for the first excited state of the pyp chromophore with increasing numbers of waters , we found that the energy of the cis first excited state quickly leveled off and stabilized , while that for td - blyp and td - b3lyp generally decreased to unphysical values , at which point the ground - state scf convergence was also problematic . this behavior of the first excitation energies for the pyp chromophore with increasing numbers of waters is shown in figure 5 for cis , td - blyp , and td - b3lyp . while the 20% hf exchange in the hybrid td - b3lyp method does improve the excitation energies over td - blyp , the energies are clearly incorrect for both methods , and a higher level of theory or a range - separated hybrid functional is certainly necessary for studying excitations involving explicit qm waters . the first excitation energy ( ev ) of the pyp chromophore with increasing numbers of surrounding water molecules . both td - blyp and td - b3lyp exhibit spurious low - lying charge - transfer states . the recent theoretical work by badaeva et al . examining the one and two photon absorbance of oligothiophene dendrimers was limited to results for the first three generations s1s3 , even though experimental results were available for s4 . in table 3 , we compare our gpu accelerated results on the first bright excited state ( oscillator strength > 1.0 ) using td - b3lyp within the tda to the full td - b3lyp and experimental results . results within the tda are comparable to those from full td - b3lyp , for both energies and transition dipole moments . our results for s4 show the continuing trend of decreasing excitation energy and increasing transition dipole moment with increasing dendrimer generation . we have implemented ab initio cis and tddft calculations within the terachem software package , designed from inception for execution on gpus . the numerical accuracy of the excitation energies is shown to be excellent using mixed precision integral evaluation . the ability to use lower precision in much of the cis and tddft calculation is reminiscent of the ability to use coarse grids when calculating correlation energies , as shown previously for pseudospectral methods . recently , it has also been shown(86 ) that single precision can be adequate for computing correlation energies with cholesky decomposition methods which are closely related to pseudospectral methods.(87 ) both quadrature and precision errors generally behave as relative errors , while chemical accuracy is an absolute standard ( often taken to be 1 kcal / mol ) . thus , coarser grids and/or lower precision can be safely used when the quantity being evaluated is itself small ( and therefore less relative accuracy is required ) , as is the case for correlation and/or excitation energies . for some of the smaller benchmark systems , we present speedups as compared to the gamess quantum chemistry package running over eight processor cores . the speedups obtained for cis calculations range from 9 to 461 times , with increasing speedups with increasing system size . 
these speedup figures are not necessarily normative ( other quantum chemistry packages might be more efficient ) , but we feel they give a good sense of the degree to which redesign of quantum algorithms for gpus may be useful . the increased size of the molecules that can be treated using our gpu - based algorithms exposes some failings of dft and tddft . specifically , the charge - transfer problem(16 ) of tddft and the delocalization problem(88 ) of dft both seem to become more severe as the molecules become larger , especially for the case of hydrated chromophores with large numbers of surrounding quantum mechanical water molecules . it remains to be seen whether range - separated hybrid functionals can solve these problems for large molecules , and we are currently working to implement these .
excited - state calculations are implemented in a development version of the gpu - based terachem software package using the configuration interaction singles ( cis ) and adiabatic linear response tamm dancoff time - dependent density functional theory ( tda - tddft ) methods . the speedup of the cis and tddft methods using gpu - based electron repulsion integrals and density functional quadrature integration allows full ab initio excited - state calculations on molecules of unprecedented size . cis/6 - 31 g and td - blyp/6 - 31 g benchmark timings are presented for a range of systems , including four generations of oligothiophene dendrimers , photoactive yellow protein ( pyp ) , and the pyp chromophore solvated with 900 quantum mechanical water molecules . the effects of double and single precision integration are discussed , and mixed precision gpu integration is shown to give extremely good numerical accuracy for both cis and tddft excitation energies ( excitation energies within 0.0005 ev of extended double precision cpu results ) .
Introduction CIS/TDDFT Implementation using GPUs Results and Discussion Conclusions
PMC2808879
the open access publication charges for this paper have been waived by oxford university press .
pdbselect ( http://bioinfo.tg.fh-giessen.de/pdbselect/ ) is a list of representative protein chains with low mutual sequence identity selected from the protein data bank ( pdb ) to enable unbiased statistics . the list increased from 155 chains in 1992 to more than 4500 chains in 2009 . pdbfilter - select is an online service to generate user - defined selections .
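the idea of selecting representative chains with low mutual sequence identity can be illustrated with a toy greedy filter that keeps a chain only if it is sufficiently dissimilar to every chain already kept . the identity measure , the 25% threshold , and the example sequences below are illustrative assumptions and not the actual pdbselect protocol , which relies on proper sequence alignments .

```python
def sequence_identity(a, b):
    """Crude identity: fraction of matching positions over the shorter length
    (real protocols compute identity from sequence alignments)."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a, b)) / n if n else 0.0

def select_representatives(chains, max_identity=0.25):
    """Greedy low-redundancy selection over (chain_id, sequence) pairs."""
    selected = []
    for chain_id, seq in chains:
        if all(sequence_identity(seq, kept) <= max_identity for _, kept in selected):
            selected.append((chain_id, seq))
    return selected

chains = [("1abc_A", "MKTAYIAKQR"), ("2xyz_B", "MKTAYIAKQK"), ("3pqr_C", "GSHMLEDPVA")]
print([cid for cid, _ in select_representatives(chains)])   # keeps 1abc_A and 3pqr_C
```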
FUNDING
PMC2694016
japanese immigration to brazil occurred mainly from 1908 to 1935 , and the large ethnic group living in brazil has prevented crossbreeding . as a result , japanese physical characteristics can be easily noted in japanese descendants residing in brazil . oriental eyelids have distinctive folds and contours that differentiate them from occidental eyelids due to inner anatomical relationships . the oriental eyelid shows a narrower tarsal plate , a higher level of subcutaneous fat , and a higher level of fat posterior to the orbital septum than is found in caucasian examples . the eyelid crease , when present , is located near the upper eyelid border due to the following anatomical features : the orbital septum fuses to the eyelid levator muscle aponeurosis at variable distances below the superior tarsal border ; the thick , protruding preaponeurotic fat pad blocks the attachments of the aponeurosis to the orbicular muscle and skin next to the superior tarsal border ; and the levator aponeurosis attaches to the orbicular muscle and skin next to the upper eyelid border ( doxanas and anderson 1984 ; jeong et al 1999 ) . in addition , no comparison has been made between the eyelid measurements of native japanese and nikkeis living in other countries . there is no information available as to whether habits , climate conditions , or exposure to different environmental factors could induce variation in the eyelid and its contours . this study was designed to analyze , in quantitative form , the measurements of eyelid contours in japanese people living in japan and in nikkeis living in brazil by using digital images . a prospective observational study was performed between august 2004 and july 2005 , evaluating 107 japanese descendants living in brazil ( são paulo state ) and 114 japanese living in japan ( hamamatsu ) . the studied populations were required to have resided in their respective countries since birth or for more than 60 years . the protocol was approved by the ethics committee for human research at botucatu school of medicine ( unesp ) . exclusion criteria included those with systemic diseases that could cause changes in palpebral position , those with ocular or eyelid diseases , those who had undergone palpebral surgery , those who would not authorize photographic records , and crossbred individuals . all images were taken by the same person using a digital camera ( nikon coolpix 4100 , china ) with flash , positioned in the frontal plane parallel to and 30 cm from the facial plane , at pupil height , with a metric scale stick on the face and the subject looking at the camera lens . images were transferred to a computer running windows ( microsoft , redmond , wa ) and processed with scion image from the scion corporation ( frederick , md ) . the following dimensions were analyzed : distance between medial canthi , distance between pupils ( ipd ) , superior palpebral crease position ( at the central part of the superior eyelid ) , distance between the superior palpebral margin and the corneal reflex ( mrd ) , and horizontal width , height , area , and obliquity of the palpebral fissure ( figure 1 ) . data were analyzed by analysis of variance using a three - factor model and the respective multiple comparison tests . the distance between medial canthi tended to be larger in the japanese than in nikkeis , with men having significantly greater values than women in both groups .
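to illustrate how such dimensions can be derived from digital images , the python sketch below computes fissure width , height , and obliquity from landmark coordinates , converting pixels to millimetres with a factor obtained from the metric scale . the landmark names and numbers are invented for illustration and are not the scion image procedure used in the study .

```python
import math

def palpebral_measures(medial_canthus, lateral_canthus, upper_mid, lower_mid, mm_per_pixel):
    """Toy fissure measurements from 2-D landmarks given as (x, y) pixel coordinates.

    mm_per_pixel is the calibration factor from the metric scale placed on the face.
    Note: with image coordinates the y axis points downward, so the sign of the
    obliquity depends on the coordinate convention used.
    """
    dx = lateral_canthus[0] - medial_canthus[0]
    dy = lateral_canthus[1] - medial_canthus[1]
    width_mm = math.hypot(dx, dy) * mm_per_pixel                 # horizontal fissure width
    height_mm = abs(upper_mid[1] - lower_mid[1]) * mm_per_pixel  # vertical fissure height
    obliquity_deg = math.degrees(math.atan2(dy, dx))             # canthal-axis angle vs horizontal
    return {"width_mm": width_mm, "height_mm": height_mm, "obliquity_deg": obliquity_deg}

print(palpebral_measures((100, 200), (180, 188), (140, 180), (140, 205), 0.12))
```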
the ipd tended to be larger in the japanese , with men having a larger ipd than women , and with a statistical difference for some age groups ( table 2 ) . horizontal fissure values were barely higher in the japanese , with statistical significance for japanese women from 20 to 29 years and japanese men from 40 to 49 years . comparing individuals within age groups , japanese women over 50 years had much lower values than the rest ( table 3 ) . the smallest vertical fissure dimensions were found in those over 50 years ( table 4 ) . mrd dimensions were similar in both populations , but there was a tendency toward higher values in the japanese ; the only significant difference was in female nikkeis , who had larger dimensions than their japanese counterparts . upper eyelid crease dimensions showed no statistically significant difference between groups ; however , japanese women always had discretely higher values ( table 6 ) . measurements in individuals over 50 years old were significantly smaller ( table 7 ) . comparison between age groups showed lower values as age advanced ( table 8 ) . the shape and characteristics of the palpebral fissure are influenced by race , as has been documented in several racial comparative studies ( iosub et al 1985 ; leung et al 1990 ; kaimbo and kaimbo 1995 ; jeong et al 1999 ; hanada et al 2001 ) . in the present study we evaluated members of the oriental race living in different localities , who were therefore exposed to distinct exogenous factors . both populations consisted of similar numbers of both sexes and were divided into age groups to check for gender- and age - related differences ( lam et al 1995 ; siqueira et al 2005 ) . differences between the nikkeis and japanese were very subtle , and frequently without statistically significant difference . distances between inner canthi and between pupils tended to be higher in the japanese , and in men rather than women . for the horizontal and vertical fissures , a reduction in palpebral fissure width was found mainly in women ( in the group older than 50 years ) ; this has already been explored in the literature ( van den bosch et al 1999 ; siqueira et al 2005 ) and can be attributed to disinsertion and slackness of the palpebral structures , which occur in individuals of advanced age ( nesi et al 1997 ) . mean mrd values were higher in the japanese than in nikkeis in nearly all age groups , but the differences were not statistically significant . as a consequence of lower vertical fissure values , mean mrd values were also lower in more elderly individuals . in relation to crease height , differences were small and variability high ; this meant that it was necessary to use medians and inter - quartile amplitude for this parameter . it is known that the oriental eyelid has a much lower superior palpebral crease position than a caucasian one , and the crease can even be absent due to particular anatomical differences . it should be noted that the japanese often use adhesive to artificially shape the upper eyelid crease , which can often remain even after discontinuing the practice , and this could explain their discretely larger values . the discretely higher values in fissure width , height , and mrd seen in the japanese lead to their higher palpebral fissure areas . the oldest population had lower values for this parameter , which certainly links it to senility , when smaller vertical and horizontal palpebral fissures cause a smaller palpebral fissure area . however , when analyzed separately , this parameter did not show the same tendencies as mrd or the vertical and horizontal fissures .
the palpebral fissure has a three - dimensional surface shaped mainly by the eyeball , which pushes outward and moulds both eyelids and the overall palpebral shape , including its apexes and depressions and the parabolic line of the superior lid , thereby influencing the shape of the palpebral fissure ( maulboisson et al 2000 ) . evaluation of fissure obliquity in caucasians , japanese and their descendants , and brazilian indians shows mean inclinations of 4.60 , 9.39 , and 9.34 , respectively ( hanada et al 2001 ) . in this study , both groups had higher mean values ( 5.15 in nikkeis and 6.57 in japanese ) than the caucasian population , but lower values than the orientals of the previous study ( hanada et al 2001 ) . these results allow us to conclude that there are few eyelid differences between orientals living in japan and the nikkeis living in brazil . environmental factors were not sufficiently strong to change the genetic load that has determined the shape of the oriental eyelids to the present day .
objectives : quantitative evaluation of the palpebral dimensions of japanese residents in japan and japanese descendants ( nikkeis ) who live in brazil , in order to define whether environmental factors may influence these parameters . methods : a prospective study evaluating 107 nikkeis from brazil and 114 japanese residents in japan , aged 20 years or older . exclusion criteria were those with palpebral position alterations , prior palpebral surgery , and crossbreeding . images were obtained with a digital camera , 30 cm from the frontal plane at pupil height , with the individual in primary position and the eye trained on the camera lens . images were transferred to a computer and processed by the scion image program . measurements were made of the distance between medial canthi , distance between pupils ( ipd ) , superior eyelid crease position , distance between the superior lid margin and corneal reflex ( mrd ) , and horizontal width , height , area , and obliquity of the palpebral fissure . data were analyzed using analysis of variance for a three - factor model and the respective multiple comparison tests . results : japanese residents and nikkeis living in brazil have similar measurements . statistical differences were found for some age groups concerning distance between pupils , horizontal and vertical fissures , palpebral fissure area , and obliquity , with native japanese presenting discretely higher measurements than nikkeis . conclusion : environmental factors do not affect the palpebral dimensions of nikkeis living in brazil .
Introduction Materials and methods Results Discussion Conclusion
PMC4982884
when robert merton wrote about the sociology of science in the 1970s , the central task at hand was explaining how a set of social norms and practices yielded knowledge what was different about science compared to the humanities and the professions ( merton , 1973 ) . this paper addresses a related , but somewhat different aspect of science how reliable knowledge can be turned into social benefit , using genomics as a case in point . the value of a science commons a pool of knowledge that is widely available at little or no cost is the central focus . the zone of intersection between reliable knowledge and useful knowledge falls squarely into what the late donald stokes described as pasteur s quadrant , ( 1997 ) where research results both contribute insight into the workings of nature and at the same time find practical application . the value of having knowledge widely and freely ( or almost freely ) available is particularly salient in pasteur s quadrant . knowledge is more likely to advance , and to be applied , if it is available at little or no expense to a broad array of scientists and innovators . these features of network efficiency are well known in software and other fields characterized by widely distributed cumulative innovation under mantras such as to many eyes , every bug is shallow , and theoretically described by benckler ( 2002 ) . i shamelessly steal the term science commons from the new organization of that name that has spun out of the creative commons movement . science commons is dedicated to making it easier for scientists , universities , and industries to use literature , data , and other scientific intellectual property and to share their knowledge with others . science commons works within current copyright and patent law to promote legal and technical mechanisms that remove barriers to sharing ( while i endorse their mission , they may not endorse my agenda or analysis . i have no direct connection to the organization , and do not speak for it . ) the main approach in what follows is historical , using background on how the science commons functioned in genomics to illustrate the role of a commons in general . genomics will be the main topic , occasionally straying into collateral fields of biomedical research such as bioinformatics or molecular and cellular biology when they provide better examples . there is some fuzziness around the edges of what constitutes a science commons , and how it relates to the public domain . there can be variants of many terms marching under the banner of open science or public research . open access , for example , can mean free access to view information , but not necessarily freedom to use it in all ways without restriction . the information in patents , for example , is openly available , but users may need to get permission or pay fees to use a patented invention ( including some basic methods used in science ) . to some , open science means no one can fence it in . access to information , say through viral licensing or copyleft , may be conditioned on agreeing not to restrict subsequent users . information may also simply be put into the public domain for any and all subsequent uses , by deposit at a freely available public database , for example . i focus on this last meaning , with information available to all at low or no cost . but again , this may not mean completely unfettered use , as sequence information in genbank may be covered by patent claims . 
jensen and murray , for example , noted that more than 4,000 human dna sequences are subject to claims in us patents ( jensen & murray , 2005 ) . sometimes there are restrictions on use of information and materials in the science commons , but those restrictions must also impose low or no costs to subsequent users , or else that information has left the science commons ( e.g. , through subsequent patenting , copyright , or database protection ) . there is no bright line dividing the science commons from proprietary r&d , and indeed in the case of some sequences , materials , and methods in molecular biology , the expense associated with use of genomic information may depend more on licensing terms and practices than what is or is not patented and one person s reasonable terms may be out of reach for some users . some of the data shown below , for example , are drawn from a free public database , the dna patent database at georgetown university ( dna patent database , 2005 ) . that database is , in turn , drawn from the freely available us patent and trademark database ( uspto database , 2005 ) . but the search engine and database used to generate the dna patent database are derived from an intermediate subscription database that is not widely or freely available , but available to subscribers at several thousand dollars a year , through the delphion patent database ( delphion database , 2005 ) . this is a major tool for our research , and for us the subscription cost is balanced by ease of use and reliability . we pay for delphion for its special features ( such as corporate trees that track ownership of patents ) and because its search results have proven more reliable than several alternatives , including the uspto s own computers and software available in northern virginia . we are happy to pay for a proprietary database because it helps us do our work and the price is reasonable , within reach of our nonprofit institution . delphion does not restrict our use , and does not prevent our creating a free public database . i raise the example of a pay - for - use database sandwiched between two public resource databases to hint that the story will get complicated , and to signal early that this is not a diatribe against for - profit intrusions into research . this is important because many of the points that come out later will seem unfriendly to purveyors of databases . those objections are not deep - seated rejections of capitalism in science , but rather pragmatic judgments about adverse effects of particular policies . innovation depends on how much information is produced as well as how widely and easily it is shared . funding of r&d is a major determinant of how much research is conducted , and thereby how much information is created . some reasons are obvious , but some are not so obvious , and some are even counterintuitive.one final conceptual point will be helpful to flag before proceeding into the narrative . there is extensive overlap between academic health research and the science commons in molecular biology . academic science is important in many fields , not just the life sciences . in all lines of scientific and technical work , universities and nonprofit research institutions and government laboratories ( academic research institutions ) play key roles . everyone is trained in academe , not just academic scientists , but also those working in industry . and within industry , academic training is not just for those doing r&d , but also managers and professionals . 
academe is also one place where the norms of mertonian science have real traction , where the norms of openness , community , mutual criticism , and fair allocation of credit are supposed to be respected , at least as an ideal . in some circumstances , however , academic science is done under strictures of secrecy , or results are made available only at great cost or encumbered by restrictions on use . great science goes on in industry , including or even particularly in the life sciences , but no one expects the norms of openness to prevail in industrial r&d , even if in some circumstances at some times scientists in companies publish in the open literature , present their findings at open scientific conferences , make materials freely available , and contribute data to public databases . when industrial r&d is widely shared openly , results flowing from industrial r&d can become part of the science commons , and there are several instances of this in the stories to follow . in sum , most academic research contributes to the science commons , and some industrial r&d also does so . it remains true nonetheless , that most of the science commons at least in the life sciences is based on academic research funded by government and nonprofit organizations , and most academic research probably enlarges the science commons , although to my knowledge no one has quantitatively assessed what fraction of the research funded by government and nonprofit organizations remains in the science commons . policies put in place over the past three decades have raised concerns about how big the science commons will be , and in particular , whether and to what degree government and nonprofit funders and academic research institutions will maintain it . richard nelson of columbia university , in particular , has expressed concerns about intrusions on open science , based on his decades of studying the innovation process as an economist ( nelson , 2006 ) . genomics became the grounds for a vigorous , sometimes even vicious , fight over what should or should not be in the public domain , and under what conditions . how much genomic data should be in the science commons has been a matter of explicit policy - making in government , nonprofits , academic institutions , and private firms since 1992 or 1993 , when the commercial promise of genomics became apparent , and private funding for genomics in for - profit companies began to accelerate . several features of genomics make it an interesting field to study as an instance of the science commons . it is clearly derived from a scientific project that was initially conceived as a public works project to construct maps and derive a reference sequence of the human genome and other genomes . the original intent of the human genome project was to produce information and tools to make that information useful and valuable . some commercial uses were foreseen from the beginning , but the main focus was on producing public data of permanent scientific value . it caught a wave of enthusiasm for the new biotechnology that had become both scientifically hot and also a darling on wall street . cetus was founded in 1971 and turned to recombinant dna techniques soon after they were discovered ( stanley cohen , co - inventor of recombinant dna , joined the cetus board in 1975 ) . those companies went public with high - profile stock offerings in 1980 and 1981 , raising sums that startled the markets ( smith hughes , 2001 ) . 
the origins of the human genome project were not in commercial biotechnology , however , but in publicly funded science . the ideas behind the human genome project began to appear in 1985 , while the embers of biotechnology were still warm but too distant from this particular part of molecular biology to catch fire . ( walter gilbert tried to start genome corp . in 1987 , for example , and had to resign from a national research council study as a consequence . ) scientists conceived a grand idea and focused on the scientific value of having a reference human genomic sequence ( cook - deegan , 1994 ) . commercial interest lagged for several years , until in 1991 a conflict over patenting short sequence tags derived from human genes blew up into a major controversy , and created commercial interest in human genomic sequencing . j. craig venter , a scientist in nih s intramural ( government laboratory ) research program , started using automated dna sequencing machines to rapidly identify sequences unique to human genes . a genentech lawyer , max hensley , contacted the nih technology licensing lawyer , reid adler , who in turn contacted venter about filing a patent application on his method and the resulting dna sequences . the method was eventually given over to the public domain through a statutory registration of invention , but the patent application for the sequences themselves continued through the patent examination process . that 1991 patent application generated tremendous controversy until 1994 , when nih director harold varmus decided to abandon the patents , following the advice of patent scholars rebecca eisenberg and robert merges ( 1995 ) . the controversy at nih paradoxically induced interest in commercial biotechnology circles . in 1991 and 1992 , noise over darth venter s turn to the dark side ( by patenting dna sequences from gene fragments ) attracted the attention of scientist randall scott at incyte in california , and incyte began to focus on dna sequencing of human genes . through 1994 , several other companies , including human genome sciences , mercator genetics , genset , myriad genetics , millennium pharmaceuticals , genome therapeutics ( renamed from collaborative research ) , hyseq , and sequenom , were formed around the idea of mapping and/or sequencing the human genome , or turned from other pursuits to those ends . one company illustrates the public - science origins of private genomics in particular : human genome sciences . wallace steinberg , a former johnson & johnson executive who had started several biotech companies after leaving j&j , decided to meet venter , having read about him amidst the patenting controversy . he talked venter into leaving nih to form a nonprofit research unit , eventually named the institute for genomic research ( tigr ) , by promising venter $ 70 million ( $ 85 million by the time the deal was done ) ( cook - deegan , 1994 ) . that was enough to build a larger sequencing and sequence - analysis facility than existed anywhere else at the time . human genome sciences , inc . ( hgsi ) was formed as a for - profit corporation that would own the patent rights to tigr s results as well as pursue its own research leads . there were also plans to form additional companies , industrial genome sciences and plant genome sciences , to exploit different opportunities deriving from high - throughput sequencing and other genomic technologies . william haseltine , recruited to lead hgsi , also had his roots in academic science , most notably from his work on hiv / aids at harvard .
the first boomlet in genomics startups in the early 1990s paralleled a significant increase in pharmaceutical r&d among established pharma and biotech firms that started in the early 1980s . a pharmaceutical r&d arms race of sorts began in the early 1980s , and during that decade , firms delved ever more deeply into molecular and cellular biology to bolster their absorptive capacity for drug discovery ( cockburn & henderson , 1998 ; fabrizio , 2005 , unpublished data ) , recognizing the importance of rapid and effective use of public domain science to their business plans . indeed , their ability to tap public science was one of the indicators of firms success in pharmaceuticals ( fabrizio , 2004 ) . by historical happenstance , this birth of genomics out of publicly funded science took place as patent rights were being expanded and strengthened , by a combination of changes in legislation , in court decisions , and in patent offices . in academia , the major change was the bayh - dole act of 1980 , which gave grantees and contractors rights and indeed a mandate to seek patents on federally funded research results . mowery and his coauthors review this history and some of its consequences in their book , ivory tower and industrial innovation , which combines economic empiricism , historical research , and policy analysis ( mowery et al . , 2004 ) . genomics , foreseeable to some immediately and to others only a few years after its launch , took root as a field in american academe under the new bayh - dole regime . academic institutions began to patent much more frequently after 1980 , and genomics is one of the areas where this effect was pronounced . moreover , patent rights were being expanded and strengthened in many areas of american law , including biotechnology . the court of appeals for the federal circuit ( cafc ) was formed in 1982 . it was designed to handle appeals of certain cases , including appeals of federal district court decisions about patent litigation . it and the patent office expanded the kinds of inventions that could be patented ( including software and business methods , for example ) and tended to strengthen the hand of patent - holders relative to those contesting patent rights ( jaffe & lerner , 2005 ) . in the hands of the cafc , more territory could be enclosed and patent fences generally got higher . these developments had a particularly strong impact on areas of rapid innovation , including both wet lab biotechnology and bioinformatics , fields directly relevant to genomics . three other factors are not related to changes in policy , but nonetheless make genomics a useful field to study for this policy history : ( 1 ) the story was compressed into a decade , so its narrative is shorter and crisper , ( 2 ) there was intense media coverage , producing an ample public record of events , and ( 3 ) the patentable inventions arising from genomics can be tracked because it is possible to identify relevant patents . patents resulting from genomics r&d almost always make patent claims that use terms distinctive to dna and rna , which can be used to create a searchable patent database mapping to genomics research . a science commons can supply information needed to achieve social benefits that for - profit markets in goods and services may fail to achieve .
moreover , even in markets well served by the profit motive , a science commons can in some circumstances improve efficiency , when many disparate firms can draw on a common pool of knowledge and data rather than having to construct the same information firm - by - firm at substantial cost because of duplication . one theoretical rationale for this effect has been set forth by benkler ( 2002 ) . the cases arising in genomics suggest that network theory may have some practical applications in the real world of science and its application . i will illustrate three social goals that can benefit from a robust scientific commons in genomics : advancing science , improving public health , and creating a shared foundation for productively diverse forms of industrial r&d and commercialization . but first , some historical background . the beginning of the human genome project was marked by conflict between scientists who thought it was a poor use of resources versus those who thought it was a useful and efficient way to spend public research dollars . a 1988 report of the national research council reported a consensus , reached by broadening the project to include maps , tools , and organisms in addition to the human genome ( national research council , 1988 ) . that did not eliminate all conflict , however , because the question of which federal agency should play the larger role remained unresolved , and both the national institutes of health and the department of energy assumed active roles , in a roughly 2:1 ratio of funding . and even as the rival agencies in the us settled into a generally amicable cooperative framework , other nations began to engage in genomics r&d . the 1991 controversy over gene - tagging sequences erupted while the genome project was getting underway . as that controversy died down , an even more public conflict over sequencing the entire genome exploded in 1998 , pitting a private company against the public sector genome project . the heads of the two private organizations , venter ( tigr ) and haseltine ( hgsi ) , never sang close harmony , despite their supposed corporate matrimony . as tigr moved away from human gene sequencing and into microbial sequencing , including proof of principle that whole - genome shotgun sequencing could work , the noise from the tigr - hgsi conflict got downright cacophonous . tigr s scientific interests hardly coincided with what hgsi would want from its r&d partner , and two alpha - males confined in close corporate space found themselves in frequent conflict . in 1997 , tigr and hgsi severed their ties , with tigr foregoing rights to future payments and hgsi foregoing rights to future tigr discoveries ( tigr , 1997 ) . venter became a free agent , heading up a free - standing nonprofit research institute , until michael hunkapiller of applied biosystems approached him with another big idea ( shreeve , 2004 ) . in discussions with its parent company ( then perkin - elmer cetus , which became applera ) , applied biosystems had begun to think seriously about sequencing the human genome with private funds . it would be a high - profile use of a promising new dna sequencing instrument that was much faster and more scalable than existing sequencers . the question was whether the methods tigr had used on smaller genomes , such as bacteria , could work on the human genome and produce a final sequence faster than the public genome project . if so , a company could charge both for access to the data and for access to informatic tools to mine the data .
in order to charge users , the company would need a truly impressive bioinformatic capacity and great tools for analyzing sequence data . if a private company decided to sequence the genome , it might even kick up a market for sequencing instruments , including applied biosystems machines , among the publicly funded laboratories doing dna sequencing , who would buy the same machines to compete with the new genomic sequencing company . in may 1998 , craig venter became the head of a company , later named celera genomics , which would carry out the sequencing , pull together the computing infrastructure to assemble it into a reference sequence , and then begin to interpret the sequence information . celera s 1998 establishment inaugurated another boom in genomics startups , this one entailing many more companies and much more money than the 1992 - 1994 boomlet . the initial kernel of the media snowball was an exclusive to nicholas wade of the new york times in may 1998 ( wade , 1998 ) . thus began a privately financed scientific effort at celera , running into the hundreds of millions of dollars , that competed head to head with the publicly financed human genome project . the drama played out over 3 years and became the biggest story in science , and one of the most visible general interest stories of its period . the story is often told as a race : a competition between venter at celera and the public human genome project , whose most conspicuous spokesmen were francis collins in the united states and sir john sulston in the united kingdom . collins was director of the national human genome research institute at nih , and sulston directed the sanger institute , affiliated with the university of cambridge and funded mainly by the wellcome trust of london ( with additional funding from the uk medical research council ) . the usual narrative strategy was to use the metaphor of a race , but in fact there were not just two human genome projects running in parallel , there were many . a consortium of laboratories funded by government agencies and nonprofit organizations in north america , europe , and japan constituted the public genome project . sulston emerged as the champion of that faction , emphasizing open science , rapid sharing of data and materials , and a passionate appeal to refrain from patenting bits of the human genome except when they could foreseeably induce investment in developing end - products such as therapeutic proteins . sulston s model for the human genome project was the biology of the worm ( ankeny , 2001 ) : a close - knit community of scientists who studied nematodes and had made immense scientific progress in a hub - and - spoke model of biology . two hubs , one at the university of cambridge and another at washington university in saint louis , did high - tech , whiz - bang , expensive mapping and sequencing projects on the worm genome . those hubs shared data quickly and widely with the spokes , a vibrant network of smaller laboratories throughout the world . sulston wrote the common thread with georgina ferry to tell the genome story from his point of view ( sulston & ferry , 2002 ) . his was the public works model of genomics , with public funding producing a valuable scientific resource . the sanger institute was the trust s foremost research institution , and john sulston its most visible scientist . nih and francis collins , as a government organization and employee , respectively , had to be more cautious in their rhetoric and were , most of the time .
the wellcome trust sponsored a bermuda meeting of the major sequencing centers throughout the world in 1996 ( despite the exotic sound of it , the weather was miserable , it was off - season , and the site was chosen deliberately to be neutral , not in the usa or europe ) . one theme of the meeting was how to make sequence data widely available , modeled on the worm science world . a set of bermuda rules emerged from the meeting , mandating daily disclosure of dna sequence data . the pledge to rapidly share data was linked to a plea not to patent dna , unless a gene or dna sequence had been studied further to show its function or practical utility . that kind of functional biology was not the business of the publicly funded dna sequencing centers , so it was in effect a ' no patents ' pledge . venter was present at the beginning of the bermuda meeting , in his pre - celera days as head of tigr , but fittingly he left early and was gone by the time the bermuda rules were agreed . the wellcome trust played another important role in 1998 , soon after venter announced his intention to sequence the genome at a new startup company . wellcome reacted to the announcement of the new company by proposing to do a faster , better public genome sequence , increasing its commitment to fund genomic sequencing through the public project . wellcome s move , in turn , bolstered funding from the us government , uk government , and other government and nonprofit funders of the public genome project . in addition to the upstart startup celera and the public genome project , the private firms hgsi and incyte were in effect conducting a different kind of genome project in parallel : call it a human genes project . both companies had been sequencing genes for 5 - 6 years before celera was even formed , and had been sending in patent applications the whole time . hgsi had one main client , smithkline beecham ( which later merged with glaxo wellcome to become glaxo smithkline ) . the business strategies of incyte and hgsi were both initially based on sequencing human genes . randall scott at incyte was part of a scientific network with many links to the public genome project . indeed , at times incyte was contemplated as a partner in the public project ( shreeve , 2004 ) . hgsi had some academic and industrial collaborations beyond smithkline beecham , but far fewer than incyte . and scott of incyte was in the public genome family , or at least ate some meals with them ; haseltine was never welcome at the table . haseltine reinforced his role as troublemaker for the public genome project when he wrote a 1998 editorial in the new york times arguing that congress should pull the plug on the public project because the work was already being done without tax money by his company and others ( haseltine , 1998 ) . haseltine argued that government funds for dna sequencing would be better spent on smaller projects in individual laboratories to understand gene sequences . that argument had several problems . first , it assumed almost all the value of sequencing came from gene sequences , whereas molecular genetics has become focused on many regulatory processes that happen at the rna and dna level and are never translated into protein . it seems a safe bet that a lot of biology would never be approachable if we only got protein - coding sequences . haseltine is an excellent scientist and knew this full well , although for commercial purposes he could certainly make a good case that the most rapid returns were likely to come from coding sequences .
this argument conflated commercial value with scientific value , but as an argument about public support for science , it was simply wrong , as the complexity of gene regulation is becoming obvious and the importance of dna sequences in addition to protein - coding regions is becoming apparent . gene - based strategies made eminently good sense when hunting for drug targets , because drugs are designed to interact with proteins that are secreted outside of cells , that bind dna , or that extend outward from the surface of cells . but as a tool for understanding biology , the entire sequence was much more powerful than the protein - coding regions alone . the fact that genes were being sequenced by companies did no one but those companies any good if the sequences were not public . academic scientists could , of course , approach hgsi or incyte or other companies to collaborate , getting access to their data , but relatively few did so . the reason was that such collaboration came with strings or ropes , or even cables . the constraints were patent rights , that is , exclusive property rights that were routinely being granted for full - length genes by the us patent and trademark office . collaboration with hgsi or incyte meant nondisclosure agreements , publication review , and rights on resulting intellectual property . sometimes this made sense , but it was not terribly attractive for those mainly interested in advancing science . a central condition of collaboration was control of information and constraints on open sharing of data . it made sense in a business context , but as a public works project in science , it made none . and to argue that proprietary gene sequencing was a substitute for public funding of genomic sequencing was ridiculous . scientists could of course wait for patents to issue from privately sequenced genes , but that was not really a practical option because of the many - year delay . perhaps scientists could hope hgsi and incyte would publish the sequences voluntarily someday , but the companies would do that only when patents issued , or if it suited their business needs . the companies did publish , but only very selectively . to academic scientists in the field , waiting for companies to do the work would be surrendering to the competition in any event . pharmaceutical companies working with incyte and hgsi played from power : money and the ability to generate the data themselves , if need be , with their huge r&d war chests . small academic laboratories were on the other end of the power curve , with relatively little leverage . academic laboratories had a much better alternative : to scan the public genbank for genes of interest , at no cost and with no strings attached . genbank and other databases received sequences from thousands of laboratories throughout the world , as well as ( eventually ) the output of major dna sequencing centers . incyte and hgsi drew regularly on genbank data , but company gene sequence data made their way back to public sequence databases only when a patent issued , or if the company chose to publish an article in the scientific literature . in effect , company projects built on the foundation laid by the public genome project and drew regularly upon its data , but only occasionally contributed data back to public databases .
this was sensible business practice , but it was misleading for haseltine to imply that leaving the genome projects to companies and small laboratories would produce a genome project with the desired features of the concerted public project . one forthright way to make haseltine s case would have been to indeed allow the private gene sequencing firms to proceed , but for the government and nonprofit funders to pay to make the data public . there are two reasons haseltine may have chosen not to take his arguments to this logical conclusion . first , the offer would likely have been refused by the companies , because their business plan was precisely to keep sequence data proprietary until they could be patented . government procurement of the data would have vitiated this business plan , and turned the companies into contractors . second , the price would have been embarrassingly high , and certainly would have undercut the argument that the work could be done with no tax dollars . but pushed to its conclusion , haseltine s line of argument could have made a clean case : it might have made sense for the government to buy this particular genomic real estate and dedicate it to the science commons , if the private sector could produce the data faster and cheaper . haseltine started from the premise that human gene sequence data were valuable , and who could argue with that ? from 1998 until february 2001 , when nature and science published rival articles containing draft reference sequences of the human genome prepared by the public genome project ( lander et al . , 2001 ) and by celera ( venter et al . , 2001 ) , there were in effect two competing projects focused on sequencing the entire human genome , and in parallel also several other genome projects focused on expressed sequences and bits and pieces of the genome of interest to research communities in both public and private sectors . in addition to the two companies sequencing human genes , many other companies were mapping and sequencing parts of the human genome . and thousands of laboratories were contributing sequencing and mapping information to databases and to scientific publications . by the time the initial genomic sequence publications came out , the ratio of private to public funding appeared to be roughly two private dollars for every one government or nonprofit dollar ( see fig . 1 , genomics research funding , 2000 ; source : world survey of funding for genomics research , stanford university , 2001 , unpublished data from robert cook - deegan , amber johnson , and carmie chan , stanford - in - washington program , based on a survey of over 200 funders ) . in 2001 , the financial genome bubble burst . at the end of 2000 , 74 publicly traded firms were valued at $ 94 billion , of which the largest 15 accounted for approximately $ 50 billion . by the end of 2002 , those 15 firms ' market value had dropped to $ 10 billion , but their reported r&d expenditures nonetheless climbed from $ 1 billion to $ 1.7 billion ( kaufman , johnson & cook - deegan , 2004 , unpublished data ) . these data make three simple points : first , the private sector has invested heavily in genomics , but those investments are made in expectation of financial return .
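to make the scale of these swings concrete , here is a minimal python sketch ( purely illustrative , using only the figures quoted above ) that computes the implied percentage changes and the private share of funding :

    # illustrative arithmetic based only on the figures quoted in the text above
    top15_value_2000 = 50e9   # market value of the 15 largest genomics firms, end of 2000 (usd)
    top15_value_2002 = 10e9   # market value of the same 15 firms, end of 2002 (usd)
    rd_2000 = 1.0e9           # their reported r&d spending in 2000 (usd)
    rd_2002 = 1.7e9           # their reported r&d spending in 2002 (usd)

    value_drop = (top15_value_2000 - top15_value_2002) / top15_value_2000
    rd_growth = (rd_2002 - rd_2000) / rd_2000
    print(f"market value decline : {value_drop:.0%}")   # 80%
    print(f"r&d spending growth : {rd_growth:.0%}")     # 70%

    # the roughly 2:1 private-to-public ratio quoted above implies private funders
    # supplied about two thirds of total genomics research funding in 2000
    private_share = 2 / (2 + 1)
    print(f"implied private share of 2000 funding : {private_share:.0%}")  # 67%

in other words , the firms ' market value fell by roughly four fifths even as their research spending grew by more than two thirds , which is the contrast the three points below turn on .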
that is quite different from the public and nonprofit funding of genomics , which is mainly intended to produce public goods : knowledge and materials that are widely available to advance knowledge and combat disease . second , private r&d investment is a powerful complement to the public and nonprofit funding . private r&d follows public r&d in time ; it draws on the science commons but does not necessarily contribute back to it . if successful , private r&d investment can create wealth and jobs as well as the social benefit from developing goods and services that would otherwise not be produced . this benefit is real , but it is distinct from the social value of the science commons . genomics also provides several examples of private funding to augment the science commons , such as the snp consortium , and the merck funding to washington university for gene sequencing ( cook - deegan & mccormack , 2001 ) . and third , and most to the point for policy purposes , it would be foolhardy to generalize from the happy circumstances in which private r&d expands the science commons and to expect private r&d to substitute for the science commons ; that happens only in unusual circumstances , usually related to the grounds of competition among firms in a particular industrial sector . private industrial r&d will sometimes find it useful to contribute to the science commons , but expecting industry to do so always and consistently would be a mistake . to see why having a healthy science commons matters , we move away from genomics to make a general point about health research . murphy and topel estimated that the gains in life expectancy from medical research from 1970 to 1990 were staggering , in the range of $ 2.8 trillion per year ( $ 1.5 trillion of this from cardiovascular disease reduction alone ) ( murphy & topel , 1999 ) . many of the health benefits of discovering new information about health and disease come not from drugs or vaccines or medical services , but from individuals acting on information . cutler and kadiyala attributed 2/3 of the health gains in cardiovascular disease reduction to effects of public information , such as stopping or reducing tobacco use , changing diet , getting more exercise , or monitoring one s blood pressure . the second largest determinant was technological change , such as introduction of new drugs and services , followed by increasing cigarette taxes to reduce tobacco use ( cutler & kadiyala , 2001 ) . the estimated return on investment in medical treatment was 4 to 1 , but on the public information it was 30 to 1 . cutler and kadiyala s result can not be generalized , because smoking is a very large risk factor that is sui generis , and cardiovascular disease has proven far more malleable to many kinds of interventions than non - lung cancer and other chronic diseases . the path from scientific understanding of cause to prevention of cancer , diabetes , arthritis , and alzheimer s disease , among others , appears far less linear , and so the value of public information about risk is correspondingly less powerful and has less impact on health outcomes . few if any risk factors will ever be found to rival tobacco use as predictors of poor health . but the finding that information can have value irrespective of being translated into products and services in a paying market is nonetheless important . even if public information will not be quite as powerful in reducing other chronic diseases as it has been for cardiovascular disease , the vector is likely to point in the same direction .
we can not say that public information will always prove more powerful than information channeled into new drugs , vaccines , biologics , devices , and medical services sold for profit in the health care system . where there are public health benefits from public research results , however , and the probability there will be no such public health effects of genomics seems vanishingly small , the health science commons is essential , because it alone can supply the public information benefits . both words in public information matter : we need new information that arises from science , but to capture social benefits based on that knowledge itself , we also need it to be public . the 2002 report from the world health organization , genomics and world health , gave the example of fosmidomycin ( advisory committee on health research , 2002 ) . this drug is currently being tested to treat malaria in africa ( missinou et al . , 2002 ) . that use came to light as a consequence of sequencing the genome of the malaria parasite , and noticing a metabolic pathway not previously known to exist . the compound fosmidomycin was known to inhibit the pathway , and had been developed as a treatment for urinary tract infections . when the new possible use to treat malaria was revealed , fosmidomycin was pulled off the shelf and moved into clinical trials against malaria . this is a treatment that may never turn a profit for any company , but the social returns could be enormous if fosmidomycin works , because so many millions of people are infected with malaria . if not fosmidomycin , then perhaps other findings will lead to prevention or treatment of malaria , enabled by now having the full genomic sequence available for host , pathogen , and mosquito vector ( gardner et al . ) . the information about these organisms , available worldwide , is essential to accruing the benefits of research . there is only a weak world market for drugs to treat malaria because it is largely an affliction in resource - poor populations . the usual profit motives of the intellectual property system can not create incentives where there is no prospect of profit to pull products through an expensive discovery and testing process . but networks of nonprofit organizations , such as the malaria vaccine initiative , the global fund , the who essential medicines program , and other sources of public capital might nonetheless be capable of discovering and developing new treatments despite the unlikely prospect of commercial profit . in theory , public funds might create a sufficient incentive to motivate profit - driven investments for diseases of poor people living in poor countries , but that is not the case now , and betting that the money will be found could prove wrong . having a scientific commons with information relevant to vaccines and treatments is therefore all the more important . many of the scientists most motivated to study such diseases work in resource - poor countries ; they do not have rich resources , but they do have strong motivation , as well as computers and access to public databases . strains of the coronavirus that causes sars were identified and sequenced within a month by at least three laboratories in asia , canada , and the united states . that sequence information was shared widely , and a chip to detect the virus was available for research and possible clinical use just a few months later .
making progress with such alacrity requires strong norms of open science , with obvious social benefit . many of the infectious diseases that plague mankind have long eluded measures to combat them . in many cases , this is because they are difficult to grow in tissue culture , and therefore research progress is slow . with new technology , the genomes of hundreds of nasty bugs have been fully sequenced , giving scientists an entirely new tool to develop drugs , vaccines , and control measures . it is far from clear that this will tilt the battle decisively in favor of humans over schistosomes , trypanosomes , plasmodia , bacteria , viruses , and other organisms that maim and kill humans by the billions , but it is a new line of attack . in the case of organisms on the select agent list of bioterror bugs , there is now extensive research underway to develop preventive and treatment measures . for most infectious agents that afflict those in the poorest parts of the world , however , the prospect of profit will not create a demand - pull for innovation that could improve billions of human lives , unless indirect incentives such as prizes or guaranteed payments for effective remedies by third parties serve as surrogates for paying markets . unlike the public information case described above , however , here the market failure has a different cause . it arises not because the research results are public goods , but because the potential users are deeply impoverished , and the economic incentives for drug development in advanced economies do not prevail . nongovernment organizations around the globe , including major funders such as the gates foundation , the tb alliance , the global fund , and others , are attempting to use philanthropy , government funding , and creative networking to address this form of market failure . their efforts depend critically on access to scientific and technical information at low or no cost . another likely use of genomic information will be newborn screening , as more diseases are characterized , linked to possible intervention , and incorporated into routine testing . this must be done with care to avoid harms and false positives , but as knowledge accumulates , the list of conditions that can be treated will lengthen , and costs of testing should drop . any benefits from newborn screening are unlikely to arise from strong profit motives , however , as most testing is done by state - funded laboratories in the united states and government public health programs in most other countries . [ two - thirds of us states spent between $ 20 and $ 40 per infant for all screening in 2002 , and no state spent more than $ 61 ( us general accounting office , 2003 ) ] . this is far less than most single genetic tests , or even routine medical laboratory tests . newborn screening is now , and will likely continue to be , a public health service ( newborn screening steering committee , 2005 ) . any shift to dna - based testing , or addition of tests beyond the current testing regimes , will face very serious cost constraints , and advances are unlikely to result from the prospect of ample profits in this market . even if we were to stipulate that the public information impact of health research might be less important in the future than it has been in the past , does that diminish the role and importance of the science commons ? in this section , the focus is not on social benefits foregone for lack of a robust commons .
instead , the argument shifts to efficiency gains to private r&d that follow from being able to draw upon the commons . several lines of research corroborate the intuition that a pool of public information and materials must surely raise all ships to the benefit of each . the case is likely to be stronger in health research than in other lines of research , just because of the well - known deep mutualism between public and private r&d in health research . the late edwin mansfield s surveys of industrial leaders clearly showed that executives in firms believed their lines of business - related r&d depended on academic research , with pharmaceuticals depending on it to a greater degree than any other sector he characterized ( mansfield , 1995 ) . narin and colleagues have repeatedly shown how industrial publications cite academic research , and patents related to pharmaceuticals and biotechnology cite academic research far more heavily than most other kinds of inventions ( narin & olivastro , 1992 ) . when steve mccormack and i read through the more than 1,000 dna - based us patents issued 1980 - 1993 , we found that 42% were assigned to universities ( 14% to private ; 9% to public ) , nonprofit institutions ( 13% ) , or government ( 6% ) , compared to less than 3% academic ownership of patents overall ( mccormack & cook - deegan , 1996 - 1997 , unpublished data ) . this is a tenfold enrichment of academic involvement in life sciences compared to most other kinds of invention . the 1997 survey of the association of university technology managers was the last for which the questions made it possible to analyze life sciences separately from physical sciences . that year , life sciences accounted for 70% of licenses and 87% of income ( massing , 1998 ) . industries closest to health research depend on academe , and academic institutions are more heavily involved in technology transfer activities related to the life sciences . if we were looking for a place where public science matters to industry , life sciences would be a good place to start . beyond the special role of academic institutions as the training grounds for both technical and nontechnical workers in the knowledge economy , academic institutions also play a unique role in creating and sustaining the science commons . it is worth noting that the studies above generally focus on academic r&d , not specifically on the science commons or on open science alone . recall that universities and nonprofit research centers do not always practice open science , and some elements in the commons come from private industry r&d . while we can not be completely sure , it is quite likely that the main explanation for the importance of academic research is that it is open , producing data and materials available to all . the most direct line of evidence for this comes from the carnegie - mellon survey of industrial r&d managers . cohen , nelson , and walsh conclude that public research has a substantial impact on industrial r&d in a few industries , particularly pharmaceuticals , and the most important channels for accessing public research appear to be the public and personal channels ( such as publications , conferences , and informal interactions ) , rather than , say , licenses or cooperative ventures . finally , we find that large firms are more likely to use public research than small firms , with the exception that start - up firms also make particular use of public research , especially in pharmaceuticals ( cohen et al . , 2002 ) .
this certainly corroborates the stories of genomics startup companies , including companies like celera , depending heavily on their recent past in academic research , and their ongoing collaborations with ( and sometimes customers and markets in ) academic research . and it confirms the role of large firms in preferring to draw inputs from a science commons , rather than having to collect atomized , individually expensive fragments of proprietary technologies and data . the history of genomics provides many examples of this , but two are particularly famous . one salient example is the decision in the period 1988 - 1991 by the national institutes of health not to sequence human genes ( i.e. , protein - coding regions ) , but instead to focus on systematically mapping and sequencing the entire genome ( cook - deegan , 2003 ) . that decision opened the way for private firms human genome sciences and incyte to fill the void , attracting private capital to do what the public sector had chosen not to do . because it fell victim to the law of unintended effects , nih s decision not to pursue cdna sequencing , however well - intended and understandable , was a mistake in retrospect . the story behind that decision is mainly about the sociology of science , not a theory of the science commons , but it is instructive nonetheless . the decision not to sequence protein - coding regions was initially about fairness between big labs and small ones , not about commercial prospects . as the genome project took shape , the importance of maps of humans and various model organisms was apparent . what kinds of maps deserved substantial funding and concerted effort remained , however , a matter of ongoing dispute . one candidate was a gene map based on cdna technology , that is , making dna copies of the messenger rna translated into protein within cells . construction of cdna libraries was standard fare , and remains a seminal technology in efforts to study expression of many genes through microarray technologies . one question left open during the early debates about the human genome project , 1987 - 1991 , was whether the genome project would include gene sequencing , that is , starting sequencing efforts with dna known to code for protein , and therefore certain to provide codes for most of the important building blocks of cells , while also providing targets for drug development . a technical means to isolate the rna that is translated into proteins was readily available . this was called cdna technology , the dna copies being complementary to the messenger rna that is exported from the nucleus of the cell to its cytoplasm to be translated into protein . in fact , one could take it a step further and look for genes coding proteins likely to be of particular biological significance and focus on just those cdnas coding for secreted proteins and peptides ( such as hormones or neurotransmitters ) , for receptor or transporter molecules extending outside the cell ( with many trans - membrane domains ) , or proteins that bind dna ( with zinc fingers ) , etc .
these functional motifs could be predicted , if imperfectly , from dna sequence data . one logical strategy to start the dna sequencing program was to sequence cdnas of particular interest first , then other cdnas , and then turn to genomic dna between genes . ( dna between genes would still be of interest because such sequences were likely to house regulatory signals for turning genes on and off , and affecting the timing of gene expression , as well as structures involved in cell division and the 3-dimensional shape of dna in cells . ) at one of the first public discussions of the human genome project , at cold spring harbor in june 1986 , walter gilbert responded to one attack on the idea of sequencing the genome by noting that of course you would start by sequencing the cdnas . when the congressional office of technology assessment presented a plausible budget for funding the genome project , it included a cdna sequencing component ( us congress , office of technology assessment , 1988 ) . the department of energy did pursue some cdna sequencing , but nih s genome program did not . it was a matter of some discussion , but in the end it was largely james watson s call , as director of the relevant nih center , and he opted against cdna sequencing . first , it was already going to happen , since incentives to find genes were strong with funding from other nih institutes , but incentives for individual labs to produce whole genomic sequence data were entirely dependent on genome project funding . second , if big sequencing centers did cdna sequencing , they would inevitably also be at least tempted to pause to characterize particularly interesting genes , and turn to the fascinating biology sure to follow . there were two problems with this : ( 1 ) it would distract them from the major task at hand of deriving a complete reference sequence of the entire genome , and ( 2 ) it would give them an unfair advantage over the thousands of smaller laboratories lacking the dna sequencing firepower . it was the nih decision not to fund cdna sequencing that left the door open to incyte and hgsi to pursue human cdna sequencing with private funding , because in the absence of a big public effort , the low - hanging fruit of the genome was there to be plucked , sequenced , and shipped off with claims to the patent office . when incyte and hgsi began to go down this path , those who saw genes as increasingly important inputs to their r&d efforts , particularly large pharmaceutical companies , got concerned , for two reasons . one was that the us patent and trademark office was obviously patent - friendly , industry - oriented , and seemingly tone - deaf to the concerns of scientists about enclosing the public domain . if patents were granted , then any firm making , using or selling a gene or gene fragment could be hit up for a piece of the action by the company that first sequenced it . the other was that incyte and hgsi were clearly capable of filing patent applications on hundreds of thousands of gene tags , and thousands of full - length genes . moreover , the small genomic startups had a running start on large pharmaceutical firms , the plodding apatosauruses of the biotech jurassic . merck decided to take action ( williamson , 1999 ) . it stepped forward to fund a public domain sequencing effort , starting with gene fragments and moving on to full - length cdnas . the work was to be done at washington university in saint louis , home of one of the largest public genome sequencing facilities , and the data were to be moved quickly into the public domain .
merck funded the work through a nonprofit arm and had no privileged access to the data . here was a large company funding data to flow into the science commons where it would be freely available to all . why would merck do this ? four reasons suggest themselves : ( 1 ) it poisoned the well for incyte , hgsi , and other startup firms , creating an open , academic competitor ( albeit funded by industry ) to shut the window on securing exclusive property rights on genes , and thus limiting the number of genes that would have to be licensed ; ( 2 ) it built good will with scientists , vital collaborators in merck s drug discovery efforts ; ( 3 ) it was great pr ; and ( 4 ) it took advantage of nonprofit funding . if merck paid for it as corporate r&d , it could deduct the r&d as an expense , but would also have to justify public domain science at stockholder expense . through a nonprofit arm , merck funded great science , burnished merck s image , and enhanced merck s future freedom to operate cleanly , without having to appropriate any returns on an investment . the snp consortium story started 5 years later , but followed the same general outline , with an added level of sophistication . during the late 1990s , it became apparent that there were many single - base - pair differences in dna sequence among individuals . these were dubbed single nucleotide polymorphisms , or snps , because of molecular biologists ' penchant for impenetrable polysyllabic neologism ( ipn ) and three - letter acronyms ( tlas ) . single nucleotide polymorphisms could be used as dna markers , to trace inheritance , to look for associations with diseases or traits , and to study population differences . many genomic firms , including celera , began to signal they were finding snps and filing patent applications . given the uncertainty about what the patent office would allow to be claimed in patents , it seemed possible patents on snps would be granted , meaning anyone using patented snps would need to get a license . this raised the prospect of needing to get licenses on hundreds or even thousands of snp sequences from some unknown ( but potentially large ) number of patent owners . the court of appeals for the federal circuit had instructed the patent office that the nonobviousness criterion for dna sequence was met by any new dna sequence , so obvious did not mean it was obvious how to find the sequence ; a sequence was deemed obvious only if it had already been determined and was in hand . the patent office was signaling it might permit patents for any plausible utility , demonstrated or not , and related to biological function or not ( doll , 1998 ) . this was just the kind of nightmare that michael heller and rebecca eisenberg had speculated might arise in their classic 1998 article on the anticommons : situations in which too many exclusive rights upstream needed to be assembled , thus thwarting the development of final products , such as drugs , vaccines , biologics , or instruments ( heller & eisenberg , 1998 ) . this threat awakened some companies and scientific institutions to forge an alliance to defeat patent rights in snps ( the snp consortium , 2005 ; holden , 2002 ; thorisson & stein , 2003 ) . the snp consortium was founded in 1999 to first discover snps , file patent applications , map and characterize the snps , and then finally abandon the patent applications . the expense and paperwork of this elaborate dance were intended to ensure snps landed in the public domain unfettered by patent rights .
filing the applications was deemed necessary as a defensive strategy to ensure that consortium members would have standing as inventors should disputes arise about priority for related inventions ( in patent parlance , interference proceedings , the administrative procedure to determine the real first inventor ) . here a group of private firms of various sizes found common cause in defeating patents on research tools . they valued their freedom to operate highly , and took the threat of patenting seriously enough to pay for a complicated , expensive procedure to enlarge the public domain . why were private firms that dearly loved patents for their own products working together with academic institutions to defeat patents ? one interpretation might be that the public sector failed to sufficiently support lines of research with a strong need for a science commons . but members of the public genome project were well aware of the need for unfettered access to snps and were as worried about the problem as the private firms that wanted to use snps in their research . the issue here was the presence of many different kinds of genomics firms , some of which saw an opportunity to create and sell access to snp research tools . it was no accident that this episode played out during the genomics bubble years , 1998 - 2001 , when seemingly any startup with omics in its name could raise millions in private placements and months later ( before any products hit the market ) tens of millions through initial public offerings of stock . it was conceivable that a company could raise private capital to find snps based on a possible paying market to use them in research . the public sector was simply not going to be able to mount a systematic snp initiative fast enough and large enough to compete , and other companies wanted to avoid having to deal with the snp upstart firms ( yes , celera was one of the firms with an interest in snps ) . one interpretation of this story is that the market , some market somewhere , solved the problem . the wonder of capitalism worked its magic by creating public domain resources at private expense to forestall the undue private appropriation of rents from research tools . can we learn to relax , and assume that excesses of the patent system will be compensated by enlightened capitalists guarding their long - term best interests and future freedom to operate ? the merck gene index and snp consortium show the answer is sometimes yes . the nagging worry is that sometimes the answer may be no . a final historical pastiche before closing out the arguments . consider again the prospect of an alternative universe in which the free access to the medical literature and scientific data that we take for granted in health research might instead be constrained by exclusive proprietary rights . if the history and geography had been different and database firms had turned their attention to genomics just a bit sooner , the story might have been quite different . as it was , the early algorithms for interpreting dna sequence , such as the blast and smith - waterman algorithms , were developed by individuals committed to open science . in more recent years , patents have begun to issue on bioinformatic methods relevant to genomics .
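to give a flavor of the kind of method at stake , here is a minimal python sketch of the smith - waterman idea , local alignment scoring by dynamic programming ( purely illustrative , with arbitrary match / mismatch / gap scores , not any particular published or patented implementation ) :

    # minimal smith-waterman local alignment score (simple linear gap penalty)
    def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-2):
        rows, cols = len(a) + 1, len(b) + 1
        h = [[0] * cols for _ in range(rows)]        # dp matrix, initialized to zero
        best = 0
        for i in range(1, rows):
            for j in range(1, cols):
                diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                h[i][j] = max(0,                      # local alignment never drops below zero
                              diag,                   # align a[i-1] with b[j-1]
                              h[i - 1][j] + gap,      # gap in b
                              h[i][j - 1] + gap)      # gap in a
                best = max(best, h[i][j])
        return best

    # toy usage on two short made-up dna fragments
    print(smith_waterman_score("ACACACTA", "AGCACACA"))

blast , by contrast , gains speed heuristically by seeding alignments on short exact word matches rather than filling the full dynamic - programming matrix ; both approaches were published openly and became everyday research tools .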
in some cases , these bioinformatics patents confer incentives to support products marketed by firms , with service teams and development teams to improve their quality . how this story will play out remains to be seen , but the ideas of open genomics are being tested in the real world alongside more proprietary models . the early years of the human genome project were marked by many decisions about the disposition of crucial databases . human genetic disease and variation were lovingly cataloged by a team surrounding its founder , victor mckusick of johns hopkins university , in online mendelian inheritance in man ( omim ) . many databases were established to retain data on human genetic maps of various types , and similar databases were established for other organisms . dna sequence data were collected primarily by a trio of databases in the united states , europe , and japan , and these shared data among themselves . there was , in effect , just one major , central dna sequence database beginning in the early 1980s . creating and coordinating these databases , including the sequence databases , was its own titanic struggle ( smith , 1990 ) , but the battle was waged with only glancing concern for commercial potential . the databases contain many errors ( pennisi , 1999 ) , and creating financial incentives sufficient to encourage careful curation and maintenance is one reason to support proprietary rights in making databases . but that step should not be taken lightly , and now we have a decade - long experiment in the real world to inform such decisions , with strong protection in europe and only copyright and contractual protections for databases in the united states . how different it might have been had the genome project begun in europe just a decade later , when the european community saw fit to create a new exclusive right in databases as an incentive for companies to create and maintain valuable data . the impacts of this new form of intellectual property have received particular attention from the scientific community . the landmark report on the topic was the bits of power report from the national research council ( 1997 ) , which has led to a line of further work . much of the most advanced work has focused on weather , remote imaging , and other huge and complex data sets . there may be cause for worry , and not just for scientists , but for the innovation system as a whole . it may be that free access to data generated at government and nonprofit expense is far more efficient , and a more powerful primer for the economic engine , than allowing every incremental advance to form the basis for rent - seeking . even in the patent - happy united states , which keeps moving toward ever - longer copyright terms , data generated at government expense and published by the government can not be copyrighted , and are thus freely available to anyone who wants to use them . it turns out that when it comes to data about the weather , it is the europeans who play scrooge , charging for access . and yet us businesses that provide weather information to various kinds of users have flourished , and the us market for such information is vastly larger than in europe , despite the nearly equal size of the economies of the european union and the united states .
an analysis by peter weiss of the national weather service concludes that the primary reason the european weather risk management and commercial meteorology markets lag so far behind the us is the restrictive data policies of a number of european national meteorological services ( weiss , 2002 ) . given that genomic databases and most health research databases are publicly administered and governed by strong norms of open sharing , concern over database protections could prove a sideshow . perhaps it is silly to think that dna sequence data might have been housed in a proprietary database owned by reed elsevier , springer , or thomson . but some databases do straddle nonprofit and for - profit worlds , and if a strong us database right were created , the rules of the game could change . swissprot , a database with information about proteins of interest in molecular biology , has been the subject of dispute , both about how to fund it , and about its pricing and access policies , driven by efforts to ensure its long - term financial survival . the analogies between weather and dna sequence data are not exact , but careful thinking about policies bearing on health research data , including genomic data , is crucial , because the creation of a us database right similar to the european counterpart remains a distinct possibility . the various genome projects , both public and private , pursued quite disparate policies about sharing of data and materials . proprietary technologies and data were created , mainly by private startup firms , and they contributed to the pace and success of the human genome project . deliberate policies of funding organizations , especially the wellcome trust and the national human genome research institute , and other funders of the public genome project , created and preserved a large and important science commons of genomic data and technologies for analyzing dna structure and function . agreements such as the bermuda rules , privately funded initiatives such as the merck gene index , and public - private hybrids such as the snp consortium were deliberately designed to promote broad access to data and materials . genome projects spanned a full range of openness , from rapid open access under the bermuda rules , to subscription - based access to genomic data and analytical tools at moderate cost ( e.g. , celera ) , to highly proprietary gene sequencing with public disclosure mainly limited to patents as they were granted and published ( human genome sciences and incyte ) . the practical public information benefits from having information widely and inexpensively available , such as public health advances from new knowledge about health risk , reinforce the benefits for science , where a broad network of investigators can draw on masses of information . this history does not settle every policy question ; it is , however , a powerful argument for the need to support open science and a healthy science commons upon which both public and private science can draw . without explicit policies to foster the science commons , this valuable pool of knowledge would have been shallower , and a less productive fountain of social benefits . science is not just about creating knowledge ; it is also about making it widely available and making it useful . deliberate policies to promote open access and low - cost use enable some social benefits that profit - driven r&d can not . public genomics is , however , a creature of deliberate policies , not just to fund the science but also to ensure that the results are shared .
it is not a system that can be left to mindless self - assembly or politics as usual . without an expansive science commons , many benefits would be lost and private genomics would be vastly less productive and valuable .
the science commons , knowledge that is widely accessible at low or no cost , is a uniquely important input to scientific advance and cumulative technological innovation . it is primarily , although not exclusively , funded by government and nonprofit sources . much of it is produced at academic research centers , although some academic science is proprietary and some privately funded r&d enters the science commons . science in general aspires to mertonian norms of openness , universality , objectivity , and critical inquiry . the science commons diverges from proprietary science primarily in being open and being very broadly available . these features make the science commons particularly valuable for advancing knowledge , for training innovators who will ultimately work in both public and private sectors , and for providing a common stock of knowledge upon which all players , both public and private , can draw readily . open science plays two important roles that proprietary r&d can not : it enables practical benefits even in the absence of profitable markets for goods and services , and it lays a shared foundation for subsequent private r&d . the history of genomics in the period 1992 - 2004 , covering two periods when genomic startup firms attracted significant private r&d investment , illustrates these features of how a science commons contributes value . commercial interest in genomics was intense during this period . fierce competition between private sector and public sector genomics programs was highly visible . seemingly anomalous behavior , such as private firms funding open science , can be explained by unusual business dynamics : established firms wanted to preserve a robust science commons to prevent startup firms from limiting established firms ' freedom to operate . deliberate policies to create and protect a large science commons were pursued by nonprofit and government funders of genomics research , such as the wellcome trust and national institutes of health . these policies were crucial to keeping genomic data and research tools widely available at low cost .
Introduction Genomics: public and private science in a fishbowl Work enabled by a science commons Public and private genomics in mortal combat Applications in public health: when markets fail Public inputs to private science The science commons and economic efficiency: costs of data access Conclusion: deliberate policies preserved a healthy science commons in genomics
PMC4214178
the term epigenetics was coined some 70 years ago by sir conrad waddington , who theorized the existence of a necessary layer of molecular complexity beyond the genome that must be responsible for producing distinct and variable cellular phenotypes from a singular genome . such ideas have challenged instincts and genetic programs as useful conceptualizations of the ontogeny of behavior and , importantly , have initiated an appreciation of the multitude of complex influences on phenotypic expression throughout development . epigenetics in its current formulation is more narrowly defined as the perpetuation of genetic information from a cell to its descendants without any necessary change to the genetic code itself , and has been posited as a molecular bridge between the information contained in the genotype and what emerges as a complex and ever - modifiable phenotype . the revolution in molecular biology that began in the 1950s , following shortly after waddington 's theoretical formulation of epigenetics , has provided scientists with tools capable of characterizing and understanding the mechanisms that comprise this layer of complexity beyond the genome ( ie , the epigenome ) . dna exists in a continuum of variably compacted states controlled by the structural state of chromatin , ie , the dna and the histone proteins around which it is wrapped . alterations to the structural state of the chromatin can have profound and persistent effects on gene expression . simply put , by establishing and maintaining the structural state of the chromatin , epigenetic processes regulate the ease with which transcription factors and other proteins can access their dna substrates . for example , the amino acid tails of histone proteins are subjected to various post - translational modifications ( eg , acetylation , methylation , phosphorylation ) that render the chromatin relatively compact and transcriptionally inactive ( ie , heterochromatin ) or less compact and transcriptionally active ( ie , euchromatin ) . methylation at the 5-position of cytosine nucleotides within cpg dinucleotides is the only direct epigenetic modification of dna and is associated with transcriptional silencing . epigenetic processes have long been recognized as indispensable for appropriate embryonic and early postnatal development . more recently it has come to light that these same mechanisms that drive critical processes in development and in mitotic cells throughout the lifespan remain dynamic in neurons that , once differentiated , are incapable of mitosis . in the nervous system , then , the term describes processes that utilize the same mechanisms classically defined as epigenetic , but that clearly serve functionally distinct purposes . moreover , there is now recognition that dna methylation itself , once thought to be the most stable of epigenetic marks , may switch between methylated and unmethylated states that render stretches of the chromatin a dynamic canvas on which epigenetic and other mechanisms can work to promote forms of plasticity necessary for long - term information storage . the hypothesis that the processes underlying stable transmission of chromatin states in dividing cells remain active in neurons for the purpose of long - term information storage is gaining significant traction and interest within neuroscience , and a better understanding of these processes is necessary for a more complete conceptualization of neural plasticity and memory .
alterations in the structural state of chromatin appear to be highly conserved mechanisms underlying information storage in invertebrate ( eg , crab , honeybee ) and vertebrate ( eg , rat , mouse , human ) central nervous systems . the primary focus of the current review is to highlight the accumulating data suggesting that dynamic dna methylation and demethylation , and the enzymes responsible for methylating and demethylating dna , are critically involved in memory formation and behavioral plasticity . while the recognition that the structures of chromatin and dna are rapidly modifiable in the brain adds a significant amount of complexity to our understanding of behavioral and neuronal plasticity , it also suggests a heretofore largely untapped therapeutic potential for alleviating a wide range of neurological , and other , disorders . dna methylation plays an essential role in several developmental processes ( eg , genomic imprinting , x - chromosome inactivation ) , in the maintenance of genome stability by silencing repetitive elements , and in maintaining tissue - specific and appropriate patterns of gene expression through cell division . during embryonic and early postnatal development , coordinated waves of methylation and demethylation ensure temporally specific patterns of gene expression that act to establish and perpetuate tissue - appropriate cellular identities . once established in somatic cells , methylation patterns have traditionally been considered immutable . a seminal study in 2004 by meaney and colleagues showed that variable early postnatal levels of maternal care ( eg , nursing , grooming ) could alter dna methylation patterns in neurons and that these alterations persisted into adulthood and influenced behavioral and neural responses to stress . methylation of a cytosine nucleotide ( 5mc ) is a thermodynamically very stable modification that is endowed with robust power to influence gene expression . for example , methylation of a single site in a brain - derived neurotrophic factor ( bdnf ) exon promoter can silence the gene . transcriptional silencing is thought to occur via one of two non - mutually exclusive mechanisms : 5mc can physically restrict transcription factor and rna polymerase ii binding , or 5mc can recruit transcriptional repressor protein complexes . until recently , it was believed that 5mc occurred only in the context of cg dinucleotides ; however , new findings have demonstrated the existence of substantial levels of mch methylation , where h represents an a , t , or c . like 5mc , mch is depleted in expressed genes and inversely proportional to the level of expressed transcript . the enzymes responsible for catalyzing the transfer of a methyl group to cytosine nucleotides , using dietary sources of s - adenosyl - l - methionine as the methyl donor , are the dna methyltransferases ( dnmts ) , and they are broadly subdivided into two categories : the de novo dnmts , dnmt3a and dnmt3b , establish initial methylation patterns on unmethylated dna , and the maintenance dnmt , dnmt1 , recreates already established methylation patterns on hemimethylated replicating dna . dnmts are essential to normal development as evidenced by the embryonic or early postnatal lethality of constitutive knockout of dnmt1 or dnmt3a . dnmt1 , dnmt3a , and dnmt3b are all expressed in the postnatal developing rat brain . dnmt1 and dnmt3a are expressed in adult neurons and oligodendrocytes , and dnmt3b expression is detectable as well , although not to the extent of dnmt3a .
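to make the sequence - context distinction concrete , here is a minimal python sketch ( purely illustrative , operating on a made - up sequence rather than on data from any study cited here ) that classifies each cytosine on one strand as lying in a cg dinucleotide or in a ch context , where h is a , t , or c :

    # classify cytosines on one dna strand by the base that immediately follows them
    def cytosine_contexts(seq):
        seq = seq.upper()
        counts = {"CG": 0, "CH": 0}
        for i, base in enumerate(seq[:-1]):              # the final base has no following base
            if base == "C":
                following = seq[i + 1]
                counts["CG" if following == "G" else "CH"] += 1   # h = a, t, or c
        return counts

    # toy usage on a short made-up sequence
    print(cytosine_contexts("ATCGGCATCCGTTACG"))          # {'CG': 3, 'CH': 2}

in real bisulfite - sequencing data , an analogous per - site tally of methylated versus unmethylated reads is what yields the site - level methylation estimates discussed throughout this review .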
dnmt mrna generally reaches its highest level at around 1 week postnatally and subsequently decreases in the brain . conditional brain - specific dnmt knockouts have yielded insights into the roles of the dnmts in the central nervous system . a conditional dnmt1 knockout induced during embryonic development using the cre - lox system upregulated apoptotic genes and led to degeneration of the cortex and hippocampus , abnormal morphology of dendrites , and alterations in the resting electrophysiological properties of neurons . the conditional forebrain knockout mice survived to adulthood but , not surprisingly , evidenced severe learning and memory impairments , as they failed to show any learning curve in a spatial memory test following 13 days of training . another study that assayed neurological phenotypes in conditional dnmt1 knockout mice in which the knockout occurred at embryonic day 12 ( e12 ) reported that dnmt1 deficiency led to hypomethylation in differentiated neurons and apoptosis of neurons prior to postnatal day 21 in mosaic animals . dnmt3a null mice are essentially normal at birth , but quickly deteriorate and die in early postnatal development . these mice exhibit impaired postnatal neurogenesis accompanied by dramatic alterations in gene expression profiles in neural stem cells , with 1253 genes upregulated and 1022 downregulated , effects likely mediated by impaired polycomb repression of neurogenic genes . gene - specific dna methylation , as well as neuronal expression of dnmt enzymes , fluctuates as a result of experiences ranging from intake of drugs of abuse to associative and nonassociative learning experiences , cellular insults , and prenatal stressors . furthermore , in several brain pathologies the expression of the dnmts , as well as methylation of specific gene promoters , appears aberrant . endres et al reported that an increase in dna methylation was associated with more robust brain lesions following induction of stroke using the middle cerebral artery occlusion ( mcao ) model . mice treated with the nonspecific dnmt inhibitor 5-aza-2'-deoxycytidine ( 5aza ) , or mice heterozygous for dnmt , evidenced a reduction in neuronal damage following mcao . a more recent study in gerbils determined that 5 minutes of ischemia induced by bilateral common carotid artery occlusion ( 2vo ) significantly upregulated dnmt1 expression specifically in hippocampal ca1 gabaergic neurons as well as in astrocytes 4 days after the occlusion . ninety days of chronic brain hypoperfusion induced by 2vo promoted a decrease in global dna methylation accompanied by a decrease in expression of dnmt3a in parietal lobe cortex with no change in dnmt1 expression . these findings suggest that acute vs chronic ischemic insults may differentially affect dna methylation and expression of de novo and maintenance dnmts in adult brain , and that targeted dnmt inhibition may offer some therapeutic potential in restricting or preventing brain damage caused by a cerebrovascular accident . temporal lobe epilepsy is associated with aberrant dna methylation of specific genes as well as increased expression patterns of dnmt isoforms in postmortem tissue from human epileptics and in animal models of the disorder . studies using postmortem tissue from patients with intractable epilepsy demonstrate that dnmt1 and dnmt3a protein expression are robustly increased in hippocampal tissue from epileptics , and hypermethylation of the reelin gene promoter , presumably mediated by a dnmt , has been reported .
using a rat model , parrish et al recently found that kainic acid - induced epileptic activity in the hippocampus led to increased global dna methylation in the ca1 and ca3 and decreased methylation in the dentate gyrus 6 weeks after kainate treatment . interestingly , distinct patterns of dnmt1 and dnmt3a expression were observed in hippocampal subregions immediately after ( 1 hour ) and 6 weeks after induction of epilepsy . decreased expression of dnmt3a persisted 6 weeks after induced seizure , whereas immediate decreases in dnmt1 expression normalized by 6 weeks . importantly , intrahippocampal treatment with the nonspecific dnmt inhibitor zebularine decreased the latency to seizure onset and prevented the changes in global methylation and promoter methylation of grin2b / nr2b , a gene known to play a role in epilepsy . collectively , the results from ischemic and epileptic models suggest that wide - ranging insults can influence dna methylation in brain . in the case of epilepsy it is not yet clear if the changes in expression are involved in the etiology of epilepsy or result from the pathology . these findings nevertheless demonstrate that genomic methylation is indeed plastic and likely plays a key role in neurological disorders and neurological responses to insult . prenatal stressors as well as stress paradigms administered in adulthood have repeatedly been shown to influence neuronal dna methylation . indeed , differential methylation of the corticotropin - releasing factor gene promoter follows maternal deprivation stress , prenatal stress , and chronic mild stress . variation in the diet of mice during gestation or later in development can alter the methylation status of dna in a persistent fashion . an increase in dietary l - methionine or treatment with the nonspecific histone deacetylase inhibitor trichostatin a ( tsa ) reverses the effects of poor maternal care behavior on dna methylation , hypothalamic - pituitary - adrenal axis function , and behavioral responses to stress , providing further evidence that dna methylation marks in neurons are modifiable . exposure to a cat increases bdnf gene methylation in the dorsal hippocampus in rats , while simultaneously favoring a decrease in methylation of the bdnf gene in the ventral hippocampus , with no change in bdnf methylation in the basolateral amygdala or the prefrontal cortex . the functional significance of bidirectional methylation of the same gene in different brain regions , or how this is mediated , is not yet clear . prenatal exposure to the environmental toxin methyl mercury hypermethylates the bdnf promoter region in the hippocampal dentate gyrus , with an accompanying hypoacetylation of histone h3 and a decrease in bdnf mrna expression . these epigenetic alterations are associated with increased depressive - like behavior in adulthood , as assessed by the forced swim test . interestingly , bdnf expression in the hippocampal dentate gyrus is necessary for antidepressant efficacy in the forced swim test , thus suggesting a mechanism whereby various stressors experienced in the intrauterine environment may relate to deleterious behavioral phenotypes in adult animals . tian et al reported that conditioned place preference to cocaine led to global methylation in the prefrontal cortex , and that methionine supplementation prevented the establishment of conditioned place preference for cocaine , but not morphine or food reward .
bodetto et al have confirmed a role for dna methylation in mediating the rewarding effects of cocaine . in their study , cocaine increased dna methylation at the promoter of the protein phosphatase 1 ( pp1 ) gene , a memory suppressor gene , and increased expression of dnmt3a . dnmt3a overexpression specifically in the nucleus accumbens , a brain region often implicated in behavioral responses to drugs of abuse , has been shown to attenuate cocaine reward and increase dendritic spine density of thin spines in nucleus accumbens neurons . by contrast , nucleus accumbens - specific knockout of dnmt3a potentiated conditioned place preference for cocaine . acute vs chronic cocaine use has opposite effects on dnmt3a expression in the nucleus accumbens : acute treatment increases , whereas chronic treatment decreases , dnmt3a expression in the accumbens . administration of the dnmt inhibitor rg108 blocks cocaine 's effect on spine density in the nucleus accumbens and enhances conditioned place preference for cocaine . ethanol exposure during all 3 trimesters of embryonic development similarly has been shown to upregulate the expression of dnmt3a as well as dnmt1 and the methyl - binding protein methyl - cpg - binding protein 2 ( mecp2 ) in the hippocampus . importantly , cellular insults , prenatal stressors , and exposure to aversive stimuli are not the only experiences that drive changes in dna methylation or dnmt expression . single running wheel exercise sessions or week - long access to a running wheel , known to be a rewarding activity in rodents , demethylate the bdnf exon iv promoter , increase bdnf mrna and protein in the hippocampus of sprague - dawley rats , and can elevate levels of phosphorylated mecp2 ; mecp2 ordinarily binds methylated dna and helps silence the associated gene , and phosphorylation of mecp2 can lead to its dissociation from chromatin , which may favor transcriptional activation of bdnf . single exercise sessions decreased dnmt3b and dnmt1 in hippocampus in young , but not old , rats . interestingly , long - term exercise is associated with improved learning in rodents and humans , and with enhanced hippocampal plasticity in rodent models , an effect that may be related to the increases in bdnf . in summary , highly variable and biologically relevant environmental experiences appear to alter the methylation state of specific regions of the genome and promote increases or decreases in the associated mrna and protein . long thought to be a paragon of biological stability , gene methylation , at least in the brain , may be a rather dynamic process that is altered as a result of environmental input . broadly speaking , epigenetic processes have been implicated in behavioral adaptations that rely on associative and nonassociative learning processes as well as in the subsequent storage of putative memory traces in the central nervous system . the notion that long - term memories are encoded in the methylation state of the dna was initially proposed by griffith and mahler in a theoretical paper published in nature ( the dna ticketing theory of memory ) , in 1969 . they proposed that the physical basis of memory could lie in the enzymatic modification of the dna of nerve cells . the events in the nervous system that are necessary for the formation of long - term memories are complex and not completely understood .
memory formation requires the orchestration of precise and temporally coordinated changes in gene expression , transcription factor activation and inactivation , and bidirectional changes in the expression and activity of chromatin- and dna - modifying enzymes . these molecular events coalesce to produce de novo changes in the synaptic strength and connectivity within specific circuitry underlying the formation and long - term storage of new information . moreover , the specific neural pattern responsible for the storage of the memory is retrievable in the presence or absence of the stimuli that promoted its formation . importantly , the memory trace must be self - perpetuating , must persist in spite of the continual turnover of molecules involved in its genesis , and can potentially last the lifetime of an organism . the idea that dynamic changes in dna methylation are necessary for long - term memory formation was first given empirical support by sweatt and colleagues . initially the sweatt laboratory reported that treatment with nonspecific dnmt inhibitors impaired the formation of contextual fear associations . a follow - up study found that experience in associative fear learning to context , a paradigm in which an animal is exposed to contiguous presentations of a novel and initially innocuous environmental context paired with an aversive footshock , could rapidly ( ie , within 30 minutes ) increase the methylation of the memory suppressor gene protein phosphatase 1 ( pp1 ) , while concurrently demethylating the promoter region of the plasticity - related gene reelin . experience in fear learning upregulated the expression of dnmt3a and dnmt3b in areas of the brain necessary for learning ( eg , hippocampus ) , with no effect on dnmt1 . moreover , nonspecific inhibition of dnmts impaired memory and prevented the methylation of pp1 , while enhancing the demethylation of reelin . following training , the methylation of reelin and pp1 returned to normal within 24 hours , leading the authors to conclude that dna methylation , while critical for memory formation , is not likely to be a mechanism of long - term storage , at least in the hippocampus . further studies have expanded on these initial findings and implicated methylation of the bdnf gene in associative fear learning . experience in a fear learning paradigm demethylates the bdnf exon iii and exon iv promoters in the hippocampus , and these effects are blocked by application of the nmda receptor antagonist mk801 , indicating that they are activity - driven . in accordance with the aforementioned miller and sweatt study , the effects on methylation of bdnf in the hippocampus were relatively transient and observable at 30 minutes and 24 hours after training . although methylation changes in the hippocampus are relatively transient , methylation of the memory suppressor calcineurin is increased in the prefrontal cortex 7 days after fear conditioning , and this hypermethylation migrates to the anterior cingulate cortex as long as 1 month following initial training . infusion of dnmt inhibitors directly into the anterior cingulate cortex blocks memory retrieval 30 days after training . these data are consistent with the known roles of these brain regions in memory formation vs storage , and suggest that transient and long - lasting methylation / demethylation in distinct brain circuits is important in establishing long - term memory of fearful stimuli . 
day et al have demonstrated that the importance of dna methylation / demethylation in memory formation is not restricted to aversive events ( eg , fear conditioning , morris water maze ) . using a cued - sucrose delivery associative reward learning paradigm , the authors found that experience in the learning task increased the expression of the immediate early genes erg1 and c - fos , which were demethylated following learning . in a neuronal culture preparation , kcl depolarization did not change dnmt3a or dnmt3b expression ; however , it did increase dnmt3a binding at the genomic sites ( erg1 and c - fos ) that underwent de novo methylation in the in vivo reward - learning experiments . dnmt inhibition before kcl treatment prevented depolarization - induced changes in dna methylation , and pharmacological inhibition of dnmts in the ventral tegmental area in vivo blocked reward learning without influencing motivation in general . interestingly , treatment with dnmt inhibitors can prevent the memory - enhancing effects induced by other compounds , as dnmt inhibition has been shown to prevent estrogen - induced improvements in memory . estrogen treatments increase the hippocampal expression of dnmt3a and dnmt3b , but not dnmt1 . using mice with conditional forebrain - specific double knockout of dnmt1 and dnmt3a in neurons , feng et al reported learning and ltp deficits that were not apparent with a single knockout of either dnmt1 or dnmt3a . therefore , although experience in associative learning tasks and other stimuli appear to differentially affect de novo vs maintenance dnmt expression , it seems that each of these distinct isoforms can compensate for the lack of the other . the culmination of the molecular events that promote long - term memory formation leads to structural changes at synapses in brain - region specific circuits that underlie learning and memory . therefore , it is not surprising that epigenetic modifications of chromatin have been implicated in regulating the forms of synapse plasticity believed to establish memory . initial studies implicating epigenetic processes in synaptic plasticity focused on histone modifications such as acetylation and deacetylation . subsequently , manipulation of dnmts in dissociated neuronal cultures and in acute brain slice experiments has implicated the process underlying addition or removal of 5mc in regulating basal neuronal function as well as plasticity within brain circuits . levenson et al found that treatment of a hippocampal slice preparation with the dnmt inhibitors 5aza or zebularine impaired the magnitude of long - term potentiation ( ltp ) at the schaffer collateral - ca1 pathway . ltp , widely believed to be a cellular correlate of learning , is typically induced via electrical stimulation of brain slices with robust high - frequency stimuli , which causes a demonstrable increase in synaptic responses subsequent to the high - frequency induction protocol . theta - burst stimulation , a physiologically relevant means of inducing ltp , led to robust and enduring potentiation ( 3 hours ) in vehicle - treated slices ; however , treatment with either dnmt inhibitor impaired ltp magnitude and maintenance . in a follow - up study , it was shown that dnmt inhibitor - induced ltp deficits were rescued by slice application of the histone deacetylase inhibitor sodium butyrate , suggesting a crosstalk between dna methylation and histone acetylation in the regulation of hippocampal synaptic plasticity . 
in accordance with the observation that nonspecific dnmt inhibition impairs ltp magnitude and maintenance , feng et al have shown that forebrain - specific conditional double knockout of both dnmt1 and dnmt3a led to similar ltp impairments . somewhat surprisingly , no effects on synaptic function were observed in dnmt1 or dnmt3a single knockouts . depolarization of hippocampal neuron cultures with 50 mm kcl downregulates dnmt1 and dnmt3a expression , an effect that is prevented by the sodium - channel blockers tetrodotoxin and veratridine . kcl - induced increases in neuronal activity demethylate the regulatory region of the bdnf exon iv promoter , and promote the dissociation of a corepressor complex composed of mecp2 , histone deacetylases , and sin3a from the bdnf promoter . bdnf has been broadly implicated in neuronal viability , synaptic plasticity , synaptogenesis , and memory , suggesting important activity - dependent functional consequences of bdnf demethylation . although basal synapse function is normal in acute hippocampal slices treated with dnmt inhibitors , dnmt inhibitors administered to dissociated hippocampal neuron cultures decrease the frequency of spontaneous miniature excitatory postsynaptic currents ( mepscs ) and demethylate the bdnf i promoter . these effects are activity - dependent , as the nmda receptor antagonist ap5 prevented bdnf promoter demethylation . although most studies assessing the role of histone modifications and dna methylation in ltp have examined the hippocampus , sui et al found that high frequency stimulation - induced ltp led to demethylation of the reelin and bdnf gene promoters in the prefrontal cortex and increased acetylation of histone 3 and histone 4 , marks of active transcription . dnmt inhibitor treatment impaired ltp in prefrontal cortex and prevented the alterations in histone acetylation . in future experiments it will be interesting to determine if the electrophysiological correlates of memory formation ( eg , ltp ) migrate from brain regions necessary for memory formation ( eg , hippocampus ) to areas more involved in memory storage ( eg , anterior cingulate cortex , prefrontal cortex ) and if such changes require dna methylation status updates within this geography . any role for rapid dna methylation / demethylation in memory formation necessitates the existence of an active mechanism for removing 5mc from specific genes involved in plasticity and memory . in other words , there must exist a switch on specific cytosine nucleotides that is responsive to environmental contingencies that promote associative and nonassociative forms of learning . among the strongest candidates as molecular agents of demethylation are the tet proteins ; tet1 , tet2 , and tet3 are known to be bona fide mediators of 5mc demethylation in plants , and in mammalian tissue exhibit a strong preference for cpg - rich motifs . the pathway responsible for conversion of 5mc to cytosine is thought to involve successive oxidation of 5mc to 5-hydroxymethylcytosine ( 5hmc ) to 5-formylcytosine ( 5fc ) to 5-carboxylcytosine ( 5cac ) . the presence of 5hmc in the brain is significantly diminished when tet proteins are inhibited . all three members of the tet protein family are capable of converting 5mc to 5hmc as well as subsequent oxidation of 5hmc to 5fc and 5cac . the modified bases can then be further subjected to deamination , glycosylation , and base excision repair to result in final conversion back to a cytosine base . 
reconversion back to cytosine may require the activity of base - excision repair mechanisms . 5hmc can be deaminated to 5hmu by activation - induced deaminase ( aid ) , with subsequent removal of 5hmu by thymine dna glycosylase ( tdg ) , methyl - binding domain protein 4 ( mbd4 ) , and single strand - specific monofunctional uracil dna glycosylase 1 ( smug1 ) . thymidine glycosylase can also directly target 5fc and 5cac ; however , any in vivo role of tdg in demethylation remains unclear . interestingly , it was recently shown that dnmts may also act as demethylases capable of converting 5hmc to c , possibly by a direct interaction with tdg . methyl - binding domain protein 2 ( mbd2 ) can directly demethylate dna containing 5mc by a reaction that releases formaldehyde . guo et al have demonstrated activity - dependent demethylation of two plasticity - related genes , fibroblast growth factor 1 ( fgf1 ) and bdnf , following electrical stimuli capable of inducing epileptiform activity . in the hippocampus , mice with reduced levels of tet1 were incapable of demethylating bdnf and fgf genes following seizure - inducing stimuli . tet1 knockout mice exhibit downregulated expression of the neuronal activity - related genes npas4 , c - fos , and arc and a reduction in 5hmc levels in the hippocampus and cortex with no change in 5mc . tet1 knockouts develop normally without any observable brain abnormalities , which may suggest that tet2 or tet3 can compensate for loss of tet1 . tet1 knockouts also have impaired short - term spatial memory formation and abnormal neurogenesis in neural precursor cells , but do not appear to have robust long - term memory impairments . the knockouts surprisingly have abnormally enhanced long - term depression elicited in the hippocampus and an impaired ability to extinguish previously acquired associative memories ( ie , extinction ) . the expression of several plasticity - related genes was shown to be diminished , however , including c - fos , arc , npas4 , and erg2 in hippocampus , likely resulting from hypermethylation of those genes . tet1 knockout elevated promoter methylation of 478 genes and decreased methylation in 38 genes , with an overlap of 39 that were both hypermethylated and downregulated . by contrast , hippocampal - targeted overexpression of tet1 upregulated memory - associated genes including c - fos , arc , erg1 , homer1 , and nr4a2 and impaired contextual fear learning . interestingly , overexpression of a catalytically inactive form of tet1 also increased expression of those genes and impaired fear learning , suggesting demethylase - dependent and - independent effects on learning and gene expression . recent studies have also characterized the role of other potential demethylation factors in brain function . the growth arrest and dna damage - inducible 45 ( gadd45 ) family of enzymes may bind to and focus the enzymatic activity of cytidine deaminases and thymidine glycosylases to specific gene promoters , thereby tagging specific genes for active demethylation . ma et al suggested that neuronal activity may focus base excision repair mechanisms to cpg promoters and this may be mediated by gadd45 enzymes . leach et al reported that gadd45b knockout mice are impaired in contextual fear conditioning , whereas sultan and coworkers found improvements in contextual fear learning at 24 hours and 28 days post - training , and an enhanced late - phase ltp . 
the reasons behind the contrasting results are not yet clear ; however , in both cases the studies suggest that manipulations of putative demethylases can alter normal learning and memory . 
the brain exhibits dynamic patterns of dna methylation and dnmt expression during aging , and the transcription of key memory - related genes declines in aging . consistent with this observation is that in aged animals impairments in ltp magnitude and maintenance are observed when weak ltp induction protocols ( ie , near - threshold ) are used . siegmund et al have shown changing patterns of dna methylation at various gene loci in human brain , with a general trend for increasing methylation over the lifespan in the 50 loci examined . inappropriate methylation of the activity - related cytoskeleton - associated protein ( arc ) gene , a known actor in synaptic plasticity and memory , may play a role in age - related memory impairments . old rats have higher levels of methylation of the arc promoter than do adult rats , and the transcription of arc is reduced in the aged hippocampus after learning events relative to younger animals . in addition , inhibition of arc interferes with maintenance of ltp , likely due to its role in synaptic α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid ( ampa ) receptor trafficking . oliveira et al have demonstrated an aging - associated decrease in the expression of dnmt3a2 , one of two transcripts from the dnmt3a gene , in the hippocampus of aged mice . dnmt3a2 is structurally identical to dnmt3a1 except that it lacks 219 amino acids in the n - terminus , is associated with euchromatin , and appears to act as an immediate early gene . learning - induced activation of dnmt3a2 was shown to be impaired in aged mice , and overexpression of dnmt3a2 , which increased global dna methylation in the hippocampus , improved performance in fear learning and object location memory tasks . specific gene methylation alterations have been shown in postmortem brain tissue from alzheimer 's disease patients , and these patients exhibit an accelerated rate of age - related change in methylation . 
alzheimer's - related alterations in dna methylation may be complex and vary by brain region , as global hypomethylation has been shown in entorhinal cortex in postmortem alzheimer 's disease tissue , as well as hypermethylation in the dorsolateral prefrontal cortex . debilitating psychiatric disorders including schizophrenia , bipolar disorder , and major depressive disorder have been linked to aberrant dna methylation . mill and coworkers examined genomic dna from 125 postmortem brains of schizophrenic , bipolar , and nonpsychiatric patients and concluded that dna methylation is significantly altered in major psychiatric disorders . the reelin and gad1 promoter regions are hypermethylated in the brains of schizophrenic patients , and dnmt1 expression is upregulated . the gad1 gene may be of particular interest as it codes for glutamic acid decarboxylase , the enzyme responsible for synthesis of γ-aminobutyric acid ( gaba ) , the major inhibitory neurotransmitter in the brain . noh et al demonstrated that an antisense - driven knock - down of dnmt1 in mouse cortical neuron cultures was accompanied by an increase in reelin expression and suggested that reduced reelin and gad67 protein may be due to dnmt1-mediated hypermethylation of their promoters . treatment with the histone deacetylase inhibitors tsa or valproate can decrease gad1 promoter methylation , decrease dnmt1 expression in mouse cortex , and increase the expression of reelin and gad67 . increased reelin and gad67 expression were associated with the dissociation of mecp2 - containing corepressor complexes from their promoter regions . deficits in prepulse inhibition , that is , the ability to inhibit a startle response to an auditory stimulus when that stimulus is preceded by a smaller - magnitude auditory stimulus , are observed in human schizophrenic patients as well as in animal models of schizophrenia . our laboratory has recently found a dissociable impact of conditional forebrain knockout of dnmt1 and dnmt3a on prepulse inhibition in mice ( unpublished data ) . dnmt1 knockout mice showed an enhanced inhibition of their startle response at all prepulse stimulus magnitudes tested , effects in opposition to the impairments in prepulse inhibition observed in schizophrenia models . significant stressors experienced in the prenatal environment may predispose an individual to the development of a psychiatric disorder in adulthood . restraint stress experienced by a pregnant mouse leads to increases in dnmt1 and mecp2 binding at the reelin and gad1 promoter regions , changes that resemble those observed in postmortem samples from schizophrenic brain . polymorphisms in the dnmt3b gene were recently found to be associated with suicide attempts in depressed patients . mcgowan et al reported increased methylation of the glucocorticoid receptor gene in postmortem brain tissue from suicide victims who had experienced child abuse , findings that are consistent with the behavioral abnormalities observed in animal models of insufficient postnatal care . overexpression of dnmt3a in the nucleus accumbens has been demonstrated to induce depressive - like behavior in mice , whereas inhibition of dnmts in the nucleus accumbens has antidepressant - like effects in a chronic social defeat model as well as in the forced swim test . one hurdle in the way of a better understanding of the role of dna methylation , and chromatin modification in general , in memory is the robust and dynamic interplay between the various enzymes and proteins capable of altering the chromatin . 
5mc , by poorly understood signaling pathways , is able to recruit methyl - binding proteins and subsequently large chromatin remodeling complexes that are believed to stably mark stretches of the genome . the methyl - binding protein mecp2 recognizes single 5mc sites and is thought to further recruit transcriptional corepressor complexes . interestingly , approximately 95% of cases of rett syndrome , a neurodevelopmental mental retardation syndrome , are caused by mutations of mecp2 , with some similar phenotypes apparent in cases of mecp2 duplication syndrome , which involves a duplication of the xq28 chromosomal region harboring mecp2 . using a mouse model of mecp2 duplication syndrome , our laboratory has shown that a 50% increase in mecp2 in brain promotes motor coordination deficits , anxiety , and impairments in learning , memory , and ltp . therefore , the molecular events proceeding from dna methylation followed by binding of mecp2 appear to be of critical importance for cognitive function and synaptic plasticity . dnmt inhibition can impair memory formation as well as ltp in hippocampal slices , and these effects are reversed by treatment with nonspecific histone deacetylase inhibitors . drugs that promote the acetylation of histones may facilitate the loosening of chromatin by leading to the release of methyl - binding proteins ( eg , mecp2 ) . our laboratory has shown that pharmacological inhibition of dnmts in cultured hippocampal neurons decreases mepscs ; however , this effect is occluded in mecp2 knockout neurons . another large gap remaining in the pursuit of a more complete understanding of the role of dna methylation in memory formation is determining the mechanisms both upstream and downstream of the epigenetic alterations as well as how individual epigenetic modifiers are activated in distinct environmental and cellular contexts . for instance , how are dnmts or tet proteins directed in a sequence - specific manner ? although the factors that guide a dnmt to a specific methylated site are not known for certain , it is believed that interactions with transcription factors and other chromatin proteins play a critical role . heterochromatin is characterized by distinct epigenetic marks , eg , methylated lysines 9 ( h3k9 ) and 27 ( h3k27 ) , and dna cpg methylation , which are associated with further recruitment of methyl - binding proteins , histone deacetylases , and other proteins . these histone modifications and the enzymes that catalyze their formation are known to interact with dnmts and methyl - binding proteins . for example , the histone protein hid recruits dnmt1 and dnmt3b , and the histone methyltransferases suv39h1 and g9a recruit dnmt3a . it has previously been shown that the mir-29 family of micrornas is capable of targeting dnmt3a , dnmt3b , and the tet proteins , potentially establishing a balance between methylation and demethylation at specific genomic targets . vire et al have highlighted the importance of polycomb group proteins as links between the methylation of histones and dna methylation . trimethylation of histone 3 at lysine 9 ( h3k9 ) and h4k20 appears to be a prerequisite for subsequent methylation of a gene . following binding to methylated h3k9 , heterochromatin protein 1 ( hp1 ) can associate with dnmt3a by direct binding to its plant homeodomain ( phd ) motif . how dynamic are chromatin states and the molecular agents that confer specific states resulting from environmental experience ? 
are there distinct roles for different dnmt or tet isoforms in methylation and demethylation in specific tissues or in response to distinct signals ? in spite of the inherent complexity of the epigenome , great strides are being made to determine its role in the central nervous system . these advances are , and will continue to be , essential for understanding diverse processes including memory formation , responsiveness to stressors and neurological insults , and the etiology of psychiatric disorders .
dynamic regulation of chromatin structure in postmitotic neurons plays an important role in learning and memory . methylation of cytosine nucleotides has historically been considered the strongest and least modifiable of epigenetic marks . accumulating recent data suggest that rapid and dynamic methylation and demethylation of specific genes in the brain may play a fundamental role in learning , memory formation , and behavioral plasticity . the current review focuses on the emergence of data that support the role of dna methylation and demethylation , and their molecular mediators , in memory formation .
Introduction DNA methylation is indispensable for normal organismal development DNA methylation and DNMT expression are responsive to environmental stressors and cellular insults DNA methylation, DNMTs, and memory DNA methylation and synaptic function Determining an active demethylation process DNA methylation in disorders presenting with cognitive impairment Future questions
PMC3859170
malaria remains an important public health concern in countries where transmission occurs regularly as well as in areas where transmission has been largely controlled or eliminated . it was estimated that there are 39 million children under 5 years of age who experience 33.7 million malaria episodes and 152,000 childhood deaths from malaria each year in areas suitable for seasonal malaria chemoprevention . factors such as drug pressure , strain variation , or approaches to blood collection affect the morphological appearance of malaria species , which has created diagnostic problems that invariably have a negative effect on malaria control . with the introduction of high - cost antimalarials ( artemisinin - based therapies ) , accurate diagnostic tools for monitoring malaria elimination / eradication successes have become essential [ 3 , 4 ] . in most endemic countries malaria diagnosis depends mainly on clinical evidence , and in some cases thick film microscopy ( tfm ) and rapid diagnostic tests ( rdt ) may be used for laboratory confirmation . microscopy remains the gold standard for malaria diagnosis and it is less costly , with a threshold sensitivity of 5 to 50 parasites/µl ( depending on the microscopist 's expertise ) . the major constraints of microscopy include the requirement of considerable technical expertise and the fact that optimal blood film preparation , examination , and interpretation are time - consuming . rdt , an immunochromatographic capture procedure , was developed to improve the timeliness , sensitivity , and objectivity of malaria diagnosis through less reliance on expert microscopy . preferred targeted antigens for rdts are those which are abundant in all asexual and sexual stages of the parasite . currently the focus of rdt is on the detection of histidine - rich protein 2 ( hrp-2 ) from plasmodium falciparum and parasite - specific lactate dehydrogenase ( pldh ) or plasmodium aldolase from the parasite glycolytic pathway found in all species . however , several factors in the manufacturing process as well as environmental conditions may affect rdt performance , and these include suboptimal sensitivity at low parasite densities , inability to accurately identify parasites to the species level or quantify infection density , and a higher unit cost relative to microscopy . polymerase chain reaction ( pcr ) , another diagnostic technique , detects specific nucleic - acid sequences , and its value lies in its sensitivity , with the ability to detect five parasites or fewer per µl of blood . pcr is useful both for initial parasite diagnosis and for follow - up during drug efficacy studies . it is also useful as a sensitive standard against which other non - molecular methods can be evaluated . however , it is expensive and time - consuming , and because of the amount of resources needed to run a pcr laboratory , it is used more for research purposes . clinical diagnosis is imprecise but remains the basis for therapeutic care for the majority of febrile patients in malaria endemic areas , where laboratory support is often out of reach . clinical diagnosis , also referred to as presumptive diagnosis , is the least expensive and most commonly used method and is the basis for self - treatment in endemic countries . overlap of malaria symptoms with other tropical diseases like typhoid fever , respiratory tract infections , and viral infections impairs the specificity of presumptive diagnosis , thereby encouraging indiscriminate use of antimalarials in endemic areas . 
accuracy of clinical diagnosis varies with the level of endemicity , malaria season , and age group . therefore no single clinical algorithm can be regarded as a universal predictor . this paper reports the comparative performance of clinical diagnosis , tfm , rdt , and pcr in the diagnosis of p. falciparum malaria in nigeria . osogbo is the state capital of osun state , nigeria , and it represents a typical urban setting in nigeria . patients ( ages 4 months to 20 years ) who were clinically diagnosed for malaria at the outpatient departments of general hospital asubiaro and lautech health centre in osogbo were recruited into the study . all the patients that were clinically diagnosed were subsequently confirmed using tfm , rdt , and pcr before treatment . ethical approval was obtained from the ethical committee of osun state hospital management board , osogbo . clinical diagnosis was based on fever ( temperature ≥ 37.5 c ) and/or a history of fever . other symptoms considered for clinical diagnosis included headache , joint pains , cough , diarrhea , loss of appetite / refusal of feeds , abdominal pain , and generalized body weakness . 5 ml of blood was collected aseptically from the antecubital vein of consenting febrile patients into an edta bottle . rdt was performed on about 5 µl of blood using paracheck ( orchid biomedical system , verna , goa , india ) according to the manufacturer 's instruction . a drop of blood was used for microscopic examination of malaria parasites using the thick film method , stained with 5% giemsa for 30 minutes . parasites were counted against 200 white blood cells ( wbcs ) from the thick film . the parasite density was obtained by assuming a total wbc count of 8000/µl and 4.5 million rbc/µl , and at least 200 fields were examined before a film was taken as a negative result . 10 µl of blood was dotted on whatman 3 mm filter paper and air - dried at room temperature for pcr . parasite genomic dna was extracted from blood samples collected on filter paper using the methanol extraction method as previously described . pcr was carried out using primer pairs that target the multicopy p. falciparum stevor gene . primary amplification was performed with a reaction mixture of 25 µl containing 2.5 µl of 10x reaction buffer , 5 µl of magnesium chloride , 0.75 µl of each primer ( p5 , p18 , p20 , p19 ) , 0.2 µl of dntps , 9.05 µl of water , 0.25 µl of taq polymerase , and 5 µl of dna extract . the pcr programme was as follows : 93 c for 3 minutes ; 22 cycles of 30 sec at 93 c , 50 sec at 50 c , and 30 sec at 72 c ; and a final extension period of 3 minutes at 72 c . 2.0 µl of the first pcr product was used in the second round amplification , which was performed with a reaction mixture of 25 µl containing 2.5 µl of 10x reaction buffer , 2.5 µl of magnesium chloride , 0.4 µl of dntps , 0.25 µl of taq polymerase , 1.0 µl of each primer ( p24 , p17 ) , and 15.35 µl of water . dna extracted from the fcr p. falciparum laboratory - adapted strain was used as positive control and water as negative control . pcr products were subjected to electrophoresis on 1.5% agarose gels and visualized using the syngene gel documentation system ( syngene , cambridge , uk ) after staining with ethidium bromide . the sensitivity , specificity , and predictive values of each of the three test methods were calculated by comparing each to a composite reference gold standard generated from the three methods . 
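as an illustration only ( not part of the original study protocol ) , the thick - film parasite density estimate described above can be written as a minimal python sketch ; the function name and the example count are hypothetical , while the counting against 200 wbcs and the assumed total wbc count of 8000/µl are the assumptions stated in the text .

```python
# illustrative sketch, not from the original study: estimating parasite density
# per microliter of blood from a thick-film count, using the assumptions stated
# above (parasites counted against 200 WBCs, assumed total WBC count of 8000/ul).

def parasite_density_per_ul(parasites_counted, wbcs_counted=200, assumed_wbc_per_ul=8000):
    """Return the estimated number of parasites per microliter of blood."""
    if wbcs_counted <= 0:
        raise ValueError("wbcs_counted must be positive")
    return parasites_counted * assumed_wbc_per_ul / wbcs_counted

# hypothetical example: 150 parasites seen against 200 WBCs -> 6000 parasites/ul
print(parasite_density_per_ul(150))
```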
the composite reference method was defined such that a patient was considered truly positive if positive for malaria parasites by all three methods ( tfm , rdt , and pcr ) and truly negative if negative for malaria parasites by all three methods . this gives the method 100% hypothetical sensitivity , specificity , and positive and negative predictive values . the sensitivity , specificity , and predictive values of each of the 3 methods were then calculated using the following formulas ( 1 ) : sensitivity = tp / ( tp + fn ) × 100 , specificity = tn / ( tn + fp ) × 100 , ppv = tp / ( tp + fp ) × 100 , and npv = tn / ( tn + fn ) × 100 , where tp = true positive , fp = false positive , tn = true negative , and fn = false negative . sensitivity was defined as the probability that a truly infected individual will test positive and specificity as the probability that a truly uninfected individual will test negative . we compared the diagnostic value of 3 methods ( tfm , rdt , and pcr ) for the detection of malaria parasites in nigeria . a total of 217 individuals clinically diagnosed for malaria were recruited into the study . the mean age of the patients was 8 ± 3.04 years and the mean axillary temperature was 38.2 ± 0.96 c . one hundred and six ( 48.8% ) individuals were positive for malaria by tfm , 84 ( 38.7% ) by rdt , and 125 ( 57.6% ) by pcr . there were significant differences ( p = 0.0005 ) when the prevalence obtained by the 3 methods ( tfm , rdt , and pcr ) was compared ( table 1 ) . using a composite reference ( gold standard ) method generated from the three diagnostic methods , only 71 ( 32.7% ) patients were found to be truly infected with p. falciparum and 90 ( 41.5% ) truly uninfected , while 56 ( 25.8% ) were misidentified as infected or noninfected by the three methods . when each of the 3 diagnostic methods was compared with the composite reference method , pcr had sensitivity of 97.3% , specificity of 62.5% , positive predictive value ( ppv ) of 56.8% , and negative predictive value ( npv ) of 97.8% ; microscopy had sensitivity of 77.2% , specificity of 72% , ppv of 66.9% , and npv of 81.1% , while rdt had sensitivity of 62.3% , specificity of 87.4% , ppv of 67.7% , and npv of 84.5% ( table 2 ) . correlation of rdt and pcr to parasite density observed by microscopy is shown in table 3 . out of 109 patients that were negative by microscopy , 22 and 29 were detected as positive by rdt and pcr , respectively ; out of 81 microscopy - positive patients , 47 and 73 were detected by rdt and pcr , respectively ( table 3 ) . this study provides a dataset for judging the performance of clinical diagnosis against tfm , rdt , and pcr for the detection of p. falciparum in a malaria endemic area . clinical diagnosis , for instance , is commonly used because it is cheap and allows for prompt treatment of the patient . nonspecific symptoms like fever , headache , weakness , myalgia , chills , dizziness , abdominal pain , diarrhea , nausea , vomiting , anorexia , and pruritus , and other malaria - related symptoms are used as the basis for clinical diagnosis . microscopy remains the gold standard for malaria diagnosis ; it is less expensive compared to other laboratory methods but has a low sensitivity . it requires a well - trained microscopist , and when this is not available the results will not be reproducible , sensitivity will be variable , and false - positive rates will be unacceptably high . rdts are antigen capture tests that have been shown to be capable of detecting > 100 parasites/µl ( 0.002% parasitemia ) and of giving rapid results ( 15 to 20 min ) . 
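as a worked illustration of formula ( 1 ) above ( not the study 's actual counts , which are not reported individually in the text ) , a minimal python sketch of the calculation might look as follows ; the counts in the example are hypothetical placeholders .

```python
# illustrative sketch of formula (1): sensitivity, specificity, PPV and NPV
# computed from true/false positive and negative counts against the composite
# reference standard. The example counts below are hypothetical placeholders.

def diagnostic_metrics(tp, fp, tn, fn):
    """Return sensitivity, specificity, PPV and NPV as percentages."""
    return {
        "sensitivity": 100.0 * tp / (tp + fn),
        "specificity": 100.0 * tn / (tn + fp),
        "ppv": 100.0 * tp / (tp + fp),
        "npv": 100.0 * tn / (tn + fn),
    }

# hypothetical counts for one test method
print(diagnostic_metrics(tp=40, fp=10, tn=45, fn=5))
```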
rdts are commercially available in kit form ; the procedures are easy to perform , do not require extensive training or equipment , and the results are simple to interpret . the main drawback is in their specificity , as parasite antigen can persist in the blood of the patient after parasite clearance by chemotherapy , thereby producing false positives . the value of pcr lies in its high sensitivity , with the ability to detect five parasites or fewer per µl of blood [ 15 , 17 ] ; however , it is expensive and time - consuming . our results show that the continued practice of using clinical diagnosis as the basis for antimalarial treatment in endemic areas is by far not an effective diagnostic approach in our study area . out of the 217 ( 100% ) patients that were clinically diagnosed for malaria , 104 ( 49.8% ) , 83 ( 38.2% ) , and 123 ( 56.7% ) were positive by tfm , rdt , and pcr , respectively . invariably , irrespective of the laboratory method , about half of the patients who were diagnosed as having malaria through clinical diagnosis ( the syndromic approach ) and who would have received antimalarials turned out to be parasite - negative . there is therefore an urgent need to review the clinical diagnosis procedure . it may be argued that in some cases , especially in children , prompt malaria treatment reduces the progression of simple malaria to severe malaria , which still encourages a syndromic approach to malaria diagnosis . nevertheless , malaria overdiagnosis is still a major public health problem in africa , with studies suggesting that between 50% and 99% of those prescribed antimalarials are test - negative , depending on the endemicity of the clinical setting [ 5 , 18 , 19 ] . the ability to rule out malaria can help to better diagnose and treat other causes of fever such as acute respiratory infection , typhoid fever , and meningitis , and also helps avoid exposing those without malaria to drugs while restricting antimalarial use to true test - positives . to date , our study confirmed that continued dependence on this method will lead to overdiagnosis of malaria , which will result in drug wastage and encourage antimalarial drug resistance . in this study , routine microscopic examination of giemsa - stained blood smears , which is considered the gold standard for malaria diagnosis , had a sensitivity of 77.2% and was able to detect more parasites than the rdt ( sensitivity 62.3% ) . although the specificity of microscopy ( 72% ) was not as high as that of rdt ( 87.4% ) , its higher sensitivity , the possibility of quantifying parasitemia , and its easy handling are good advantages . detection of parasites depends on many factors , including the amount of blood processed and the competence of the microscopist , among others . also , the information obtained by microscopy is limited when parasite levels are very low or when parasite morphology is altered . the development of rapid diagnostic assays has attempted to address some of these shortcomings of microscopy . rdts have the potential to improve the accuracy and time needed for malaria diagnosis , particularly for laboratories in low or nonendemic countries , where expertise with microscopy may be limited . major advantages of rdts include the fact that they can be performed close to home in settings with no sophisticated infrastructure , and they do not require much skill , although some level of training is needed for rdts to be used properly . 
different pcr - based methods have consistently been shown to be powerful tools for malaria diagnosis , with better sensitivity than conventional microscopy and antigen - based diagnostic tests [ 18 , 22 ] . most positive cases were detected by the stevor pcr in this study , and this method has been reported to be at least 100-fold more sensitive than other pcr assays [ 15 , 23 ] . generally , pcr has proven to be a sensitive method for diagnosis of all four species of human malaria parasites . the detection of < 5 parasites/µl and identification to the species level make this an excellent technique against which to compare the sensitivity and specificity of other nonmolecular methods . a greater percentage of the children presenting with fever at the general outpatient departments of the hospitals in our study were diagnosed with malaria ( pcr 56.7% , microscopy 49.8% , and rdt 38.2% ) . available records also show that at least 50% of the population of nigeria suffer from at least one episode of malaria each year , accounting for over 45% of all out - patient visits . the implication of this is that malaria is still a public health problem in this area . more concerted effort is needed by government and all stakeholders involved in malaria control if the goal of eradicating malaria by 2015 is to be achieved . in conclusion , our study revealed the need for a complete shift from symptom - based diagnosis to parasite - based diagnosis . this can bring significant improvement to tropical fever management , reduce drug wastage , and also help to curtail the development of malaria drug resistance . 
this study compares the performance of clinical diagnosis and three laboratory diagnostic methods ( thick film microscopy ( tfm ) , rapid diagnostic test ( rdt ) , and polymerase chain reaction ( pcr ) ) for the diagnosis of plasmodium falciparum in nigeria . using clinical criteria , 217 children were recruited into the study out of which 106 ( 48.8% ) were positive by tfm , 84 ( 38.7% ) by rdt , and 125 ( 57.6% ) by pcr . using a composite reference method generated from the three diagnostic methods , 71 ( 32.7% ) patients were found to be truly infected and 90 ( 41.5% ) truly uninfected , while 56 ( 25.8% ) were misidentified as infected or noninfected . when each of the 3 diagnostic methods was compared with the composite reference , pcr had sensitivity of 97.3% , specificity of 62.5% , positive predictive value ( ppv ) of 56.8% , and negative predictive value ( npv ) of 97.8% ; microscopy had sensitivity of 77.2% , specificity of 72% , ppv of 66.9% , and npv of 81.1% , while rdt had sensitivity of 62.3% , specificity of 87.4% , ppv of 67.7% , and npv of 84.5% . pcr test performed best among the three methods followed by tfm and rdt in that order . the result of this study shows that clinical diagnosis can not be relied upon for accurate diagnosis of p. falciparum in endemic areas .
1. Introduction 2. Methods 3. Results 4. Discussion
PMC5339655
gestational diabetes mellitus ( gdm ) is defined as impaired glucose tolerance appearing specifically during pregnancy ( 1 ) . gdm develops through excessive insulin resistance , inadequate β-cell compensation , reduced β-cell function , or any combination of these ( 2 ) . a large number of epidemiological studies have demonstrated that diabetes in pregnant women is associated with an increased risk of maternal and neonatal morbidity ( 3 ) . infants of a diabetic mother ( idms ) have been shown to be prone to the development of complex diseases , including obesity and metabolic and cardiovascular complications , during childhood and adulthood ( 4 , 5 ) . in animal models of gdm , it has been shown that idms overtly develop diabetes throughout life ( 6 , 7 ) . furthermore , population - based studies have also demonstrated that idms have an increased risk for type 2 diabetes in later childhood and as adults ( 8 ) . insulin - producing β-cells in the endocrine pancreas play a pivotal role in maintaining glucose homeostasis . adult β-cells can dynamically respond to systemic increases in insulin demand by expanding their functional mass . compensatory changes in β-cell mass are controlled by increases in cell size ( hypertrophy ) or in the number of cells ( hyperplasia ) ( 9 , 10 ) . recent findings have shown that a defect in these mechanisms is a key feature of the pathogenesis of diabetes ( 11 , 12 ) . similarly , mature idm animals show β-cell dysfunction and decreased insulin secretion in response to glucose ( 7 ) . however , the mechanisms responsible for β-cell malfunction in idms are unknown . therefore , understanding how β-cells proliferate and function is important and may lead to the development of new therapeutic strategies for diabetes . strikingly , recent studies have shown a link between cell cycle regulators and the risk of type 2 diabetes ( 13 , 14 ) . cyclin - dependent kinases ( cdks ) , a family of serine / threonine protein kinases , phosphorylate a number of substrates , such as the retinoblastoma protein ( prb ) , that are mainly implicated in cell cycle progression . subsequent phosphorylation of prb by the cdk results in the release of e2f . upon activation , e2f1 is able to turn on genes required for progression through g1 into the s phase of the cell cycle ( 15 ) . in addition to activation of the cell cycle , e2f directly contributes to insulin secretion through the regulation of kir6.2 ( also referred to as kcnj11 ) expression . it has been proven that kir6.2 channels play a pivotal role in the regulation of insulin secretion . so , the cdk4-prb - e2f1 pathway directly contributes to both the proliferation and the regulation of the insulin secretory capacity of β-cells ( 16 ) . indeed , no study has investigated the gene expression of kir6.2 and the cdk4-prb - e2f1 factors in adult offspring of diabetic rats . thus , the purpose of the present investigation was to evaluate the expression changes of these genes in pancreatic islets extracted from offspring of streptozotocin - induced mildly hyperglycemic rats . this experimental study was performed to evaluate the effect of gestational diabetes on the expression of cdk4-prb - e2f1 pathway genes in pancreatic islets of rat offspring . all animal procedures followed the guidelines set by the institutional animal care and use committee at the golestan university of medical sciences , gorgan , iran . the rats were kept in a temperature - controlled environment ( 21 ± 2 c ) on 12-hr light / dark cycles and allowed free access to standard rat chow and water . 
the vaginal plug was checked daily as a positive sign of pregnancy , and the day on which a vaginal plug was observed was considered as day 0 of pregnancy . a total of 20 dams were made diabetic by a single ip injection of freshly prepared streptozotocin ( stz ) solution ( 40 mg / kg body weight ) in sterile saline solution ( 0.85% ) on day zero of gestation ( 17 ) ; 7 dams were injected with an equivalent volume of normal saline as the control group . four days later , blood glucose levels were checked using a glucometer ( accu - chek active glucometer , roche diagnostics , germany ) . if glucose levels were between 120 - 250 mg / dl , the rats were selected and used as gdm . mildly hyperglycemic dams in the current series of experiments were ~40% , or 8 out of 20 stz - injected rats . in total , six diabetic offspring from gdm mothers at the ages of 12 and 15 weeks were selected . islets were isolated from the pancreases of diabetic and control group rats by a modification of the collagenase digestion technique ( 18 ) . this involved cannulation of the common bile duct and the sequential administration of 2 ml of digest solution containing 0.2 mg / ml liberase tl ( roche , cat # 05 401 020 001 ) and 10 pg / ml dnase ( takara ) in serum - free rpmi 1640 medium . the organ was then incubated at 37 c for 15 min and dispersed using pipetting action . at the end of the incubation , the tubes were moved to ice and 10 ml of rpmi 1640 with 10% serum was added . the islets were separated by centrifugation on a ficoll gradient ( histopaque - 1077 , sigma - aldrich 10771 ) and collected from the histopaque / media interface with a disposable 10 ml serological pipette and resuspended in serum - containing rpmi . the islets were then isolated by passage through a 100 µm cell strainer ( bd falcon ) and handpicked with a pasteur pipette using a dissecting microscope . rna was isolated from islets using the genabioscience rna extraction kit according to the manufacturer 's instructions . residual dna was digested with 10 u of rnase - free dnase ( dnase i , takara ) in the presence of 20 units of rnase inhibitor at 37 c for 20 min . after heat inactivation for 10 min at 75 c in 2 mm edta , the concentration and purity of the dnase i - treated samples were measured using a nanodrop nd-1000 spectrophotometer ( a260/a280>1.8 and a260/a230>1.6 ) . the integrity and stability of the rnas were confirmed by demonstrating the intact 28s and 18s bands on gel electrophoresis . for real - time rt - pcr , the cdna was synthesized from 1 µg of dnase i - treated total rna using the prime script rt reagent kit ( takara ) with random hexamer and oligo dt primers following the manufacturer 's protocol . the forward and reverse pcr primers for the 7 genes were designed in accordance with the real - time pcr conditions using perlprimer software ( bio - rad , usa ) , and the sequences are listed in table 1 . for each gene , the cdna was amplified with specific primers using a taq polymerase kit ( takara ) , and the correct product was confirmed by gel electrophoresis . ( table 1 : real - time pcr primer names , sequences , sizes , genbank accession numbers , and pcr conditions . ) real - time rt - pcr was performed using the sybr - green pcr master mix kit ( takara ) in a thermo cycler ( abi , 7300 ) . the cycling conditions were 95 c for 30 sec followed by 40 cycles at 95 c for 5 sec , 55 c for 30 sec , and 72 c for 1 min . we used rat β-actin as the internal control and non - diabetic offspring islet cdna as the calibrator . amplification specificity was confirmed by gel electrophoresis . 
the relative expression level of mrna between the diabetic and non - diabetic samples was determined with the comparative ct ( cycle threshold ) method . first , the ct value of each target gene in control islet cdna ( n=3 ) and diabetic islet cdna ( n=3 ) was normalized to the ct value of the internal control ( Δct = ct target − ct β-actin ) . then , the Δct of the calibrator ( non - diabetic ) sample was subtracted from the Δct of the diabetic sample to give the ΔΔct , and the fold change in gene expression was calculated as 2^(−ΔΔct) . the real - time pcr data were then statistically analyzed . every real - time pcr experiment was repeated with three samples and each sample was run in duplicate . data were presented as mean ± standard deviation ( sd ) . relative target gene expression and blood glucose level were analyzed with one - way anova using spss 16.0 statistical analysis software . the differences between groups were compared using the unpaired t test , and p<0.05 was chosen as the level of significance . 
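as a minimal sketch ( not part of the original analysis pipeline ) , the comparative ct calculation described above can be expressed as follows ; the ct values in the example are hypothetical , with β-actin as the internal control and the non - diabetic sample as the calibrator .

```python
# illustrative sketch of the comparative Ct (2^-ddCt) method described above.
# Ct values below are hypothetical; beta-actin is the internal control and the
# non-diabetic (control) sample is the calibrator.

def fold_change(ct_target_diabetic, ct_actin_diabetic, ct_target_control, ct_actin_control):
    """Relative expression of a target gene by the 2^-ddCt method."""
    d_ct_diabetic = ct_target_diabetic - ct_actin_diabetic       # dCt of the diabetic sample
    d_ct_control = ct_target_control - ct_actin_control          # dCt of the calibrator
    dd_ct = d_ct_diabetic - d_ct_control
    return 2 ** (-dd_ct)

# hypothetical example: the target gene amplifies 2 cycles later in diabetic islets
print(fold_change(26.0, 18.0, 24.0, 18.0))  # 0.25, i.e. four-fold lower expression
```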
fasting blood glucose concentration was significantly increased in idm rats ( figure 1 ) . by 12 weeks of age , about 40% of the idms developed mild hyperglycemia , and at 15 weeks of age , glucose levels were markedly elevated in idms compared to controls ( p<0.001 ) . ( figure 1 : blood glucose concentrations in idms ( infants of a diabetic mother ) and control animals at week 12 and week 15 . blood glucose of offspring was obtained via the tail vein and measured with an accu - chek glucometer . values are means ± sem ; *** p<0.001 . ) we assessed whether the expression levels of the cell cycle regulator genes ( cdk4-prb - e2f1 ) were affected by gestational diabetes in adult offspring . ( figure caption : real - time pcr melting curves for e2f1 , cdk4 , kir6.2 , prb , and β-actin as internal control ( a ) ; agarose gel electrophoresis of pcr products following real - time sybr green amplification ( b ) . ) the expression levels of prb in isolated islets were not significantly different between the two groups , indicating that the expression of prb was not disturbed by gdm . but analysis of the mrna levels of cdk4 and e2f1 showed significant alterations in the diabetic group compared to controls ( figure 3 ) . on the other hand , kir6.2 mrna expression was severely reduced in idms , which followed the expected decrease in e2f1 ( figure 3 ) . ( figure 3 : mrna levels were measured using gene - specific primers ( table 1 ) and the values were normalized to β-actin . statistical significance was calculated using the t test ; * p<0.05 , ** p<0.01 , n=3 . ) 
fasting blood glucose concentration was significantly increased in idm rats ( figure 1 ) . by 12 weeks of age , about 40% of the idms had developed mild hyperglycemia , and at 15 weeks of age glucose levels were markedly elevated in idms compared to controls ( p<0.001 ) . figure 1 shows the blood glucose concentrations in idms ( infants of a diabetic mother ) and control animals at weeks 12 and 15 ; blood glucose of the offspring was obtained via the tail vein and measured with the accu - chek glucometer , and values are means ± sem ( * * * p<0.001 ) . we then assessed whether the expression levels of the cell cycle regulator genes ( cdk4 - prb - e2f1 ) were affected by gestational diabetes in adult offspring . figure 2 shows the real - time pcr melting curves for e2f1 , cdk4 , kir6.2 , prb and the β-actin internal control ( a ) , and the agarose gel electrophoresis of the pcr products following real - time sybr green amplification ( b ) . the expression levels of prb in isolated islets were not significantly different between the two groups , indicating that the expression of prb was not disturbed by gdm . however , analysis of the mrna levels of cdk4 and e2f1 showed significant alterations in the diabetic group compared to controls ( figure 3 ) . in addition , kir6.2 mrna expression was severely reduced in idms , which followed the expected decrease in e2f1 ( figure 3 ) . mrna levels were measured using gene - specific primers ( table 1 ) and the values were normalized to β-actin ; statistical significance was calculated using the t - test ( * p<0.05 , * * p<0.01 , n=3 ) . maintenance of appropriate insulin - producing β-cell growth and mass is critical for metabolic balance ( 19 , 20 ) ; thus , the molecular mechanisms by which β-cells proliferate are the focus of recent studies . the first years of life in humans and the first postnatal months of life in rodents are crucial periods of islet β-cell growth that result in the establishment of an appropriate β-cell mass ( 21 - 24 ) . fetal hyperglycemia is a consequence of maternal mild hyperglycemia that may disturb β-cell growth during the early postnatal period and cause diabetes in the offspring later in life ( 25 ) . previous studies have demonstrated that the cdk4 - prb - e2f1 pathway plays a crucial role in the control of glucose homeostasis . as a cell cycle regulatory pathway , the cdk4 - prb - e2f1 genes not only regulate β-cell proliferation , but also control the expression of genes implicated in insulin secretion , such as kir6.2 ( 15 , 16 ) . despite several studies regarding the effects of type 1 and type 2 diabetes on pancreas structure and function , there has been no investigation of the effect of gdm on the expression of cell cycle regulators in the offspring 's pancreatic islets . thus , this study was designed to investigate the effects of a hyperglycemic intrauterine environment on the expression of cell cycle regulator genes in the langerhans islets of adult diabetic offspring . consistent with some previous studies ( 6 , 7 ) , our offspring rats developed diabetes at 12 weeks of age . in this study , we observed a 51% decrease in cdk4 expression in the langerhans islets of diabetic offspring ( p<0.05 ) . many studies implicate cdk4 as a crucial factor for the successful expansion of β-cell mass in various conditions . for instance , when cell cycle arrest is altered specifically in β-cells by postnatal deletion of cdk4 , β-cell mass decreases or fails to expand ( 13 ) . also , rieck et al showed that during the peak of β-cell dna synthesis , cdk4 expression is induced in the islets of pregnant mice ( 26 ) .
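purely as an arithmetic aside , the percentage decreases quoted in this study follow directly from the 2^(-ΔΔct) fold changes : a fold change f corresponds to a decrease of ( 1 - f ) × 100% . the short python sketch below back - calculates the ΔΔct values implied by the reported figures ; the fold - change numbers are simply restatements of the percentages quoted in the text , and the calculation itself is only illustrative .

    import math

    # fold changes implied by the reported decreases (51%, 35%, 86%) and the 0.12-fold kir6.2 value
    fold_changes = {"cdk4": 0.49, "prb": 0.65, "e2f1": 0.14, "kir6.2": 0.12}
    for gene, f in fold_changes.items():
        percent_decrease = (1 - f) * 100
        implied_ddct = -math.log2(f)   # since f = 2 ** (-ΔΔct)
        print(f"{gene}: {percent_decrease:.0f}% decrease, implied ΔΔct ≈ {implied_ddct:.2f}")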
also , it has been shown that pharmacological inhibition of cdk4 activity dramatically decreases the clearance of glucose in mice treated with idcx , a specific cdk4 inhibitor , compared to non - treated mice ( 15 ) . consistent with previous studies , our observation links cdk4 to the risk of diabetes in the offspring of gdm mothers . furthermore , we showed that gdm also causes a reduction of prb and e2f1 gene expression in the pancreatic islets of diabetic offspring , by 35% and 86% ( p<0.01 ) , respectively . findings have shown that e2f1-/- mice have decreased pancreatic size and insulin secretion as the result of impaired postnatal pancreatic growth . the results have also demonstrated that e2f1 was highly expressed in non - proliferating pancreatic β-cells , suggesting that e2f1 , besides the control of β-cell number , could have a role in pancreatic β-cell function ( 27 ) . these findings provided enough evidence to propose that the cdk4 - prb - e2f1 pathway genes are critical mediators of insulin secretion ( 16 ) . regarding the down - regulation of the cdk4 - prb - e2f1 pathway genes in the pancreatic islets of idm rats , we can conclude that the induction of diabetes in the offspring of gdm is mediated by the reduction of cdk4 activity and , subsequently , of e2f1 transcriptional activity . in addition to evaluating genes involved in β-cell proliferation , we also wanted to evaluate whether gdm affects β-cell function . therefore , we analyzed the expression of kir6.2 , a key component of the atp - sensitive potassium channel involved in the regulation of glucose - induced insulin secretion in β-cells . our data indicated that gdm reduced kir6.2 gene expression to 0.12 - fold of the control group . recent in vitro and in vivo investigations have shown that e2f1 directly controls the expression of kir6.2 . several studies on the pancreas of e2f1-/- mice have shown that the expression of kir6.2 is downregulated , which causes insulin secretion defects in these animals ( 15 , 16 , 27 ) . our data are in agreement with other reports and suggest that uncontrolled gdm may cause diabetes in the offspring by repression of the cdk4 - prb - e2f1 pathway in their pancreases . our data showed that down - regulation of the cdk4 - prb - e2f1 pathway genes is associated with the development of diabetes in the offspring of gestational diabetic rats . furthermore , the decreased kir6.2 mrna expression in diabetic offspring underscores a dual effect of gdm on both the proliferation and the function of β-cells . taken together , this study may open up new ways of understanding the molecular basis of gdm and of type 2 diabetes in the offspring . however , many advances must be made to fully appreciate the exact molecular mechanism by which gestational diabetes induces diabetes in the offspring .
objective(s) : the link between a hyperglycemic intrauterine environment and the development of diabetes later in life has been observed in offspring exposed to gestational diabetes mellitus ( gdm ) , but the underlying mechanisms for this phenomenon are still not clear . reduced β-cell mass is a determinant in the development of diabetes ( type 1 and type 2 diabetes ) . some recent studies have provided evidence that the cdk4 - prb - e2f1 regulatory pathway is involved in β-cell proliferation . therefore , we postulated that gdm exposure impacts the offspring 's β-cells through disruption of the cdk4 - prb - e2f1 pathway . materials and methods : adult wistar rats were randomly allocated to control and diabetic groups . the experimental group received 40 mg / kg body weight of streptozotocin ( stz ) on day zero of gestation . after delivery , diabetic offspring of gdm mothers and controls were randomly sacrificed at the age of 15 weeks and the pancreases were harvested . langerhans islets of the diabetic and control groups were digested by the collagenase digestion technique . after rna extraction , we investigated the expression of kir6.2 and of the cdk4 - prb - e2f1 pathway genes by quantitative real - time pcr . results : gdm reduced the expression of the cdk4 - prb - e2f1 pathway genes in the langerhans islet cells of the offspring . cdk4 , prb and e2f1 were downregulated in diabetic islets by 51% , 35% and 84% , respectively . also , the expression of kir6.2 was significantly decreased in diabetic islets , by 88% . conclusion : we suggest that the effect of gestational diabetes on the offspring 's β-cells may be primarily caused by the suppression of the cdk4 - prb - e2f1 pathway .
Introduction Materials and Methods Generation of the diabetic rat model Isolation of langerhans islets RNA extraction Real-time RT-PCR analysis Data analysis Results Glucose level qRT-PCR results Discussion Conclusion Conflict of Interest
PMC4001265
apical periodontitis is an infection of the area around the root of a tooth , usually caused by bacteria . although bacterial infection can be substantially reduced by standard intracanal procedures such as intracanal medication and root canal treatment , it is very difficult to render the root canal free of bacteria . this is because bacteria are located in inaccessible areas such as deep inside the dentinal tubules and lateral canals , and it is difficult for any intracanal medication to reach these locations . moreover , bacteria may survive and re - colonize the areas around the root canal whenever there is opportunity , and this may become a primary source of persistent infection . bacteria are commonly found within the dentinal tubules of clinically infected canals . among these bacteria , enterococcus faecalis is of interest because it is the most frequently detected species in root - filled teeth with persistent lesions . some possible factors facilitating the long - term survival of e. faecalis in the root canal system are its ability to invade dentinal tubules , where it can survive for a prolonged period under adverse conditions such as starvation and the high ph of calcium hydroxide medication , and the adhesion of e. faecalis to collagen . ace ( adhesin of collagen from enterococci ) and a serine protease ( spr ) are collagen - binding proteins produced by e. faecalis . ace promotes the binding of e. faecalis to type i collagen , and in vitro ace gene expression at 37c is enhanced in the presence of collagen . in this study , the interaction of e. faecalis with root cementum and its role in persistent infection was investigated . this in vitro study was conducted in the department of conservative dentistry and endodontics . teeth sterilization ( gamma irradiation at 25 kgy ) was performed at microtol , bangalore . data were obtained using an inverted confocal laser scanning microscope ( clsm ) ( zeiss lsm 510 meta gmbh , mannheim , germany ) at the indian institute of science , bangalore . a total of 60 human single - rooted teeth recently extracted for orthodontic reasons were collected for the study . the single - rooted , caries - free teeth were examined under a microscope at 20x magnification to rule out any cracks , caries , fractures or craze lines and radiographed to confirm the presence of a single canal . teeth that had already undergone root canal treatment or teeth with more than one canal , immature root apices , root caries , restorations , fractures or craze lines , thin curved roots and calcified canals were excluded from the study . the teeth were cleaned of soft tissue , calculus and stains using sharp hand scalers and thoroughly washed under running tap water to remove any tissue remnants sticking to the tooth surface . all 60 specimens were randomly divided into three experimental groups as follows ( a small illustrative allocation sketch is given after the group descriptions ) . group i ( n = 20 ) ( control group ) : intact teeth with no access cavity preparation ; the root apex was sealed using varnish . group ii ( n = 20 ) : access opening was done to gain access to the root canal . group iii ( n = 20 ) : 1 mm of the root apex was exposed to lactic acid ( an organic acid ) at ph below 5.5 to mimic apical demineralization , and the apical root cementum was roughened using a diamond point to mimic the apical resorption seen in apical periodontitis cases , followed by access opening to gain access to the root canal .
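a minimal python sketch of such a random allocation of 60 specimens into three groups of 20 is shown below ; the tooth identifiers and the fixed random seed are assumptions made only so that the illustration is reproducible , and they do not reflect the study 's actual allocation procedure .

    import random

    random.seed(1)  # fixed seed so this illustrative allocation is reproducible
    teeth = [f"tooth_{i:02d}" for i in range(1, 61)]   # 60 extracted single-rooted teeth
    random.shuffle(teeth)
    groups = {
        "group_i_control": teeth[0:20],
        "group_ii_access_opening": teeth[20:40],
        "group_iii_acid_and_roughening": teeth[40:60],
    }
    for name, members in groups.items():
        print(name, len(members))   # each group receives 20 specimens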
for the samples in group i ( control group ) , no access preparation was done and the apical 1 - 2 mm of the teeth was sealed with three coats of varnish , followed by gamma irradiation of all the samples to eradicate any bacteria previously present . for the specimens in groups ii and iii , access opening and canal debridement were done . these teeth were then subjected to gamma irradiation , followed by inoculation of the root canal with the e. faecalis broth using a micropipette . simultaneously , the apical one - third of all the teeth samples was submerged in the broth , and the samples were incubated for 8 weeks to allow bacterial growth , with the broth refreshed on alternate days . a streptomycin - resistant strain of e. faecalis ( atcc 29212 ) was cultured in tryptone soya bean agar broth prepared by mixing 1.8 g of powder in 60 ml of distilled water . the e. faecalis strain was inoculated in the broth and placed in an incubator at 37c for 24 - 48 h to allow the bacteria to grow , and gram staining was done to confirm bacterial growth . the e. faecalis broth was inoculated into the root canals of the teeth samples with a micropipette , and the apical one - third of the teeth was submerged in the broth to mimic primary infection . after 8 weeks of culturing , the specimens of groups ii and iii were subjected to biomechanical preparation followed by obturation up to the working length ( root zx ii , j. morita , japan ) . the teeth were instrumented using the protaper ni - ti rotary instrument system in a contra - angle gear reduction handpiece ( x - smart , dentsply ) and finally obturated with gutta - percha ( single - cone technique ) using ah plus sealer . after the coronal seal , the apical one - third of all the samples was again immersed in the e. faecalis broth for 8 weeks , with alternate - day refreshment , to simulate secondary infection . after this incubation period of 8 weeks , all the samples were washed with 1 ml of phosphate buffered saline to remove non - adherent bacteria . a vertical groove was made on the buccolingual surface , running occluso - apically , on all the teeth samples using a tapered fissure diamond point ; then , with the help of a chisel , each tooth was split into two halves . after coding the teeth samples , the teeth were stained with fluorescent dyes for observation under an inverted clsm ( zeiss lsm 510 meta ) . the teeth were stained with 50 µl of fluorescein diacetate ( fda , sigma , st . louis , mo ) and 50 µl of propidium iodide ( pi , sigma ) . in viable cells , fda crosses the cell membrane and is metabolized by intracellular esterases to fluorescein ( green ) , so viable cells appear green in color . pi is a non - cell - permeable , red fluorescent dye that adheres to ruptured cell membranes , so dead bacteria appear red in color . for adhesion , we used scoring criteria of 0 ( no adhesion ) and 1 ( adhesion ) . for penetration , measured in µm , we used the depth - measuring tool of the clsm software . accordingly , the results were subjected to statistical analysis using the median test , anova and student 's t - test .
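as a minimal sketch of the statistical work - up named above ( median test for the binary adhesion scores , anova plus student 's t - test for the penetration depths ) , the python code below uses scipy ; all scores and depths shown are hypothetical placeholders chosen only to resemble the pattern of the results , and the tie - handling option and random seed are assumptions of the example .

    import numpy as np
    from scipy import stats

    # hypothetical adhesion scores (0 = no adhesion, 1 = adhesion) for 20 teeth per group
    adhesion_i = np.zeros(20, dtype=int)             # control: no adhesion observed
    adhesion_ii = np.array([1] * 11 + [0] * 9)       # roughly half the samples show adhesion
    adhesion_iii = np.ones(20, dtype=int)            # every sample shows adhesion

    chi2, p_adhesion, grand_median, table = stats.median_test(
        adhesion_i, adhesion_ii, adhesion_iii, ties="above")

    # hypothetical penetration depths in micrometres (only group iii reaches the 160 µm range)
    depth_i = np.zeros(20)
    depth_ii = np.zeros(20)
    depth_iii = np.random.default_rng(0).uniform(60.0, 160.0, 20)

    f_stat, p_anova = stats.f_oneway(depth_i, depth_ii, depth_iii)
    t_stat, p_ii_vs_iii = stats.ttest_ind(depth_iii, depth_ii)   # pairwise student's t-test
    print(p_adhesion, p_anova, p_ii_vs_iii)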
figure 1 shows live bacteria ( green ) and dead bacteria ( red ) . in group i ( control group ) , no access cavity was prepared and the apical one - third of the teeth was sealed with varnish . the results showed no adhesion [ table 1 , figure 1a ] and no penetration of e. faecalis into the root cementum in any of the samples [ table 2 , figure 2a ] . in group ii , a few samples showed adhesion , with a mean adhesion score of 0.55 [ table 1 , figure 1b ] , but no penetration was seen in any of the samples [ table 2 , figure 2b ] . this means that if an intervention like root canal treatment is done at an early stage of infection , when no apical changes like demineralization or resorption have taken place , there is less chance of e. faecalis penetration and of persistent infection or re - infection . in group iii , the apical one - third of the teeth was exposed to acid and the apical cementum was roughened to mimic apical periodontitis . in this group , all the samples showed adhesion [ table 1 , figure 1c ] and the highest penetration values , up to 160 µm deep into the root cementum [ table 2 , figure 2c ] . this means that a delay in treatment leads to changes in the apical environment , such as apical demineralization and apical resorption , which help e. faecalis to penetrate deep into the cementum , and under favorable conditions the chances of persistent infection or re - infection also increase . a comparison of the values shows a highly significant difference ( p < 0.01 ) between groups i and iii and between groups ii and iii , and a significant difference ( p < 0.05 ) between groups i and ii [ tables 1 and 2 ] . table 1 : mean and sd of adhesion for the three groups ; comparisons between groups with the median test . figure 1 : ( a ) confocal image of group i ( control group ) showing the absence of enterococcus faecalis ; ( b ) group ii ; ( c ) group iii ; red shows dead e. faecalis and green shows live e. faecalis under the confocal laser scanning microscope ( clsm ) . table 2 : mean and sd of penetration for the three groups ; f value calculated by one - way anova and comparisons between groups with student 's t - test . figure 2 : depth - measuring confocal images showing ( a ) group i ( control group ) with absence of e. faecalis , ( b ) group ii samples showing adhesion of e. faecalis up to 1 µm deep , and ( c ) group iii samples showing the presence of e. faecalis up to 160 µm deep in the root cementum under clsm . in this study , e. faecalis was chosen as the test organism because of its ability to penetrate the root dentin in vitro and because it is found in up to 90% of persistent infections . e. faecalis has unique properties such as the production of ace and spr , collagen - binding proteins which help e. faecalis to adhere strongly to mainly type i and iv collagen present in the root dentin . e. faecalis is a very virulent microorganism , which can survive at alkaline ph and during long starvation periods and then becomes viable again in the presence of serum . e. faecalis can survive long periods without any nutrient availability because it can derive nutrients from hyaluronan , which is converted by the enzyme hyaluronidase , and it can also derive energy from dentinal fluid even in a well - sealed root canal system . under stress , lipoteichoic acids protect e. faecalis against lethal conditions , while cytolysin , as-48 and bacteriocin inhibit the growth of other bacteria . cytolysin destroys cells such as erythrocytes , pmn cells and macrophages and kills gram - positive microorganisms .
this strong adhesion of e. faecalis to type i and iv collagen is the basis of the present study , because the apical one - third of the root is made of cellular cementum primarily composed of type iv collagen . adhesion is the first step in colonization , and our study confirms adhesion and invasion by e. faecalis up to 160 µm into the root cementum [ figure 2c ] . however , a previous study showed e. faecalis penetration only up to 150 µm deep into the root dentin . the deeper penetration of e. faecalis in our study is due to the changes in the apical environment , such as demineralization and apical resorption . in group i , there was no invasion or adhesion of e. faecalis , as it is the control group . in group ii , only a few samples showed adhesion , with a mean adhesion score of 0.55 [ table 1 ] , and no samples showed penetration . on the other hand , in group iii all samples showed adhesion [ table 1 ] and the highest penetration into the root cementum , up to 160 µm [ table 2 ] . this shows that if early treatment is done after primary infection , as in group ii , there is less chance of penetration of e. faecalis and of re - infection or persistent infection , whereas in group iii there was a delay in treatment at the early stages of infection , leading to changes in the apical environment such as demineralization of the root cementum and root cementum resorption . these apical changes helped e. faecalis to penetrate deep into the root cementum , thereby increasing the chances of persistent infection or re - infection . in our study , we cultured for 8 weeks twice : the first 8 weeks represented primary infection , which was followed by biomechanical preparation and obturation to reproduce a normal root canal treatment done to subside the primary infection . group ii showed only adhesion but no penetration [ figures 1b and 2b ] , whereas group iii showed deeper penetration . this means that , irrespective of primary or secondary infection , what matters most are the changes in the apical environment , such as demineralization or apical resorption , which help e. faecalis to penetrate deep into the root cementum and are the cause of re - infection or persistent infection , as confirmed by the samples in group iii . we used gamma irradiation to sterilize the teeth because it does not alter the collagen characteristics of the teeth ; e. faecalis produces collagen - binding proteins , with the help of which it adheres strongly to collagen . other methods of sterilizing teeth samples are autoclaving , using a hot air oven , etc . the disadvantage of autoclaving is that it collapses the collagen strands , and the use of a hot air oven makes the teeth dehydrated and more brittle . in our study , data were collected using a clsm as it has advantages over other methods such as histological sections , which can not distinguish between viable and dead bacteria . the scanning electron microscope has the disadvantage of requiring multiple steps for sample preparation , making it time consuming , and it also can not differentiate between dead and viable bacteria . a fluorescence probe has the disadvantage that it can not distinguish between viable and dead bacteria and also can not show the distribution of bacteria . clsm ( zeiss lsm 510 meta gmbh , mannheim , germany ) analysis has advantages over these other methods for visualizing bacteria , and our study confirms that clsm can give a clear picture of the viability and spatial distribution of bacteria .
fda is a non - fluorescent , cell - permeable dye which , in viable cells , crosses the cell membrane , is metabolized by intracellular esterases and is converted to fluorescein ( green ) . pi is a non - cell - permeable fluorescent dye that adheres to ruptured cell membranes , so dead cells appear red in color . we used an organic acid ( lactic acid ) for the demineralization of root cementum because the end product of sucrose metabolism is lactic acid . demineralization and apical resorption of the root cementum play a significant role in the penetration of e. faecalis into the root cementum . the severity of infection is directly proportional to the depth of penetration of e. faecalis into the root cementum . the progress of the disease plays a critical role in the adhesion and penetration of e. faecalis into the root cementum .
aim : the aim of this study is to address the cause of persistent infection of root cementum by enterococcus faecalis . materials and methods : a sample of 60 human single - rooted teeth was divided into three groups . group i ( control group ) had no access opening , and the apical one - third of the root cementum was sealed using varnish . group ii had access opening only , with no further preparation of the apical root cementum . in group iii , the apical root cementum was exposed to an organic acid and roughened using a diamond point to mimic apical resorption . after access opening in groups ii and iii , all teeth samples were sterilized using gamma irradiation ( 25 kgy ) . e. faecalis broth was placed in the root canal and the apical one - third of the tooth was immersed in the broth for 8 weeks with alternate - day refreshment , followed by biomechanical preparation , obturation and coronal seal . the apical one - third of all teeth samples was again immersed in the broth for 8 weeks with alternate - day refreshment to mimic secondary infection . the samples were observed under a confocal microscope after splitting the teeth into two halves . results : e. faecalis penetrated 160 µm deep into the root cementum in group iii samples and only showed adhesion in group ii samples . conclusion : the penetration and survival of e. faecalis deep inside the cementum under extreme conditions could be the reason for persistent infection .
INTRODUCTION MATERIALS AND METHODS Inclusion criteria Exclusion criteria Procedure Culturing procedure RESULTS DISCUSSION CONCLUSION
PMC3703717
major depressive disorder ( mdd ) constitutes the first leading cause of years lived with disability , and its incidence is on the rise globally . yet , until recently , little was known about its pathogenesis , as these conditions are not associated with relevant brain alterations or clear animal models for spontaneous recurrent mood episodes . the clinical phenomenology of major depression implicates brain neurotransmitter systems involved in the regulation of mood , anxiety , fear , reward processing , attention , motivation , stress responses , social interaction , and neurovegetative function . mdd is associated with blunted reactivity to both positively and negatively stimuli ; thus , the decline in hedonic responses may be related to generalized affective insensitivity , instead of deficits in the capacity to feel pleasure at the level of basic sensory experience . from the middle of the last century , a great effort has been made to elucidate the brain areas involved in emotion control and in the pathophysiology of mood disorders . animal and human studies have indicated the involvement of the limbic system including the hippocampal formation , cingulate gyrus , and anterior thalamus the amygdala and different cortical structures as well as the hypothalamus in these processes [ 5 , 6 ] . these structures are connected in two main networks : the orbital and the medial prefrontal networks . the orbital network appears to function both as a system for integration of multimodal stimuli and as a system for assessment of the value of those stimuli , and , probably , the support of abstract assessment of reward . the structures of the medial prefrontal network have been shown to contain alterations in gray matter volume , cellular elements , neurophysiological activity , receptor pharmacology , and gene expression . dysfunction within this system underlies the disturbances in emotional behavior and other cognitive aspects of the major depressive disorder . treatments for depression , involving pharmacological , neurosurgical , and deep brain stimulation methods , appear to suppress pathological activity within the components of medial prefrontal network such as the subgenual anterior cingulate cortex , ventromedial frontal cortex , striatum , and amygdala . although the causes of mdd are not yet completely known , genetic factors appear to play an important role although other factors deal with acute or chronic stress , childhood trauma , viral infections , and others [ 12 , 13 ] . regarding genetic causes , certain polymorphisms in genes related to the serotonergic system as the serotonin transporter , the brain - derived neurotrophic factor , the monoamine oxidase a , or the tryptophan hydroxylase 1 , may increase the risk for depression or the vulnerability to stress . not all the studies published to date have found gene - environment interactions ; however , the combination of both factors seems to predict more accurately a person 's risk to suffer from major depressive disorder better than genes or environment alone . the discovery that some drugs as iproniazid and imipramine exert an antidepressant effect dates back to the 1950s . in 1965 , it was shown that these drugs act through the monoaminergic system by increasing the brain levels of those monoamines . these observations led to the development of the classical monoaminergic hypothesis of depression , which proposes that low monoamine brain levels in depressed individuals are responsible for this pathology . 
the classic antidepressants that increase monoamine neurotransmitters in the synaptic cleft are generally used for first - line treatment . however , the clinical benefit of these treatments is not immediate , taking 3 - 4 weeks to obtain a full response . other therapeutic problems of currently used antidepressant drugs include relapses , drug side effects , incomplete resolution , residual symptoms , and drug resistance . traditionally , research in the neurobiology of major depressive disorder has been focused on monoamines . however , several lines of evidence have led to the conclusion that the abnormalities associated with depression go beyond monoaminergic neurotransmission ; thus , the development of better antidepressants will surely depend on the discovery and understanding of new cellular targets . in this regard , in the late 90s a new hypothesis tried to explain major depression based on molecular mechanisms of neuroplasticity . the neuroplasticity hypothesis was postulated based on several findings . first , stress decreases hippocampal neurogenesis and synaptic plasticity in the prefrontal cortex ( pfcx ) [ 19 - 22 ] . moreover , most known antidepressant therapies stimulate the proliferation of hippocampal progenitor cells , which constitutes the first stage of adult hippocampal neurogenesis ; however , the contribution of hippocampal neurogenesis to the pathogenesis of depression is far from being fully understood . second , hippocampal morphologic analyses in depressed patients reveal volume loss and gray matter alterations . while some studies suggest that decreased adult neurogenesis could be responsible for such fluctuating changes , others show that the hippocampal volumetric reductions could be due to changes in neuropil , glial number , and/or dendritic complexity , and not necessarily to a decrease in cell proliferation . third , different neuroplasticity - and proliferation - related intracellular pathways appear to be involved in the antidepressants ' action , such as brain - derived neurotrophic factor ( bdnf ) , β-catenin [ 10 , 26 ] , or the mammalian target of rapamycin ( mtor ) . dentate gyrus proliferation is decreased by stress [ 19 - 22 ] and in several animal models of depression such as unpredictable stress , chronic administration of corticosterone , olfactory bulbectomy , or maternal deprivation [ 22 , 28 - 31 ] . this loss in cell proliferation is correlated with a decreased hippocampal volume [ 32 - 34 ] . a decrease in hippocampal proliferation is also observed in other disease models such as diabetic mice , consistent with the high incidence of depression reported in individuals with that primary illness . in these animals , the reduced hippocampal proliferation is reversed by chronic antidepressant treatments . in animals subjected to acute or chronic stress , a period of at least 24 h or 3 weeks , respectively , is required to obtain a recovery of cellular proliferation . however , although all these changes have been extensively studied , major depression is not generally considered a hippocampal disorder , and it is unlikely that impaired adult hippocampal neurogenesis alone may fully explain the neuropathology of major depression . in this sense , other studies have addressed cellular proliferation in anatomical structures quite relevant to depressive disorders , such as the prefrontal cortex and amygdala , by using animal models of depression .
thus , the medial frontal cortex presents a reduction in cell proliferation , downregulation of genes implicated in cell proliferation [ 37 , 38 ] , decreased cell growth and survival , and inhibition of apoptosis . structures such as the amygdala present an opposite pattern , with an increase in neuronal dendrite length in stress models . chronic administration of antidepressants leads to an increased proliferation in the prefrontal cortex [ 36 , 40 ] , although the fate of the newly generated cells goes toward the formation of glia rather than neurons , in contrast to the hippocampus . no data are available regarding antidepressant effects on amygdalar cell proliferation ; however , this structure has been involved in the negative control of the hippocampal cell survival induced by antidepressant treatments , based on the increased cell survival observed in the hippocampus after lesion of the basolateral complex of the amygdala ( bla ) . it is interesting to note that the amygdala , which is implicated in fear - related learning that impairs hp - pfcx memory processing , shows an enhancement of ltp under stress situations , which is not reverted by antidepressants . thus , antidepressants such as tianeptine are able to restore the normal functionality of the hp and pfcx under stress situations , while the amygdala retains the ability to increase its activity in the same stress conditions . hippocampal neurogenesis could only be the most conspicuous feature of a more fundamental type of cellular plasticity , which could also govern the prefrontal cortex and other regions . it has also been proposed that , in addition to neural proliferation , changes in synaptic plasticity would also be involved in the biological basis of depression , being modulated by antidepressant treatments [ 43 , 44 ] . the pfcx is also a region sensitive to stress - induced effects , with a reduction in the number and length of spines in the apical dendrites of pyramidal cells in the medial prefrontal cortex area [ 46 , 47 ] , as well as changes in the number , morphology , metabolism , and function of glial cells , which produce changes in glutamatergic transmission , resulting in memory impairments [ 48 - 51 ] and reduced synaptic plasticity in the hp - pfcx neuronal pathway . the increase in extracellular glutamate could be one of the reasons underlying the molecular changes associated with stress . however , while the frontal cortex and hippocampus are reduced and hypofunctional in major depression , structures such as the amygdala present hypertrophy and hyperactivity . increased apoptosis has also been related to a higher risk of suffering major depression , since increased cell death in areas such as the dentate gyrus ( dg ) , ca1 , and ca4 areas of the hippocampus , the entorhinal cortex , and the subiculum has been reported in studies using human postmortem brain samples , though this phenomenon does not seem to account for the hippocampal volume reduction . animal studies also report that acute stress increases hippocampal apoptosis , while chronic stress induces no changes , or increased apoptosis in the cortex [ 54 , 55 ] or hippocampus . antidepressant treatment decreases cell death by different mechanisms , such as the activation of the expression of trophic factors ( bdnf and its receptor trkb ) , which results in increased cell survival [ 56 , 57 ] , or by directly reducing cellular apoptosis in animal stress models , as reported for fluoxetine [ 28 , 58 ] .
it has been suggested that an increase in serotonin levels mediates the raise in cell proliferation , while the depletion of this neurotransmitter does not lead to an immediate effect over the hippocampal cell division . in line with that treatments exerting a direct action over the serotonergic system include chronic but not acute administration with drugs such as tricyclic antidepressants , monoamine oxidase inhibitors ( maoi ) , serotonin - selective reuptake inhibitors ( ssri ) , serotonin and noradrenaline reuptake inhibitors ( snri ) , and 5ht4 agonists [ 8 , 19 , 20 , 6065 ] . a nonpharmacological intervention such as the silencing of the serotonin transporter ( sert ) by rnai in dorsal raphe serotonergic neurons also leads to increased hippocampal proliferation . the administration of other drugs , such as lithium in combination with antidepressants as desipramine , produces an increase in hippocampal proliferation and a decrease in apoptosis of hippocampal progenitor cells in irradiated animals . treatments with antidepressants that increase serotonin levels in brain act by targeting different progenitor cell populations . thus , chronic administration of the ssri fluoxetine or subchronic treatment with a 5-ht4 agonist increases cell proliferation and neurogenesis - targeting amplifying neural progenitors ( anps ) ( figure 1 ) while chronic electroconvulsive seizure ( ecs ) produces a fast - acting effect targeting both quiescent neural progenitors ( qnps ) and anps . an increased hippocampal proliferation as a consequence of chronic antidepressant treatment has been proven necessary for some [ 7173 ] , but not all the antidepressant - like effects in animals . the antidepressant - like effects have been related to the increased hippocampal proliferation [ 8 , 71 , 74 ] , dendritic arborization , maturation , and functional integration of newborn neurons . however , other drugs with potential antidepressant action do not mediate their effect through the activation of progenitor cells division , since the complete elimination of hippocampal proliferation by direct irradiation of this structure does not block the antidepressant response promoted by the blockade of drugs acting on other neurotransmitter systems as the corticotrophin - releasing factor receptor ( crf ) or arginine vasopressin 1b ( v1b ) receptors . classically , the modulation of different neurotransmitter systems has been implicated in the mediation of the antidepressant effects , and , for some of them , a link with proliferative or plastic changes has been reported . the traditionally involved neurotransmitter systems include the serotonergic , adrenergic , and dopaminergic ones , while others , such as the glutamatergic and cannabinoid systems and the corticotropin - releasing factor ( crf ) system implicated in the secretion of acth are acquiring increasing importance in the last years . here we will focus on the serotonergic receptors most relevant to modulating neural proliferation and synaptic plasticity processes . the partial lesion of dorsal and medial raphe nuclei , which results in a decrease of serotonergic neurons that innervate the dentate gyrus of the hippocampus and other projection areas as cortex and amygdala , decreases the proliferation in the subgranular zone of the dentate gyrus . 
several serotonin receptors have been implicated in the antidepressant - induced increase of cell proliferation in the hippocampus , together with neurite outgrowth and cell survival in cells expressing these receptors . however , other authors report a lack of changes in proliferation and/or neurotrophic factor expression after chronic treatment with the antidepressant fluoxetine , questioning the importance of the serotonergic system in hippocampal proliferation [ 133 - 135 ] . the importance of the 5-ht1a serotonergic subtype in the effects of antidepressants has been shown in in vivo studies using a 3 - day treatment with the 5-ht1a agonist 8-oh - dpat , as well as chronic administration of this drug [ 74 , 79 , 80 ] , which produce an increase in proliferation in the subgranular zone ( sgz ) of the hippocampus that depends on postsynaptic 5-ht1a receptors . in studies using hippocampal neural progenitors , the serotonin - mediated increase in proliferation also appears to involve this receptor subtype . the acute administration of 5-ht1a antagonists produces a decrease of hippocampal proliferation , or no changes after 14 days . knock - out animals for the 5-ht1a receptor subtype present no changes in basal proliferation compared to wild - type animals [ 9 , 71 ] , but do present decreased hippocampal cell survival . the 5-ht1a receptor subtype has been proven necessary for the hippocampal proliferative effect of some antidepressants such as fluoxetine , although other drugs such as imipramine , acting on other neurotransmitter systems , increase hippocampal proliferation in a 5-ht1a - independent manner ( table 1 ) . the role of the 5-ht2a / c receptors in the regulation of neurogenesis is less clear . the chronic administration of 5-ht2a antagonists such as ketanserin , and of 5-ht2c antagonists such as sb 243,213 and s32006 , produces an increase in hippocampal proliferation , while acute treatment with 5-ht2a / c agonists or antagonists produces no changes or a decrease in proliferation , respectively [ 74 , 81 ] . interestingly , subchronic treatment with ketanserin in combination with the ssri fluoxetine increases a series of synaptic plasticity markers such as β-catenin and n - cadherin in the membrane fraction , together with bdnf gene expression ; however , hippocampal proliferation is not significantly modified . the increased proliferation or synaptic plasticity parallels the antidepressant - like effect observed for the treatments with antagonists [ 83 , 137 , 138 ] , while the administration of 5-ht2a agonists counteracts the effect of ssris . the blockade of the 5-ht2a receptor subtype located in gabaergic interneurons produces the activation of hippocampal pyramidal neurons , which modulates dendritic activation and synaptic plasticity ( table 1 ) . in the last years , the 5-ht4 receptor subtype has been proven to have an outstanding role in depressive pathology . the density of this receptor subtype and its signaling cascade through camp are up - regulated in the frontal cortex and caudate - putamen of depressed humans . chronic treatments with classical antidepressants produce a desensitization of this subtype in structures such as the hippocampus [ 97 , 98 ] . in the last years , a short - term antidepressant - like response mediated by 5-ht4 partial agonists has been described , either alone [ 8 , 64 , 65 ] or when coadministered with classical antidepressants .
the antidepressant effect of 5-ht4 agonists is mediated by an increase in hippocampal proliferation in vivo [ 8 , 64 ] , together with increases in other proliferative and plasticity markers such as β-catenin , akt , bdnf [ 8 , 65 ] and phosphorylated camp response element - binding ( creb ) protein [ 8 , 63 - 65 ] . the implication of 5-ht4 in serotonin - induced hippocampal proliferation has been observed by blocking this receptor with the 5-ht4 antagonist dau 6285 in primary hippocampal progenitor cell cultures ( table 1 ) . the role of the 5-ht6 receptor subtype in depression is not clear , but tricyclic antidepressants such as amitriptyline and atypical antidepressants such as mianserin have high affinity for this serotonin receptor subtype , acting as antagonists . this receptor subtype is present postsynaptically in brain areas such as the cortex and hippocampus and is implicated in learning and memory . the data on this receptor are , to date , contradictory , since it has been reported that both antagonists and agonists exert antidepressant and anxiolytic effects alone [ 142 , 143 ] or enhance the beneficial effect when combined with antidepressant drugs . however , this effect is not mediated by increased neurogenesis but by an increase in polysialylation of the neural cell adhesion molecule ( psa - ncam ) , which may mediate memory consolidation through long - term changes in synaptic plasticity ( table 1 ) . recent studies have shown that the blockade of the 5-ht7 receptor subtype produces antidepressant - like behaviour [ 147 , 148 ] . this is supported by studies in animal depression models such as the olfactory bulbectomy , by the antidepressant - like behaviour of knock - out mice for the 5-ht7 receptor subtype , and by clinical data using the antagonist lu aa21004 . moreover , a 7 - day treatment with the 5-ht7 antagonist sb-269970 produces an increase in proliferation in the subgranular zone of the hippocampus , although changes in the number of dividing cells do not appear in 5-ht7 knock - out animals ( table 1 ) . in an attempt to explain those brain changes implicated in depression and/or the antidepressant effect that could not be accommodated by the initial monoaminergic hypothesis of depression , the so - called neurotrophic hypothesis of depression was postulated , and it was later revised into a broader neuroplasticity hypothesis . this hypothesis links the changes in depression models to a decrease of brain - derived neurotrophic factor ( bdnf ) and the antidepressant effect to an increase in bdnf in the hippocampus [ 18 , 19 , 151 , 152 ] . moreover , the decreased bdnf observed in heterozygous knock - out mice ( bdnf +/- ) is related to a depression - like phenotype . these changes in brain bdnf expression are paralleled by serum levels , so that bdnf has been proposed as a biomarker of depression , of the positive or negative response of individuals to antidepressant treatment [ 153 - 156 ] , and even as a marker of suicidal depression . however , the role of bdnf in depressive pathology is still not clear , since some authors describe a lack of changes in bdnf levels in animal stress models [ 57 , 158 - 160 ] . the infusion of bdnf in the brain [ 161 , 162 ] or , more specifically , in the hippocampus [ 163 , 164 ] produces antidepressant - like effects . moreover , within the hippocampus , the infusion of bdnf in the dg but not in the ca1 region produces an antidepressant - like effect , which is supported by the lack of antidepressant action in mice selectively knocked out for the bdnf gene in the dg and not in the ca1 .
even peripherally administered bdnf is able to display antidepressant - like actions , resembling the increased serum bdnf observed after antidepressant treatments . chronic administration of antidepressants produces an increase in hippocampal bdnf mrna expression and bdnf protein levels ( figure 2 ) [ 8 , 65 , 167 ] . the blockade of the 5-ht2a receptor reverses the stress - induced downregulation of bdnf mrna expression in the hippocampus . also , subchronic treatment with ssris and 5-ht2a antagonists is able to increase bdnf expression in the dentate gyrus of the hippocampus ; however , the protein level is not yet modified by subchronic treatments ( figure 2 ) . the main role of bdnf regarding adult neurogenesis is not linked to proliferation , but to the increase in cell survival , as described using bdnf and trkb receptor knock - out animals , which present a reduced bdnf expression [ 61 , 169 ] . bdnf is implicated in synaptic plasticity , and proteins such as neuritin that are induced by bdnf are decreased in stress - induced animal models of depression and increased after chronic antidepressant treatment , contributing to the bdnf antidepressant effect [ 170 , 171 ] . the existence of a single - nucleotide polymorphism ( snp ) in the human bdnf gene , bdnf ( val66met ) , is associated with reduced bdnf secretion and with an increased incidence of neuropsychiatric disorders [ 173 , 174 ] . in animals , bdnf ( val66met ) predisposes to a depression - like behaviour after stress situations , which recovers to normal values after the administration of antidepressants . another important trophic factor is the vascular endothelial growth factor ( vegf ) , implicated in the so - called vascular niche of adult neurogenesis . this theory proposes the need for vascular recruitment associated with active sites of neurogenesis , which are formed by proliferative cells that present an endothelial phenotype in 37% of the cases . vegf expression is reduced in the hippocampal dentate gyrus after irradiation and in stress models , although other authors do not show changes associated with stressed animal models . from studies using irradiated rats , it was proposed that the decrease of the progenitor cells responsible for the expression of vegf would underlie the decrease of this factor . some antidepressant treatments , such as electroconvulsive therapy ( ecs ) [ 178 , 181 , 182 ] , approaches with antidepressant - like effects such as exercise , or mood stabilizers such as lamotrigine , result in the upregulation of vegf expression . moreover , the local administration of this trophic factor produces an increase in hippocampal proliferation . in addition , the silencing of hippocampal vegf or the use of antagonists of its receptor flk-1 blocks its antidepressant - like effect and decreases markers of newborn neurons such as doublecortin ( dcx ) . even though these data indicate the importance of vegf brain levels in depressive disorder , preliminary reports do not show a clear correlation between peripheral vegf and depressive disorders , not allowing for the use of this molecule as a marker of depression and/or antidepressant response [ 185 , 186 ] . the activation of receptor tyrosine kinases by neurotrophic factors promotes the activation of the pi3k / akt pathway , which is linked to the wnt / β-catenin pathway through the inhibition of gsk-3 , and to the mtor pathway through the phosphorylation of the mtor protein , as discussed below . the pi3k / akt pathway per se has an outstanding role in promoting adult hippocampal proliferation and the inhibition of cell differentiation .
antidepressant treatments also produce increases in akt levels in structures such as the hippocampus [ 8 , 10 ] and frontal cortex . the upstream and downstream components of the camp signaling pathway have been extensively implicated in the pathophysiology of mood disorders as well as in the actions of antidepressant drugs . alterations in several elements of this pathway , such as g proteins ( gs or gi ) , adenylate cyclase ( ac ) , camp levels , camp - dependent protein kinase ( pka ) , and the camp response element - binding protein ( creb ) transcription factor , have been described in peripheral cells and the postmortem brain of patients with affective disorders , both untreated and after antidepressant therapy [ 11 , 100 , 189 , 190 ] . various elements along this pathway have been identified as potential targets for antidepressant drugs ( table 2 ) . in peripheral cells and postmortem brains of patients with major depression , there is a reduction of adenylyl cyclase ( ac ) activity in response to forskolin , β-adrenergic agonists [ 88 - 93 ] , and α2-adrenoceptor agonists . chronic treatment with antidepressant drugs produces an increase in camp levels in the rat hippocampus , cortex , and striatum , as well as in postmortem human frontal cortex samples from depressed patients ( figure 3(a ) ; personal observation ) . this effect has been attributed to both an enhanced coupling of gs proteins to adenylyl cyclase and an increased adenylyl cyclase activity [ 95 , 96 ] . the direct injection of camp or the inhibition of camp degradation by rolipram produces antidepressant - like effects in animals . chronic antidepressant treatment in rats desensitizes the camp response to serotonergic receptors such as the 5-ht1a receptor ( figure 3(b ) ) and the 5-ht4 receptor [ 8 , 97 , 98 ] , and increases the cb1 - mediated inhibition of adenylyl cyclase ( ac ) in the prefrontal cortex , an effect that is modulated by 5-ht1a receptors . the next step in this signaling pathway is the activation of camp - dependent protein kinase ( pka ) by camp , so that pka activity is increased after chronic antidepressant administration . active pka phosphorylates proteins such as creb , a transcription factor that regulates the expression of several genes involved in neuroplasticity , cell survival , and cognition [ 191 - 195 ] . creb has been widely involved in the pathophysiology of depression and in both the behavioural and cellular responses to antidepressant treatments [ 11 , 190 ] . hippocampal expression of creb is reduced in response to stress exposure [ 102 , 103 ] . in contrast , chronic but not acute antidepressant therapy and electroconvulsive shock ( ecs ) increase the levels of creb mrna , creb protein ( figure 4 ) , and creb activity by promoting the phosphorylation of this protein , effects that seem to be area - and drug - dependent [ 11 , 103 , 105 - 108 ] . thus , increased phosphorylated creb levels in the hippocampus are linked to antidepressant - like behaviour , as observed after viral - mediated overexpression of creb in the hippocampus in behavioural models of depression . contrary to what could be expected , creb overexpression in the nucleus accumbens produces prodepressive effects , and lowered creb in the nucleus accumbens of mice produces an antidepressant - like response . a different pattern also appears for the amygdala , in which high creb levels produce opposite effects depending on the timing .
thus , when creb overexpression is induced before learned helplessness training , there is a prodepressant effect , while the increase of creb after the training is antidepressant . studies in postmortem human brain indicate lower levels of creb protein in depressed antidepressant - free subjects , in contrast to the increased creb levels in patients taking an antidepressant at the time of death . these results parallel studies in human fibroblasts of patients with major depression and are consistent with animal studies . among the several target genes regulated by creb , two of the most relevant are the brain - derived neurotrophic factor ( bdnf ) and the vascular endothelial growth factor ( vegf ) [ 19 , 60 , 197 , 198 ] . a growing body of data shows that other signalling cascades , such as the calcium / calmodulin - dependent kinase ( camkii ) and the mitogen - activated protein ( map ) kinase cascades , can modulate creb activity through phosphorylation and may also be implicated in the mechanism of action of antidepressants [ 11 , 199 ] . initially , all the effects of a camp increase were attributed to the activation of pka / creb , but two novel targets , the camp - regulated ion channels and epac ( exchange protein directly activated by camp ) , are now known to be involved in mediating camp responses . an increase in epac-2 levels , but not epac-1 , has been found in postmortem samples of the prefrontal cortex and hippocampus of depressed subjects . the wingless - type ( wnt ) family of proteins has key roles in many fundamental processes during neurodevelopment . the role of this pathway in neural development , through the modulation of neural stem cell ( nsc ) proliferation and differentiation , has been clearly demonstrated . some of the processes regulated by wnt / β-catenin pathway activity are neural differentiation , hippocampal formation [ 203 , 204 ] , dendritic morphogenesis [ 205 , 206 ] , axon guidance [ 207 , 208 ] , and synapse formation . moreover , it also plays an important role in spatial learning and memory , including long - term potentiation ( ltp ) phenomena . in the absence of wnt signaling , β-catenin function is blocked by a destruction complex consisting of axin , apc , and the gsk-3β and ck1α kinases , which phosphorylates β-catenin for destruction in the proteasome [ 212 , 213 ] . canonical wnt signaling results in the inhibition of gsk-3 , which is constitutively active ; the non - phosphorylated β-catenin is then stabilized in the cytoplasm and translocated to the nucleus , which is essential for canonical wnt signaling . once in the nucleus , β-catenin forms a complex with the t - cell factor / lymphoid enhancer factor ( tcf / lef ) transcription factors to activate the expression of wnt target genes . tcf / lef transcription factors are bound to groucho , a protein producing repressive effects . nuclear β-catenin promotes the displacement of groucho and the binding of the histone acetylase creb - binding protein ( cbp ) , activating the transcription machinery [ 214 , 215 ] . the noncanonical , β-catenin - independent pathway is mediated through rac / rho ( wnt / pcp ) or through calcium ( wnt / ca2 + ) . in the last years , several lines of evidence have implicated the wnt signaling pathway in the pathophysiology and treatment of mood disorders and other cognitive pathologies .
gsk-3 and -catenin are regulated either directly or indirectly by lithium , valproate , antidepressants , and antipsychotics [ 8 , 10 , 113117 ] , while gsk-3 has also been identified as a target for the treatment of alzheimer 's disease ( table 2 ) . postmortem human brain samples from depressed subjects and teenage suicide victims present a dysregulation of wnt / gsk-3 signaling with a decrease in -catenin expression in prefrontal cortex . -catenin knock - out mice with 5070% decrease of -catenin expression in forebrain regions present an increased immobility time in the tail suspension test indicating a depression - like state , but not in other anxiety tests . the inhibition of gsk-3 activity , either pharmacologically [ 117119 ] , or through deletion in mouse forebrain , results in an increase in brain -catenin levels , as well as in antidepressant - like effects or decreased anxiety , as observed by the direct overexpression of -catenin in mouse brain . gsk-3 inhibition by lithium is an important regulator of cell survival related to mood stabilizers and displays antidepressant efficacy [ 115 , 118 , 215 , 217 , 218 ] . in contrast , gsk-3 knockin mice displayed increased susceptibility to stress - induced depressive - like behaviour , presenting decreased cell proliferation in the subgranular zone of the dentate gyrus , accompanied by a reduction in vegf , but not bdnf , and blunted neurogenesis in response to antidepressant treatments . these data support the importance of the wnt pathway activation and -catenin levels associated to mood disorders and their treatment . in addition , snp variation in the promoter region of gsk-3 plays a protective role in the onset of bipolar illness and increased antidepressant response . recent studies have identified the wnt / gsk-3/-catenin - signaling pathway as a key regulator of adult neurogenesis in hippocampus [ 220 , 221 ] or subventricular zone , highlighting the role of gsk-3 on neural progenitor homeostasis . wnt proteins are signaling molecules that are released from hippocampal neural stem cells ( nsc ) and astrocytes , acting autocrinally to regulate proliferation via wnt canonical pathway [ 220 , 221 ] . wnt/-catenin pathway is activated by antidepressant treatments as electroconvulsive therapy , chronic treatments with classical antidepressants as the dual serotonin - noradrenaline reuptake inhibitor ( snri ) venlafaxine ( figure 5(a ) ) , and 5-ht4 partial agonists . the antidepressant - induced -catenin increase is observed in the subgranular zone ( sgz ) of the dentate gyrus ( dg ) of the hippocampus , in membrane and nuclear fractions [ 10 , 26 ] . the increased proliferation observed in sgz after chronic antidepressant treatments is localized in cell clusters that also show a positive -catenin staining [ 8 , 10 ] . other treatments with antidepressant - like efficacy , such as the subchronic administration of ssri fluoxetine together with the 5-ht2a antagonist ketanserin , also produce a -catenin increase in the membrane fraction but not in the nuclear one , which corresponds with a lack of changes in hippocampal proliferation ( figure 5(b ) ) . 
the increase in membrane - associated -catenin is parallel to an elevation of n - cadherin protein , both members of the -catenin / n - cadherin complex present in pre- and postsynaptic terminals [ 223 , 224 ] , where -catenin recruits scaffolding proteins , conforming cell - cell adhesion complexes , recruiting synaptic vesicles [ 209 , 227 ] , and acting on the development of new synapses . this suggests a preference of modifications in synaptic plasticity instead of proliferation , as previously reported for other antidepressant treatments . in addition , frizzled receptors and gpcrs can interact through several pathways [ 228 , 229 ] . some gpcrs act through gq and/or gi proteins activating pkb ( protein kinase b)/akt which inhibits gsk-3 via phosphorylation . these receptors can also activate gs proteins that activate prostaglandin e2 ( pge2 ) , phosphoinositide 3-kinase ( pi3k ) , and pkb / akt , leading to the inhibition of gsk-3. other receptors act on gq or g12/13 proteins , activating the phospholipase c ( plc ) and protein kinase c ( pkcs ) and inhibiting gsk-3 . taken together , these data support the possible existence of interactions between the gsk-3/-catenin pathway and other neurotransmitter systems involved in depression , including serotonin . the pharmacological modulation of the different elements of the wnt/-catenin pathway with antidepressant purposes has to be clarified in the near future , probably modulating at the level of wnts or -catenin activity . interestingly , a number of patents regarding gsk-3 inhibition as the therapeutic mechanism for treatment of neuropsychiatric disorders are being launched , including treatment of depression . target of rapamycin ( tor ) genes , members of the phosphoinositol kinase - related kinase ( pikk ) family of kinases , was first described in yeast as the pharmacological targets of the microbicide rapamycin . mtor , the mammalian form of this protein , exists in two different functional multiprotein complexes within the cells , mtorc1 and mtorc2 , which are evolutionarily conserved from yeast to mammals [ 232 , 233 ] . mtorc2 is involved in cytoskeletal remodeling and in the regulation of cell survival and cell cycle progression . mtorc1 , the primary target of rapamycin , is involved in cell proliferation , cell growth and survival by protein translation , energy regulation , and autophagy in response to growth factors , mitogens , nutrients , and stress [ 235237 ] . in neurons , mtorc1 activity is regulated by phosphorylation in response to growth factors , as bdnf , mitogens , hormones , and neurotransmitters through the activation of g protein - coupled receptors ( gpcrs ) or ionotropic receptors . the mtorc1 phosphorylation is mediated by erk / mapk , pi3k , pka , and epac . the activation of mtorc1 results in the phosphorylation and activation of several downstream targets as the eukaryotic initiation factor 4e - binding protein 1 ( 4e - bp1 ) , p70 ribosomal s6 kinase ( p70s6k ) , rna helicase cofactor eif4a , extracellular signal - regulated kinase ( erk , including both erk1 and erk2 ) , or pkb / akt ; and the inhibition of the eukaryotic elongation factor 2 kinase ( eef2 ) [ 238 , 239 ] . mtor has been extensively studied related to cancer , development , metabolism , and more recently to the central nervous system ( cns ) physiology and diseases [ 238 , 240 , 241 ] . 
the mtor - signaling pathway is involved in synaptic plasticity , memory retention , neuroendocrine regulation associated with food intake and puberty , and modulation of neuronal repair following injury . the target proteins of mtor , 4e - bp1 and eukaryotic initiation factor-4e ( eif4e ) , have been detected in cell bodies and dendrites in cultured hippocampal neurons , and their distribution completely overlaps with that of postsynaptic density protein-95 ( psd-95 ) at synaptic sites , suggesting the postsynaptic localization of these proteins . the activation of mtor has been functionally linked with the local synthesis of proteins localized presynaptically , such as synapsin i , or postsynaptically , such as psd-95 and glur1 , as well as of cytoskeletal proteins such as the activity - regulated cytoskeleton - associated protein ( arc ) [ 27 , 241 , 243 ] . the mtor - signaling pathway has also been related to a number of neurological diseases , such as alzheimer 's disease , parkinson 's disease , huntington 's disease , tuberous sclerosis , neurofibromatosis , fragile x syndrome , epilepsy , brain injury , and ischemic stroke . dysfunction of mtorc1 is associated with the pathogenic mechanisms of alzheimer 's disease , and the activation of p70s6k , downstream of mtorc1 , has been identified as a contributor to hyperphosphorylated tau accumulation in neurons with neurofibrillary tangles . recent studies have also implicated mtor signaling in affective disorders , since the administration of ketamine produces a fast - acting antidepressant - like effect in animals and humans . in stressed rats , a reduction in pi3k - akt - mtor signaling has been reported . the inhibition in mpfcx of calcineurin , a serine / threonine protein phosphatase that participates in the regulation of neurotransmission , neuronal structure and plasticity , and neuronal excitability , induces a depression - like behaviour , accompanied by a decrease in mtor activity . this effect can be reversed by the activation of mtor by nmda or by chronic administration of the antidepressant venlafaxine , promoting an antidepressant - like effect . in human postmortem samples of prefrontal cortex of depressed subjects , there is a decrease in the expression of mtor , as well as of some of the downstream targets of this pathway , such as p70s6 kinase ( p70s6k ) , eif4b , and its phosphorylated form , which suggests an impairment of the mtor pathway in major depressive disorder ( mdd ) that would lead to a reduction in protein translation ( table 2 ) . the subchronic , but not acute , administration of rapamycin in rodents has an antidepressant - like effect , shown in two behavioural tests , the forced swimming and tail suspension tests . acute administration of different nmda receptor antagonists , such as ketamine , ro 25 - 6981 , and mk-801 , or of antagonists of group ii metabotropic glutamate receptors ( mglu2/3 ) , such as mgs0039 and ly341495 , produces a fast antidepressant effect [ 250 , 251 ] mediated by mtor - signaling pathway activation . ketamine rapidly activates the mammalian target of rapamycin ( mtor ) pathway , increases synaptogenesis , including increased density and function of spine synapses , in the prefrontal cortex of rats [ 27 , 243 ] , and increases hippocampal bdnf expression , which results in a rapid antidepressant - like effect in rats [ 27 , 243 ] and humans . moreover , blockade of mtor signalling by the specific inhibitor rapamycin completely blocks the ketamine - induced synaptogenesis and behavioural responses in models of depression . 
other antidepressant strategies as the electroconvulsive treatment ( ect ) also activate the mtor pathway , leading to an increase in vegf . therefore , modulation of mtor could be a novel approach to develop strategies for the treatment of affective disorders . the neurogenesis hypothesis of depression was based upon the demonstration that stress decreased adult neurogenesis in the hippocampus . this reduction in the production of newborn granule cells in the hippocampal dentate gyrus is related to the pathophysiology of depression . since then , several studies have established that newborn neurons in the dentate gyrus are required for mediating some of the beneficial effects of antidepressant treatments since the increase in cell proliferation after antidepressant treatment is only observed in the sgz and not in svz , suggesting a specificity of the antidepressants to regulate hippocampal neurogenesis . moreover , psychotropic drugs without antidepressant activity do not increase neurogenesis [ 135 , 256 ] . the disruption of hippocampal proliferation by irradiation is not sufficient to drive a depression - like phenotype . both x - irradiation and genetic manipulation approaches demonstrated a requirement of hippocampal neurogenesis in mediating some of the antidepressant treatment effects [ 71 , 76 , 257 ] , while mice exposed to x - irradiation of the svz or cerebellum responded normally to the antidepressants . however , some drugs with potential antidepressant action do not mediate their effect through the increase in hippocampal proliferation , as drugs acting on corticotrophin releasing factor receptor ( crf ) or arginine vasopressin 1b ( v1b ) receptors , as indicated previously . the appearance of the antidepressant - like effect in behavioural tests after 2 - 3 weeks parallels the time needed for the growth of newborn cells in hippocampus . however , this time course does not always take so long . for classic antidepressants as the serotonin transporter inhibitors , a chronic regime is needed to observe that increased proliferation rate [ 10 , 60 ] , while , for others as ecs and 5-ht4 agonists [ 8 , 64 ] , an acute or subacute treatment , respectively , is enough to increase proliferation . the putative role of changes in synaptic plasticity and/or neural proliferation in the depressive pathology is proposed some time ago . synaptic plasticity , as indicated for proliferation , is also modulated by antidepressant treatments [ 43 , 44 ] . the neural plasticity is not only functional but structural and is impaired in animal models . for example , there is a decrease in spine number in hippocampal ca1 and ca3 areas in bulbectomized animals that are reverted with antidepressant treatment [ 259261 ] . this structural plasticity is more striking when new neurons are born , or there is an increase in neuron survival as a consequence of antidepressant treatment or ecs . the new dendritic spines formed are associated to smaller postsynaptic densities ( psds ) and a higher frequency of miniexcitatory postsynaptic currents ( mepscs ) , suggesting an increased number of new and active glutamatergic synapses . the rapid antidepressant response to drugs as ketamine acting through the blockade of nmda receptors appears as a new target for having fast - acting effects on the treatment of mood disorders compared to the weeks or months required for standard medications . 
ketamine and other glutamate antagonists through the increase of the number and function of new spine synapses in rat prefrontal cortex by the activation of mtor do not modify hippocampal cell proliferation . it would also be critical for future work to validate the relative importance of antidepressant - induced neurogenesis and synaptic plasticity in the antidepressant effects . however , evidence is strong that neurogenesis is required for at least some of the beneficial effects of antidepressant treatment . the exact role of neuroplastic / neuroproliferative changes in other brain structures as mpfcx and amygdala should be elucidated . as indicated in this review , the importance of either proliferation or plasticity , or both , is still a matter of debate . as the involvement of proliferation and plasticity has been mainly studied in hippocampus , we might be underestimating its role in the antidepressant effect . in this sense , as the hippocampus is responsible for the learning and cognition part of the depressive disorder , the fact that the impairment of hippocampal proliferation would not block the antidepressant effect of some drugs does not necessarily conclude that the proliferation is only dependent on hippocampus . in the last years , prefrontal cortex , a structure with a great importance in mood control and working memory , is gaining increasing relevance in the plastic changes linked to antidepressant effects promoted by drugs as ketamine . in this sense , hippocampal proliferation would be only a small part of the plastic changes that are taking place within the hippocampus , and other brain areas . thus , we must not underestimate the implication of synaptic plasticity in those antidepressant treatments that are not accompanied with increased proliferation .
it is widely accepted that changes underlying depression and antidepressant - like effects involve not only alterations in the levels of neurotransmitters as monoamines and their receptors in the brain , but also structural and functional changes far beyond . during the last two decades , emerging theories are providing new explanations about the neurobiology of depression and the mechanism of action of antidepressant strategies based on cellular changes at the cns level . the neurotrophic / plasticity hypothesis of depression , proposed more than a decade ago , is now supported by multiple basic and clinical studies focused on the role of intracellular - signalling cascades that govern neural proliferation and plasticity . herein , we review the state - of - the - art of the changes in these signalling pathways which appear to underlie both depressive disorders and antidepressant actions . we will especially focus on the hippocampal cellularity and plasticity modulation by serotonin , trophic factors as brain - derived neurotrophic factor ( bdnf ) , and vascular endothelial growth factor ( vegf ) through intracellular signalling pathways camp , wnt/-catenin , and mtor . connecting the classic monoaminergic hypothesis with proliferation / neuroplasticity - related evidence is an appealing and comprehensive attempt for improving our knowledge about the neurobiological events leading to depression and associated to antidepressant therapies .
1. Introduction 2. Cell Proliferation and Plasticity Role in Mood Disorders and Antidepressant Treatments 3. Pathways Leading to Proliferation and Neural Plasticity Changes That Exert Antidepressant-Like Effect 4. A Further Step: Neuroplasticity versus Proliferation 5. Conclusion
PMC4965211
the triathlon is a relatively new sport with only three decades of competition at the international level . the triathlon involves a combination of three separate disciplines - swimming , cycling and running . the order of events is usually swimming , cycling , and running , although some professional sprint triathlons vary . the olympic distance triathlon includes swimming ( 1500 meters ) , cycling ( 40 kilometers ) and running ( 10 kilometers ) ( 1 ) . the most successful athletes at this distance have a morphology that differs from that of competitors in events such as the ironman triathlon , as those athletes are usually heavier and have a stronger build ( 2 ) . recognizing participants with potential in sport is important , and developing their skills and learning ability has its own importance . in most organizations , physiological parameters and anthropometry may guide coaches and trainers in optimizing training and identifying talent ( 3 ) . however , coaching staff remain in a dilemma when determining what somatotype , proportions , and shape are best in order to maximize sporting performance ( 4 ) . earlier , it had been established that body type could play a crucial role in determining the position of players in the game of volleyball , and sports scientists were encouraged to utilize such information while assigning playing positions within a specific program ( 5 ) . similar observations were made for the playing positions of handball players after analysis of anthropometric profiles ( 6 ) . height and body mass have been found to correlate with sprint performance in the anthropometric profiling of young teenage soccer players ( 7 ) . however , knechtle ( 8 ) noticed that no association existed between total race performance time , body weight loss , and skeletal mass . similarly , in another study , knechtle ( 9 ) found that anthropometry had very little influence on race performance in ultra - endurance triathletes in the longest triathlon . in swimming events , the length of the limbs played a more vital role than stride frequency or stroke length . on the other hand , body fat was marked negatively as dead weight that an athlete needs to carry during the whole event ( 10 ) . therefore , there is a need to identify the influence of anthropometric profiling on triathlon events , particularly on running and cycling performances . overall , anthropometric characteristics and performances are interlinked in the majority of athletic sports ( 11 ) . past studies , conducted at the international competition level , have shown good validity in predicting success from anthropometric characteristics . however , very few studies have been conducted to date on the anthropometric characteristics of new zealand junior elite triathletes , and little data exist to compare with international data . therefore , the general objective of this study was to determine the correlation between the physical traits of calf girth or sum of eight skinfolds ( anthropometry ) and the running or cycling performance of junior elite triathletes selected for the new zealand national squad . in addition , we also compared the anthropometric data from our study of new zealand junior elite triathletes with previously published data from world elite and world junior elite triathletes ( 4 ) . this was a cross - sectional study conducted at a sports camp in hamilton , new zealand . the study population comprised junior elite triathletes who were selected for the new zealand national squad . 
all junior elite triathletes who were selected for the new zealand national squad were invited to the camp . subsequently , athletes who agreed to be part of the investigation and were willing to sign informed consent were examined in the study . the standard procedure for anthropometric profiling of the international society for the advancement of kinanthropometry ( isak ) guidelines was followed for each athlete . a criterion anthropometrist ( isak level 3 accredited ) marked the key anthropometric positions on the triathletes and then directed the athletes towards three stations . the profile included skinfolds ( 8 measurements ) , lengths ( 7 measurements ) , girths ( 7 measurements ) , breadths ( 7 measurements ) , body weight , and height . all triathletes were measured by a trained anthropometrist ( isak level 2 ) whose technical error of measurement was less than that recommended by isak . performance was assessed with a 5-km running time trial and a 10-km cycling time trial , with times recorded as minutes : seconds ( m : s ) . the correlation between the anthropometric profile and the cycling and running performances was tested using the interclass correlation ( icc ) along with 90% confidence interval ( ci ) limits . all collected data were compiled in microsoft excel 2007 and analyzed using the statistical package for the social sciences ( spss inc . , chicago , il , usa ) , version 15 . participation in the study was voluntary . prior to anthropometric profile testing , written informed consent was obtained from all participants . the study included 11 junior elite triathletes ( 6 females , 5 males ; average age 17 years ) . furthermore , the overall performance data for the 5-km run on two occasions and the 10-km cycling time trial for the top 5 athletes are given in table 2 . the individual body measurements ( i.e. , skinfold and girth averages ) of the top 5 triathletes are given in table 3 . the findings of the correlation analysis between skinfold measurements and athlete performance data are given in table 4 . 
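the interclass correlation with 90% confidence interval limits described in the statistical analysis above can be illustrated with a short script . the sketch below is illustrative only : the calf girth and 5-km run time values are invented stand - ins for the 11 athletes ( the real data are in tables 2 and 3 ) , and it computes an ordinary pearson correlation with a 90% confidence interval via the fisher z - transform , which mirrors the logic of an estimate - plus - 90%-ci report but is not the authors ' spss icc procedure .

```python
# illustrative only: fabricated values standing in for the 11 athletes' data
import numpy as np
from scipy import stats

calf_girth_cm = np.array([34.1, 35.0, 33.2, 36.4, 34.8, 33.9, 35.6, 34.3, 36.0, 33.5, 35.2])
run_time_s    = np.array([1065, 1042, 1101, 1010, 1055, 1090, 1030, 1072, 1018, 1095, 1048])

# correlation between the anthropometric measure and performance time
r, p = stats.pearsonr(calf_girth_cm, run_time_s)

# 90% confidence interval for r via the fisher z-transform
n = len(calf_girth_cm)
z = np.arctanh(r)                      # fisher transform of r
se = 1.0 / np.sqrt(n - 3)              # standard error of z
z_crit = stats.norm.ppf(0.95)          # two-sided 90% interval
lo, hi = np.tanh([z - z_crit * se, z + z_crit * se])

print(f"r = {r:.2f}, p = {p:.3f}, 90% ci = ({lo:.2f}, {hi:.2f})")
```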
similarly , the findings of the correlation analysis between girth measurements and athlete performance data are given in table 5 . overall , a non - significant positive correlation was observed between the sum of eight skinfolds and running performance ( icc : 0.10 ; 90% ci : -0.68 to 0.77 ; p > 0.05 ) as well as cycling performance ( icc : 0.15 ; 90% ci : -0.65 to 0.79 ; p > 0.05 ) , which suggested that athletes with greater body fat may have a better athletic performance . conversely , a significant negative correlation was observed between calf girth and running performance ( icc : -0.66 ; 90% ci : -0.94 to -0.12 ; p < 0.05 ) , and a non - significant negative correlation was observed between calf girth and cycling performance ( icc : -0.94 ; 90% ci : -0.97 to -0.68 ; p > 0.05 ) . this study investigated whether the physical traits of calf girth or sum of eight skinfolds ( anthropometry ) correlate with running or cycling performances in the triathlon event . from the data , it can be said that the athletes with id numbers 13 and 17 had very close overall performance times . athlete number 13 had a higher thigh girth than athlete number 17 and secured a faster time in the 5-km run . athlete number 17 had a higher calf girth than athlete number 13 and had a better cycling performance . however , it should be noted that athlete number 10 secured first place in both events despite having a lower calf and thigh girth than athlete number 8 . additionally , the anthropometric data obtained by ackland et al . ( 4 ) on world elite and world junior triathletes were compared with the new zealand junior triathletes from the present study . new zealand female athletes had a lower sum of 8 skinfolds by 8% ; had a larger arm span by 12% ; had about 5% less chest girth , waist girth , femur breadth , and thigh girth ; had a 4% longer forearm length ; and a 3% longer hand length as compared to the world junior athletes . similarly , new zealand male athletes had a lower sum of 8 skinfolds by 17% ; had a longer thigh length by 9% ; had a lower thigh girth by 5% ; a lower chest girth by 6% ; a lower waist girth by 4% ; and a lower femur breadth by 4% as compared to the world junior athletes . the stature of new zealand 's athletes was greater than that of the world junior athletes by 3% . the comparison of the anthropometric profile between the new zealand junior triathletes and the world junior triathletes could help in identifying and selecting youth athletes from schools and communities . it can be considered a small step towards an era in which the sport will select the athlete rather than the athlete selecting the sport . however , it should also be considered that if the success of an athlete could be determined by body shape or physical characteristics alone , it would be easier to judge the winner . although many other aspects are involved , we are of the strong opinion that body structure or shape is a major contributing factor in choosing an athlete . the present study investigated whether the physical traits of calf girth or sum of eight skinfolds ( anthropometry ) correlate with the running or cycling performances of junior elite triathletes from new zealand . the study indicated a correlation between calf girth and performance , suggesting that the triathletes ran well if they had smaller calves . we believe that anthropometric data can help in predicting the ideal body profile for specific athletic events , and may help in choosing the ideal athlete .
introduction : the triathlon involves a combination of three separate disciplines - swimming , cycling and running . to date , very few studies have been conducted on the anthropometric characteristics of the new zealand junior elite triathletes . the aim of this study was to determine the correlation between physical traits of calf girth or sum of eight skinfolds ( anthropometry ) and running or cycling performances in the triathlon event . methods : eleven junior elite triathletes ( 6 females , 5 males ; av . age : 17 ) who were selected for the new zealand national squad were examined in this cross - sectional study . all athletes were measured for the complete anthropometric profile , as per the international society for the advancement of kinanthropometry ( isak ) guidelines . the profile was then correlated with the cycling and running performances using interclass correlation ( icc ) with 90% confidence interval ( ci ) limits . results : a non - significant positive correlation observed between the sum of eight skinfolds and running performance ( icc : 0.10 ; 90% ci : -0.68 to 0.77 ; p > 0.05 ) and cycling performance ( icc : 0.15 ; 90% ci : -0.65 to 0.79 ; p > 0.05 ) suggested that athletes with greater body fat may render a better athletic performance . conversely , a significant negative correlation was observed between calf girth and running performance ( icc : -0.66 ; 90% ci : -0.94 to -0.12 ; p < 0.05 ) , and a non - significant negative correlation was observed between calf girth and cycling performance ( icc : -0.94 ; 90% ci : -0.97 to -0.68 ; p > 0.05 ) . conclusion : anthropometric data can help in predicting an ideal body profile . this research indicates the similarities and differences between the new zealand junior profile and the world junior profile .
1. Introduction 2. Material and Methods 2.1. Research design and setting 2.2. Sampling 2.3. Study variables and measurement tools 2.4. Data collection 2.5. Statistical analysis 2.6. Research ethics 3. Results 4. Discussion 5. Conclusions
PMC5114790
limited rest time , high intensity , and the competition rate of strenuous sports emphasize the necessity of following the most suitable approach to managing the functional overload of professional athletes . a proper nutritional strategy is helpful for achieving proper recovery when there are multiple competition periods , sometimes several times per day . the content and timing of nutrient consumption impact the resynthesis of fuel supplies , the reduction of muscle injury , and the optimization of competition performance . liquids are better tolerated during the suppressed appetite of the immediate post - exercise period , and they help in cell rehydration and in replacing electrolytes lost through sweating . beverage micro- and macronutrient content , and their utilization during or after the exercise period , are effective in fuel restoration . studies reflect that the consumption of carbohydrate - protein beverages during post - exercise recovery periods can facilitate glycogen restoration and the speed of muscle turnover . the composition of chocolate milk is similar to that of common sports drinks , and it can enhance blood sugar levels , the speed of muscle glycogen repletion , and protein turnover . its branched chain amino acids , carbohydrates , electrolytes , and easily absorbed casein and whey protein help athletes ' muscle stores . dough , or persian salty yogurt drink , with a consistency similar to milk , which contains a high amount of whey protein and critical electrolytes such as sodium and calcium , can affect athletes ' performance . moreover , non - alcoholic beer , as a source of carbohydrate , minerals , and vitamins , is a popular and available supplement fluid . knowing the most proper recovery beverage can help nursing and medical members of sports medicine teams to guide athletes . in the present study , we compared the effects of dough , non - alcoholic beer , and a carbohydrate replacement drink on lactate dehydrogenase enzyme levels , f2-isoprostane , and blood lipid and glucose levels . the professionals were asked to stop their exercises for 24 hours before initiation of the intervention program . they were also asked to note their food intake in one - day food recall questionnaires . five milliliters of venous blood were collected , and after 10 min of warm - up exercises , athletes followed the standard protocol of the running - based anaerobic sprint test ( rast ) . blood lactate was tested after running the protocol and 1 h post - rast using a calibrated lactometer ( scout company , germany ) . at 4-day intervals , the athletes received 500 cc of isocaloric beverages : dough , non - alcoholic beer , and chocolate milk . after a 2-h recovery period following the first beverage , the participants ' venous blood was taken again ; the other two beverages were consumed following the same guidelines . in other words , the 21 participants who enrolled in one experimental study group consumed all three beverages . indirect vo2 max was determined by the harvard step test , and the 24-h recalls were assessed using nutritionist iv software ( version 7.0 ; n - squared computing , salem , or , usa ) . serum triglyceride , total cholesterol , and blood sugar levels were assessed with enzymatic kits ( pars azmoon , tehran , iran ) . data were compared between the different time points using a simple repeated - measures analysis , and post - hoc comparisons were also performed . all analyses were done using the statistical package for the social sciences software ( spss inc . , chicago , il , usa ) , version 20 . paired t - tests and analysis of variance ( anova ) were also performed . 
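as a rough illustration of the repeated - measures analysis and post - hoc paired comparisons described above , the sketch below runs a one - way repeated - measures anova with statsmodels and a paired t - test with scipy . the subject responses are fabricated for the example , and the variable name delta_ldh is an invented placeholder ; the authors ' actual analysis was run in spss .

```python
# illustrative sketch of a one-way repeated-measures anova across the three beverages
import numpy as np
import pandas as pd
from scipy import stats
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n_subjects = 21
beverages = ["dough", "non_alcoholic_beer", "chocolate_milk"]

rows = []
for subj in range(n_subjects):
    subject_offset = rng.normal(0, 10)               # subject-level variability
    for bev in beverages:
        # fabricated change in lactate dehydrogenase (u/l) after each beverage
        delta_ldh = subject_offset + rng.normal(-15, 20)
        rows.append({"subject": subj, "beverage": bev, "delta_ldh": delta_ldh})

df = pd.DataFrame(rows)
anova = AnovaRM(data=df, depvar="delta_ldh", subject="subject", within=["beverage"]).fit()
print(anova)

# post-hoc style paired t-test between two of the conditions
a = df.loc[df.beverage == "dough", "delta_ldh"].to_numpy()
b = df.loc[df.beverage == "non_alcoholic_beer", "delta_ldh"].to_numpy()
print(stats.ttest_rel(a, b))
```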
the ethics committee of iums approved the study process , and informed consent was obtained from all of the subjects . the taekwondo players ' mean age was 23 ± 2.7 years . levels of the lipid profile and blood sugar before and after drinking the beverages , during the pre- and post - recovery periods , are shown in table 1 . total cholesterol levels decreased after the three intervention periods ; however , this reduction was not significant . comparison of the total cholesterol change after intervention did not reflect a significant difference ( p > 0.05 ) . plasma triglyceride was lower after dough and carbohydrate replacement drink intake . the mentioned decrease was marginally significant in taekwondo players after dough consumption ( p = 0.076 ) , whereas there was a non - significant difference after the carbohydrate drinking periods . in addition , non - alcoholic beer intake non - significantly increased the triglyceride level . the between - groups difference in blood glucose was marginally significant after consuming non - alcoholic beer ( p = 0.083 ) ; however , the mean change of plasma glucose did not show a significant increase for any of the three beverages . moreover , lactate dehydrogenase levels were reduced after all the intervention cycles . the mean change of this plasma enzyme level was statistically significant after non - alcoholic beer consumption ( p = 0.048 ) . in addition , no significant increase was observed between mean pre- and post - recovery f2-isoprostane values , and the between - groups comparison did not show any statistically significant difference ( p > 0.05 ) . participants did not complain of any side effects . means and standard errors of the oxidative stress and muscle damage biomarkers before and after drinking the beverages are shown in table 2 . in this study , we compared the effects of various types of beverages , including dough , non - alcoholic beer , and a carbohydrate - rich beverage , on the blood sugar , lipid profile , lactate dehydrogenase , and f2-isoprostane levels of elite taekwondo players . the findings show that all three beverages , taken at the pre- and post - recovery periods , enhanced blood sugar and f2-isoprostane levels , whereas intake of these fluid supplements decreased plasma total cholesterol and lactate dehydrogenase levels . non - alcoholic beer increased the triglyceride level , whereas the other liquids lowered the plasma triglyceride level . in a study by bishop et al . , plasma glucose levels were reduced in the placebo group in comparison to the group that received carbohydrate - rich beverages , both at fatigue and at 1 hour after exercise . it seems that stress hormone release was accompanied by a post - exercise blood glucose reduction . moreover , it can influence the perception of fatigue , lowering the quality of athletes ' performance . the nutrient profile of milk and its products , including carbohydrate , whey , electrolytes , and water , can affect glucose levels during athletes ' recovery periods . the observed effects of the fluid drinks on triglyceride and total cholesterol levels are comparable with previous findings . in a 6-week intervention trial with a fermented milk product , agerbaek et al . observed a significant reduction in the cholesterol levels of 58 healthy participants , whereas plasma triglyceride showed no significant change . 
the reduction in cholesterol and triglyceride levels shows an approximately similar trend ; however , its strength is affected by the sample size of the studied participants . the effect of yogurt , as a milk product , on lowering serum total cholesterol can be explained by its lactobacillus acidophilus content . bioactive compounds , calcium , conjugated linoleic acid , fermentation bacteria , and probiotic components can play critical roles in reducing plasma cholesterol and triglyceride levels . exercise leads to a higher concentration of lactate dehydrogenase , a converting enzyme with fuel - supplying roles , and its level also reflects the increased free radical concentration caused by the stress of sport . in addition , its level affects the lactate concentration of athletes ' muscle and their performance ability . karp et al . , comparing the effects of chocolate milk , fluid replacement drink , and carbohydrate replacement drink consumption in highly trained cyclists , observed an increased post - exercise lactate level and suggested that the time to exhaustion and glycogen - depleting exercise of participants can be managed with chocolate milk beverages . however , the mentioned effects were non - significant in the within - subject comparison of the thomas trial on trained male cyclists . the non - significant within- and between - group comparisons of our supplement beverages on f2-isoprostane , an index of muscle injury and of free radical peroxidation of arachidonic acid , can be explained using the findings of steensberg et al . they observed that the plasma f2-isoprostane level decreases significantly in response to sport stress ; however , this reduction is compensated within 1 hour of the recovery period . the trevor trial , involving 127 men and women in the age group of 30 - 65 years , showed that a low - fat diet containing a daily omega-3-rich fish meal can reduce the rate of cell lipid peroxidation and lower urinary f2-isoprostane excretion . this reduction was higher in participants following aerobic exercises in addition to the mentioned diet . the lack of reported studies on the effects of fluid supplements on f2-isoprostane levels makes further comparison impossible . these findings can help nursing and medical members of sports medicine teams to guide elite and professional athletes towards rapid and proper recovery . the limitations of our study were the small sample size and the before - after study design . moreover , measuring the detailed nutrient and electrolyte content of the beverages would have strengthened our assessment . with regard to strong points , the present study assessed , for the first time , the effects of an isocaloric volume of dough intake in comparison to other available fluid supplements on the lipid profile , blood sugar , muscle damage , and oxidative stress markers of professional athletes . in conclusion , we observed that dough , non - alcoholic beer , and carbohydrate replacement drink consumption at pre- and post - recovery periods can decrease plasma total cholesterol and lactate dehydrogenase levels . non - alcoholic beer increases the triglyceride level , and the consumption of the other liquids was accompanied by lower plasma triglyceride in elite taekwondo players .
background : athletes ' recovery is important in improving their performance . nutritional strategies can be effective in enhancing the recovery rate . choosing the best food items at appropriate intervals can play an effective role in the resynthesis of fuels and the recovery of muscle injury . beverage micro- and macronutrient content is helpful in fuel restoration . in this study , we assess the effects of various kinds of beverages on oxidative stress , muscle injury , and metabolic risk factors in taekwondo players . materials and methods : this quasi - experimental study was performed on 21 taekwondo players of isfahan . after collecting fasting blood , they performed the running - based anaerobic sprint test ( rast ) . blood lactate was tested again and participants were divided into 3 intervention groups , that is , receiving 500 cc of dough , non - alcoholic beer , and chocolate milk at 4-day intervals . after a 2-h recovery period , blood sampling was repeated . the elites consumed the other beverages in later phases . dietary intake and fasting triglyceride , cholesterol , blood sugar , lactate dehydrogenase , and f2-isoprostane concentrations were determined . data were analyzed with a simple repeated - measures test and post - hoc tests using the statistical package for the social sciences software . results : the data showed that cholesterol levels non - significantly decreased after intervention . the triglyceride level was lower after taking dough and the carbohydrate replacement drink . blood glucose concentration increased after the intervention periods ; however , this increase was significant only after non - alcoholic beverage consumption . lactate dehydrogenase levels were reduced after all cycles ; however , the f2-isoprostane level showed no significant change . there was no significant change in lactate dehydrogenase and f2-isoprostane levels . conclusions : non - alcoholic beer consumption can reduce the lactate dehydrogenase concentration ; however , it leads to an increase in blood sugar . moreover , dough consumption significantly reduced the triglyceride level in taekwondo players .
Introduction Materials and Methods Ethical considerations Results Discussion Conclusions Financial support and sponsorship Conflicts of interest
PMC3679756
cobalamin , c63h88con14o14p ( figure 1 ) , participates in only two known mammalian enzymatic reactions . yet , these two cbl - dependent enzymes , cytosolic methionine synthase ( ms ) [ ec 2.1.1.13 ] , requiring methylcobalamin ( mecbl ) , and mitochondrial methylmalonyl - coa mutase ( mu ) [ ec 5.4.99.2 ] , requiring adenosylcobalamin ( adocbl ) [ 1 , 2 ] , are critically involved in key metabolic pathways essential for gene expression and regulation , via formation of s - adenosylmethionine ( sam ) and methylation , and in protein synthesis and catabolism , cellular respiration , and energy production . activation of methionine synthase also ensures key antioxidant defense status , as it triggers concurrent activation of cystathionine β-synthase ( cβs ) , the pivotal enzyme at the homocysteine junction in the trans - sulfuration pathway to glutathione ( gsh ) . clinically , cbl deficiency has long been held responsible for anaemia , and macrocytic or megaloblastic anaemia , as well as for subacute combined degeneration of the spinal cord . however , an increasing body of work suggests that cbl may also play a central role in the regulation of immunity and inflammation ( reviewed in ) . cbl confers significant protection in various animal models of shock , from anaphylaxis to trauma and sepsis [ 5 - 7 ] , and has remarkable organ / tissue protective effects when used clinically for the treatment of analogous inflammation in cn poisoning ( reviewed in ) . amongst cbl 's known immunological effects are an augmentation of the cd8+/cd4 + t - lymphocyte ratio and of natural killer cell activity [ 9 , 10 ] , both significantly reduced in inflammatory pathology , with negative consequences in septic patients . interesting homeostatic links between cbl and pivotal cytokines are also emerging , indicative of complex but still incompletely defined regulatory circuits : mecbl lowers interleukin-6 ( il-6 ) expression in peripheral blood monocytes , whilst cbl deficiency raises circulating il-6 in humans , and cbl physiological status regulates il-6 levels in rat cerebrospinal fluid . moreover , in both rodents and humans there appears to be an inverse relation between cbl physiological levels and tumour necrosis factor alpha ( tnf-α ) serum levels . in vitro , a reasonable hypothesis is that such cbl / tnf-α / il-6 regulation may be partly effected via cbl indirect regulation of the central immune regulatory transcription factor , nuclear factor kappa b ( nf-κb ) . normal physiological levels of cbl in spinal fluid appear to correlate with nf-κb quiescence , at least in a non - inflammatory / non - immune challenge model . recently , a kinetic study reported that cob(ii)alamin reacts with superoxide at rates approaching those of superoxide dismutase . cncbl protects human aortic endothelial cells , and neuronal cells , in vitro , against superoxide - induced injury [ 19 , 20 ] . given that oxidative stress is a major trigger of nf-κb activation , this potential antioxidant effect of cbl could theoretically lead to nf-κb inhibition . it may also be of critical local importance in vivo , as the phagocytic burst includes release of the cbl carrier , haptocorrin ( hc / tc3 ) , in the immediate vicinity of nadph oxidase [ 8 , 21 ] , one of the major biochemical sources of superoxide in immune challenge and inflammation . hc / tc3 , moreover , is upregulated by il-1 , itself expressed within fifteen minutes of inflammatory challenge . 
nevertheless , though antioxidant effects of cbls have been observed in vitro and may , indeed , be important in vivo , no systematic analysis of the in vivo mechanisms of cbl conferred protection against inflammation during acute immune challenge has hitherto been done . we wondered if a more comprehensive explanation for cbl effects on inflammation and immunity , and thence beneficial outcomes in sepsis and other forms of shock , may lie in a potential direct / indirect regulation by cbl of one or more of the several actions of nitric oxide ( no ) as a ubiquitous , cell - signal transduction molecule and second messenger for post - translational modification , whose targets include soluble guanylate cyclase .no is the product of three nitric oxide synthases ( nos ) : two constitutive , nnos ( neuronal nos ; nos i ) and enos ( endothelial nos ; nos iii ) , and one inducible , inos ( nos ii ) , at much higher levels of expression , with the potential to produce 1000-fold higher than normal amounts of no , during gestation , growth , and the immune response . cobalamins are known to have effects on no [ 2729 ] , but these have hitherto been thought to be a consequence of cbl / no scavenging effects [ 7 , 3035 ] demonstrable chemically and in vitro [ 36 , 37 ] , but biologically unproven in vivo and still controversial [ 3840 ] . nitrosylcobalamin has not been detected , to date , in vivo or in vitro , amongst naturally occurring intracellular cbls . nevertheless , if the hypothesis that cbl is involved in nos catalysis has any substance [ 42 , 43 ] , then it is conceivable that , in analogy to previously observed ferric - heme - no complex formation at the conclusion of nos catalysis , nocbl might be transiently formed , just prior to release of free no by the nos . such a theoretical transience and discrete localisation might account for the failure to detect nocbl in vivo to date . ubiquitous and continuous cbl scavenging of no , on the other hand , may pose biochemical hazards . for no has important antioxidant and cell - signalling actions which might be obstructed by hocbl 's previously proposed , indiscriminate no scavenging , or even just by a recently proposed , cbl structural - based , direct inhibition of the nos tout court and nothing else . there is some evidence that hocbl can discriminate between exogenous no donors and the natural endogenous donor , s - nitrosoglutathione , gsno , actually prolonging only gsno - induced , gastric fundus relaxations . moreover , there are also diverse indications that positive cbl status is allied to beneficial no activity : in diabetic rats , high cobalamin levels correlate with high nos protein levels , no activity , and increased erectile function ; cbl supplementation of vegetarians with low cbl status significantly increases enos no release in the brachial artery ; in the digestive tract of endotoxemic rats , the highest expression of inos is in the ileum , precisely where cbl is internalized , and both cbl and no are known to mediate cell protective effects via erk1/2 and akt [ 5054 ] . these protective effects of no and cbl include induction and regulation of heme oxygenase-1 ( ho-1 ) [ 52 , 5558 ] , which converts biliverdin to the powerful antioxidant bilirubin , and carbon monoxide . ( for a more comprehensive list of coincidences of cbl's / no 's positive actions , see table 1 and its related discussion in ) . 
thus , in these studies we explored an alternative hypothesis to that of cbl as just an no , or , indeed , superoxide , mop . we posited that the principal mechanism behind cbl 's beneficial , pleiotropic effects in inflammation may involve a biphasic regulation of nos expression and protein translation and the ensuing no synthesis , during the two distinct pro- and anti - inflammatory phases of the immune response . male c57bl/6 mice , weighing 20 to 25 g , were purchased from harlan , uk , and maintained on a standard chow pellet diet , containing standard amounts of cbl ( 50 g / kg vitamin b12/cncbl ) , with tap water supplied ad libitum . animals were kept in a 12:00 h light / dark cycle , and all were housed for 7 days prior to experimentation . all experiments were performed in accordance with uk home office regulations ( guidance on the operation of animals : scientific procedures act , 1986 ) . the coenzymes , 5-deoxyadenosylcobalamin , and methylcobalamin ; vitamin b12a , cyanocobalamin , and hydroxocobalamin ( cas 78091 - 12 - 0 ) were purchased from sigma - aldrich ( uk ) . glutathionylcobalamin and n - acetyl - cysteinyl - cobalamin were synthesized and supplied by professor nicola brasch ( kent state university , ohio , usa ) . all cbls ( and cbl - treated animal samples ) were protected from light during storage and handling , and were 98% to 99.5% pure . 5-deoxyadenosylcobalamin ( adocbl ) , methylcobalamin ( mecbl ) , hydroxocobalamin ( hocbl ) , glutathionylcobalamin ( gscbl ) , and n - acetyl - cysteinyl - cobalamin ( nac - cbl ) were all stored at 20c , and fresh solutions of them were made using sterile , pyrogen - free , phosphate - buffered saline ( pbs ; gibco ) , prior to the experiments . for the in vivo experiments , cobalamins were diluted at 10 ml / kg prior to treatments ( with pbs used as vehicle ) . raw 246.7 macrophage cells , stably transfected with nf-b luciferase reporter construct ( stratagene ) , were maintained in dulbecco 's modified eagle 's medium , supplemented with 10% ( v / v ) fetal bovine serum , 2 mm l - glutamine , 1 g / ml geneticin , and 50 g / ml g418 . cells ( 2 10 cells ) were seeded in 96-well plates and then preincubated for 1 h with increasing concentrations ( 110100 m ) of the five principally occurring , intracellular cbls . thereafter , at time 0 h , cells were stimulated with e. coli lps ( 0111 : b4 ; 1 g ) for 4 h and then processed for measurement of luciferase activity in a luminometer ( luminometer td-20/20 ; turner designs instruments ) . endotoxaemia was induced by the intraperitoneal injection of lps ( 0.1 mg / kg ) , alone ( non - lethal ) or , in the lethal endotoxaemia protocol , in combination with 1 g / kg d - galactosamine ( table 1 ) . sample collection in non - lethal endotoxaemia was carried out at both 4 and 24 h after lps challenge . animal survival , in all lethal endotoxaemia experiments , was monitored for a total of 5 days , and all data were analysed using chi - squared or kaplan - meier tests . times shown are in relation to time 0 h , when either lps alone or lps+d - gal was administered by intraperitoneal injection . individual cobalamins were injected into the peritoneum at the doses and times reported in table 1 . blood ( 500 l ) was centrifuged ( for 5 min , at 2500 rpm ) , and the plasma then collected for elisa analysis . 500 l of trizol reagent ( invitrogen ) was added to the remaining fraction . 
20 l ) was treated with 2 u ( 1 l ) of turbo dnase 1 ( ambion , austin , tx ) , as described by the manufacturer , to remove any contaminating genomic dna . an aliquot of the dna - free rna ( 7.6 l ) was then transferred to a new rnase - free tube and reverse - transcribed into complementary dna ( cdna ) , using superscript iii reverse transcriptase ( invitrogen ) , as described by manufacturer . the following reagents were used : oligo dt primers ( invitrogen ) ; 1 l , 10 mm dntp ( bioline ) ; 4 l of 5x first - strand buffer ; 1 l , 0.1 m dtt ; 1 l ( 40 u ) rnaseout ; and 1 l ( 200 u ) of superscript iii reverse transcriptase ( invitrogen ) . after synthesis , cdna was quantified using a nanodrop nd-1000 and diluted ( 80 ng/l ) in molecular biology grade water and then loaded into 384-well plates for real - time pcr . real - time pcr assays were performed on the various samples in order to evaluate the expression of the following genes : gapdh , rpl32 , il-1 , cox-2 , inos , enos , tnf- , and hmgb1 ( table 2 ) . for each gene analyzed , reactions were performed using 1 l of the qiagen quantitect primer assay , added to 5 l power sybr green pcr master mix ( applied biosystems , warrington , uk ) and then diluted with 2 l molecular grade water . a final volume of 8 l was dispensed into each well and 2 l of diluted cdna ( 160 ng / reaction ) was added . each sample was tested in triplicate for each gene , and pcr reactions were performed using abi prism 7900 real - time pcr equipment . the thermal profile consisted of 95c for 15 min , then 40 cycles of 94c for 15 s , 55c for 30 s , and 72c for 30 s. this rest mcs software was utilized for the calculation of the relative difference between the test groups . liver tissues were harvested from ( n = 5 ) animals , after lps endotoxaemia , with or without hocbl treatment and then homogenized in lysis buffer , which contained a cocktail of protease inhibitors . protein concentrations prior to loading were determined using the bradford assay ( sigma ) : samples were mixed with 6x laemmli sample buffer , and equal protein amounts ( 100 g ) then underwent electrophoresis on a 10% polyacrylamide gel in running buffer ( 0.3% tris base , 1.44% glycine , and 0.1% sds in distilled water ) . this was followed by transfer of the proteins onto pvdf membranes in transfer buffer ( using 0.3% tris base , 1.44% glycine , and 20% methanol , in distilled water ) . membranes were blocked for 1 h with 5% nonfat milk solution in tbs containing 0.1% tween 20 . inos expression was assessed using a specific monoclonal antibody ( 1 : 1000 ; santa cruz , usa ) . the signal was amplified with hrp - linked anti - mouse secondary antibody ( 1 : 2000 ) and visualized by ecl ( western blotting detection reagent ; amersham biosciences , usa ) . densitometric analysis was performed using nih imagej software and normalised to tubulin loading controls in the same sample . animals ( n = 5 ) were challenged with lps and treated with cbls as described above . at 4 h and 24 h after lps challenge , lung and liver tissue samples were harvested , homogenized , and processed for determination of nos activity , as measured by nitrate / nitrite end - products of no . the ultrasensitive , nos assay used ( oxford biomedical research , oxford , mi , usa : ultrasensitive colorimetric nos assay : cat no . 
nb78 ) employs an nadph recycling system ( nadp , glucose-6-phosphate , and glucose-6-phosphate dehydrogenase ) and the substrate l - arginine , but not the cofactor bh4 , to ensure that the nos operate linearly for up to 6 hours , as no - derived nitrate and nitrite accumulate . the assay kit can accurately measure as little as 1 pmol / ml ( ~1 nmol / l ) of no produced in aqueous solution . in these studies , the enzyme nitrate reductase was used to convert all nitrate to nitrite , and griess reagent was then employed to quantify nitrite levels , with the generation of a nitrite standard curve , as recommended by the supplier . after collection , blood was centrifuged and the plasma separated , under low lighting conditions , then stored at -80c until performance of the analyses . for determination of circulating tnf-α and il-6 levels , using elisa assays , samples were diluted 1 : 10 in the assay diluent , as specified by the manufacturer ( r&d , uk ) . absorbance was plotted against a standard curve , and data were expressed as the content of tnf-α ( ng ) or il-6 ( pg ) per ml of plasma . unless otherwise stated , all reagents were purchased from sigma - aldrich , poole , uk . groups consisted of 5 animals per group for the analyses in non - lethal endotoxaemia and , initially , 7 - 9 per group , then 12 per group , for lethal endotoxaemia survival , series i and ii , respectively . chi - square and kaplan - meier tests were used for the lethality studies . in all cases , a p < 0.05 was taken as significant . 
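the nitrite quantification with griess reagent described above rests on a linear standard curve of absorbance against known nitrite concentrations , from which sample readings are interpolated . the sketch below shows that calculation with invented standard and sample absorbances ; the real standards and plate readings follow the kit supplier 's instructions .

```python
# illustrative griess standard-curve fit and interpolation of sample nitrite levels
import numpy as np

# fabricated nitrite standards (micromolar) and their absorbances at 540 nm
std_conc = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])
std_abs  = np.array([0.05, 0.11, 0.17, 0.30, 0.55, 1.06])

# linear fit: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(std_conc, std_abs, deg=1)

# fabricated sample absorbances from homogenised lung / liver samples
sample_abs = np.array([0.22, 0.48, 0.73])
sample_conc = (sample_abs - intercept) / slope

for a, c in zip(sample_abs, sample_conc):
    print(f"absorbance {a:.2f} -> nitrite ~{c:.1f} micromolar")
```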
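the kaplan - meier and chi - squared comparisons used for the lethality studies can be sketched as below . the group sizes , survival times , death indicators , and 2x2 counts are invented for illustration ( the real outcomes are reported in the results ) , and the lifelines package is assumed to be available ; this is not the authors ' actual analysis script .

```python
# illustrative kaplan-meier curves, log-rank and chi-squared comparisons for two endotoxaemia groups
import numpy as np
from scipy.stats import chi2_contingency
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# fabricated survival times in hours (censored at 120 h = 5 days) and death indicators
time_control = np.array([6, 6, 7, 8, 8, 8, 9, 10, 120])
dead_control = np.array([1, 1, 1, 1, 1, 1, 1, 1, 0])
time_treated = np.array([7, 8, 8, 9, 10, 120, 120, 120, 120])
dead_treated = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0])

kmf = KaplanMeierFitter()
kmf.fit(time_control, event_observed=dead_control, label="lps+d-gal")
print(kmf.survival_function_)

kmf.fit(time_treated, event_observed=dead_treated, label="lps+d-gal + cbl")
print(kmf.survival_function_)

# log-rank test between the two groups
result = logrank_test(time_control, time_treated,
                      event_observed_A=dead_control, event_observed_B=dead_treated)
print(f"log-rank p = {result.p_value:.3f}")

# chi-squared comparison of 24-h outcomes (fabricated 2x2 table: [alive, dead] per group)
table = np.array([[1, 8], [4, 5]])
chi2, p_chi, dof, _ = chi2_contingency(table)
print(f"chi-squared p = {p_chi:.3f}")
```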
as there was no observable difference between the effects of alkyl and non - alkyl cbls on nf-b in vitro , we chose to focus these first investigations in vivo principally on hocbl , as a clinically licensed cbl form , known to be partially converted on cell entry to the two cbl cofactors , mecbl and adocbl , for ms and mcm , respectively . furthermore , at supraphysiological doses of 5 g i.v . , hocbl , as a clinical cyanide antidote , has shown remarkable protection against corollary inflammation ( analogous to the inflammation seen in sirs , sepsis , and septic shock ) , that goes beyond merely acting as a magnet for cn . we therefore next decided to see if the lethality survival protection also conferred by hocbl in a sepsis / endotoxaemia mouse model was reproducible in a different strain and in a more acute endotoxaemia model . some groups of animals were alternatively treated with the relatively novel , intracellular cbl , glutathionylcobalamin ( gscbl ) [ 61 , 64 ] whose clinical effects are untested in sepsis , or with n - acetyl - cysteinyl - cobalamin ( nac - cbl ) , a synthetic cobalamin , used as a non - endogenously occurring , thiol cbl comparison . to gain some information on potential clinical dosage , all cbls were tested in two distinct , high dosing regimes , with or without prophylactic pretreatment . in a severe sepsis protocol ( lps+d - gal ) , using c57bl/6 mice , we administered a relatively low dose of cbls ( 0.2 mg / kg i.p . ) , equivalent to a maximal concentration of approximately 1 m ( considering a total blood volume of 2.5 ml in the mouse , this concentration being well within the range tested in vitro ) . h prior to lps+d - gal and then given in repeated doses at + 1 , + 2 , + 6 , and + 22 h after lps+d - gal . alternatively , a high dose cbl protocol ( 40 mg / kg i.p . ) was administered only twice , at + 2 and + 22 h after lps+d - gal , to assess its potential as a rescue regimen . the urine of all cbl - treated animals was red , within 1 h of administration , an indicator of rapid , high , systemic cbl saturation ( data not shown ) . lps+d - gal mice rapidly reached 88.9% mortality by 8 h. this did not change further up to 24 h. animals treated with the relatively low - dose regimen of gscbl or nac - cbl , were protected in the early , 48 h time frame ( figure 2(a ) ) . during this period all cbl - treated animals also exhibited less huddling and pilo - erection ( data not shown ) . at 8 h after lps+d - gal , all relatively low - dose cbl treatments afforded 25% survival , p < 0.05 versus lps+d - gal alone ( figure 2(a ) ) . however , only low - dose hocbl treatment maintained this level of protection up to 24 h ( figure 2(a ) ) . ( indeed , as regards the long - term outcomes , 8 h seemed to be a watershed time point at which the outcome was determined for all groups . ) paradoxically , in view of its early protective effects at the lower dose , the high - dose gscbl regimen was less protective within the first 8 h. high - dose nac - cbl , which again provided some degree of protection in the first 6 h , was not significantly different from controls at 24 h. in contrast , high - dose gscbl and hocbl , despite the lesser protection of the former in the first hours , offered a consistent 28.60% survival up to 24 h , p < 0.01 versus lps+d - gal alone ( figure 2(b ) ) . 
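The stated equivalence of the 0.2 mg/kg i.p. dose to a maximal blood concentration of roughly 1 µM can be checked with a back-of-envelope calculation. Only the 2.5 ml total blood volume comes from the text; the 25 g body weight and the ~1,346 g/mol molar mass of hydroxocobalamin are assumptions added here for illustration.

```python
# Back-of-envelope check of the ~1 µM estimate for the 0.2 mg/kg dose.
# Assumes a 25 g mouse and a hydroxocobalamin molar mass of ~1,346 g/mol;
# only the 2.5 ml total blood volume is taken from the text.
dose_mg_per_kg = 0.2
mouse_kg       = 0.025        # assumed body weight
mw_g_per_mol   = 1346.0       # approximate molar mass of hydroxocobalamin
blood_l        = 0.0025       # 2.5 ml total blood volume (from the text)

dose_g  = dose_mg_per_kg * mouse_kg / 1000.0       # grams administered
conc_uM = dose_g / mw_g_per_mol / blood_l * 1e6    # µmol per litre of blood
print(f"maximal blood concentration ≈ {conc_uM:.1f} µM")  # ~1.5 µM, i.e. of the
                                                          # order of the quoted ~1 µM
```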
later , at 72 h following lps+d - gal , in the gscbl high - dose group , mortality was equal to that observed in the nac - cbl high dose group , 85.72% , close to that of lps+d - gal control animals , though this increase in mortality was a late event : with 28.60% survival to 54 h in this group , perhaps indicative of the general cbl protective trend . nonetheless , the 25% and 28.60% , respectively , of mice that were alive at 24 h , in each of the two distinct , low- or high - dose , hocbl - treated groups , exhibited continued survival up to 72 h , /p < 0.05 versus lps+d - gal alone / p < 0.01 versus lps+d - gal alone ( figures 2(a ) and 2(b ) ) and beyond ( data not shown ) . ii . since these initial endotoxaemia studies might be considered underpowered , we repeated the lethality survival experiments using larger groups of mice ( n = 12 ) and focussing on hocbl alone , as having previously shown the most consistent protective effects . this time , given the trend towards improved survival seen at the higher hocbl dose , two distinct ultra - high doses of hocbl ( 40 mg / kg and 80 mg / kg ) were tested , with a more concise dose / time frame , + 2 h and + 4 h only for the 40 mg / kg , and , in the case of the 80 mg / kg dose , a single bolus administration at + 2 h. the significant survival advantage of hocbl treatment results demonstrated over 5 days , in figures 3(a ) and 3(b ) , not only shows that our hocbl data is consistently reproducible , but also that increasing the dose of hocbl significantly increases survival , from 25% up to 33.333% : p < 0.01 for hocbl ( 80 mg / kg ) by using the kaplan - meyer test . by comparison , at 24 h in the lps - only group , there was 90% mortality . to gain information about the mechanisms behind the consistent protection afforded by hocbl , and to observe any potential impact on the nos , then the expression of inflammatory mediator genes in liver and lung was analysed , in both the pro- and anti - inflammatory phases of the immune response , at the 4 h and 24 h time points . the early effects of hocbl treatment on enos mrna appeared organ dependent , with significant promotion of enos mrna in the lung and attenuation in the liver ( figures 4(a ) and 4(b ) ) . for enos , in lps - only animals we observed a decrease in the lung of 2.9 0.1 , whereas there was an increase of 2.1 0.1-fold change in lps+hocbl - treated animals ( figure 4(a ) ) . paradoxically , in the liver of lps - only treated animals , there was an increase of enos expression of up to ~15-fold compared to up to ~4-fold change only in lps+hocbl treated animals ( figure 4(b ) ) . liver and lung inos and cox-2 gene expression levels were increased in lps - only treated animals when compared to that of pbs - only injected mice , whose value was set as 1 . however , as for enos , the effects of hocbl on inos expression were once more organ selective , failing to inhibit the rise in inos mrna in the lung , but attenuating it in the liver ( figures 4(c ) and 4(d ) ) . strikingly , in spite of hocbl 's failure to completely inhibit inos expression , hocbl was a consistent inhibitor of cox-2 mrna in both liver and lung , bringing its degree of expression back to and below that of pbs - injected mice ( figures 4(e ) and 4(f ) ) . hocbl treatment also had a consistent regulatory effect on il-1 expression , which was moderately and significantly decreased in lung and completely inhibited in liver ( figures 4(g ) and 4(h ) ) . 
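The fixed-time-point lethality comparisons were analysed with chi-square and Kaplan-Meier tests. A minimal sketch of the chi-square step is shown below; the 24 h counts are reconstructed from the reported percentages (n = 12 per group, roughly 90% mortality with LPS only versus 33.3% survival with 80 mg/kg HOCbl) and are therefore approximate, and Fisher's exact test is added here only because the cell counts are small.

```python
# Fixed-time-point mortality comparison, sketching the chi-square test the
# authors describe. Counts are reconstructed from the reported percentages
# (n = 12 per group) and are therefore approximate.
from scipy.stats import chi2_contingency, fisher_exact

#                 dead  alive   (24 h after LPS+D-gal)
table = [[11, 1],        # LPS only (~90% mortality)
         [ 8, 4]]        # LPS + HOCbl 80 mg/kg (~33% survival)

chi2, p, dof, _ = chi2_contingency(table)
odds, p_exact   = fisher_exact(table)   # preferable with counts this small
print(f"chi-square p = {p:.3f}, Fisher exact p = {p_exact:.3f}")
```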
to determine efficiency of translation of the post - lps increased nos mrna , we assessed enos and inos protein expression by western blot in 4 h liver samples . as predicted by our hypothesis that sepsis may involve a failure in translation of the nos , this revealed that whilst in the lps - only challenged group there was a significant depression of enos protein translation , that was at odds with its high mrna expression , hocbl significantly promoted enos protein translation , above the levels of both pbs control and lps - only treatment groups ( figure 5(a ) ) . a similar paradoxical pattern was observed in hepatic inos protein translation , with significant depression of inos protein translation in the lps - only challenged group , and promotion of inos protein in the hocbl+lps treated group ( figure 5(b ) ) . we confirmed that these effects of hocbl on nos protein promotion were not random or artifactual , but were specific to cbl , by repeating the lps non - lethal endotoxaemia experiment using either gscbl or nac - cbl treatment and performing western blots for enos / inos protein . once again , we observed a significant early promotion of enos / inos protein by these other cbls , when compared to lps only ( figures 5(a ) and 5(b ) ) . however , when nos activity was measured ( using a nitrite production assay ) both in the early post - lps challenge , pro - inflammatory phase and in the late anti - inflammatory , resolution phase , a further paradoxical result emerged , suggesting that hocbl may exert some post - translational modification of nos activity . levels of nitrite at 4 h showed an inverse relation to levels of nos protein , with significantly higher levels of nitrite in the lps - only enos / inos - depressed group and significantly lower levels of nitrite being generated in the hocbl / enos / inos - promoted group ( figure 6(a ) ) . ( that this was a general , reproducible cbl effect was confirmed , as stated previously , by also doing the western blot with 4 h gscbl / nac - cbl - treated liver samples and also running the nos activity assay with both thiol - cbl treated samples . ) here we again observed a correlation between gscbl / nac - cbl promoted high nos protein in the western blots and decreased nitrite in the nos activity assay ( figure 7 ) . at 24 h following lps , levels of nos - derived nitrite , as measured in tissue samples , were even higher than at 4 h in both the lps - only and hocbl - treated groups . nevertheless , hocbl consistently showed relatively less nitrite production than lps only , in both lung and liver tissue samples ( figures 6(b ) and 6(c ) ) . since high inos expression / protein and no activity in the sepsis literature are associated with high tnf- , il-6 , and ensuing toxicity , we evaluated how hocbl might impact upon systemic levels of tnf- and il-6 triggered by lps . levels of il-6 protein were not significantly lower in the hocbl - treated group ( figure 8(a ) ) however , hocbl treatment significantly ( p < 0.05 ) attenuated the post - lps - induced increase in circulating plasma tnf- , as measured at the 4 h time point , ( ~50% reduction : figure 8(b ) ) . consistent with its effects on plasma tnf- , hocbl also showed some protection from the inhibitory effects of lps on tnf- mrna in the lung and significant attenuation of tnf- mrna in liver ( figures 8(c ) and 8(d ) ) . 
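The nitrite values behind these NOS-activity comparisons are read off a standard curve after nitrate reductase conversion and Griess colour development, as described in the methods. The sketch below shows only the interpolation step; the standard concentrations and absorbances are hypothetical placeholders, not values from the kit insert or the study.

```python
# Linear standard-curve interpolation for a Griess nitrite read-out.
# Standard concentrations and absorbances are hypothetical examples.
import numpy as np

std_conc = np.array([0, 5, 10, 20, 40, 80])                 # µM nitrite standards
std_abs  = np.array([0.02, 0.08, 0.15, 0.29, 0.57, 1.12])   # blank-corrected A540

slope, intercept = np.polyfit(std_conc, std_abs, 1)          # least-squares line

def nitrite_uM(a540):
    """Interpolate nitrite concentration from a blank-corrected A540."""
    return (a540 - intercept) / slope

for sample, a in [("LPS only, 4 h", 0.41), ("LPS + HOCbl, 4 h", 0.22)]:
    print(f"{sample}: {nitrite_uM(a):.1f} µM nitrite")
```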
in the resolution phase of the immune response to lps , the effects of hocbl treatment on nos expression displayed a degree of organ selectivity , though most notable with respect to inos expression . whilst hocbl did not change lps - induced enos mrna inhibition in the lung , it significantly attenuated its inhibition in the liver , from 75- to 58 fold - change , respectively , for lps only and lps+hocbl ( figures 9(a ) and 9(b ) ) . hocbl effects on inos mrna in the lung were more distinctive , with ~80% inhibition compared to lps - only ( vehicle group ) . in the liver , hocbl treatment attenuated the lps - induced inhibition of inos mrna by ~40% ( figures 9(c ) and 9(d ) ) . the consistent hocbl tissue inhibition of cox-2 mrna , seen at the early pro - inflammatory phase time point of 4 h , persisted at 24 h , showing a significantly greater degree of inhibition than lps - only : 7- versus 2.5-fold for lps - only in the lung ; 115- versus 50-fold for lps - only in liver ( figures 9(e ) and 9(f ) ) . given the early regulatory impact of hocbl on nos / no activity , and cox-2 , il-1 , and tnf- , we expected to see related downstream beneficial effects on expression of the late ( 18 h ) effector of endotoxaemia , high mobility group box 1 ( hmgb1 ) . this prediction was confirmed . where lps - only presented an inconsistent picture , hocbl treatment consistently inhibited hmgb1 mrna : in the lung , from an increase of 2.5 in lps - only to a near threefold decrease ( setting levels of expression even lower than those observed in the control group , taken as a value of 1 ) ; and in the liver , to a more significant degree , even beyond the remarkable inhibitory effect of lps - only ( figures 9(g ) and 9(h ) ) . to conclude the 24 h gene expression analyses , tissue levels of il-1 and tnf- mrna were also quantified using rt - pcr . in resolution , hocbl treatment significantly increased inhibition of hepatic il-1 ( in line with its inhibitory effect at 4 h ) and inhibition of tnf- in the lung , whilst also , paradoxically , decreasing lps inhibition of tnf- in the liver ( table 4 ) . of note was the fact that the late effects of hocbl on tnf- mirrored the degree of late inos expression , in both lung and liver , as , indeed , did lps - only ( figures 9(c ) and 9(d ) and table 4 ) . our studies present a picture of complex and far - reaching homeostatic regulation of the activation , expression , and translation of nos , no synthesis , and inflammatory mediators by hocbl during the immune response . we propose that this regulation accounts for the noted survivals in rodent endotoxaemia , both in our more acute , septic shock models i and ii ( a modest but significant 25%/28.60% , and 25%/33.333% , survival ) and in a previous sub - acute , sepsis model ( performed with cncbl / hocbl the regulation we show here may also explain the observed organ / tissue - protective effects of hocbl in the clinical treatment of cn poisoning and the ensuing shock , which appear to go beyond what may be expected from the cbl binding of cn alone . furthermore , given the supraphysiological , saturating doses of cbl used in our studies , if cbl had been acting just as an no scavenger , or even as a nos inhibitor , it seems unlikely that it would have permitted the increasing rise in nos activity ( as indexed by nitrite ) , both early and late , or that its effects would be so subtle , complicated , and ultimately beneficial . 
the thiol cbls , on the other hand , present some mysteries , about which one can only give speculative answers at this point . the survival advantage exhibited in the early hours by comparatively low dose gscbl / nac - cbl treated animals , but not with hocbl treatment , may be due to the known higher reduction potential / lability of thiol cbls , resulting in more rapidly available intracellular cbl . cbl is likely to be , at least in part , functionally deficient in sepsis , as indexed by the down - regulation of the cbl receptors , megalin and cubilin , in the kidneys of endotoxaemic mice , the kidney being known , as a cbl homeostatic regulator , to reduce its cbl uptake in states of cbl deficiency . pertinently , megalin is down - regulated via the lps - induced erk1/2 signaling pathway , through which , as noted earlier , both cbl and no achieve their coincidentally regulatory , beneficial effects . although the beneficial supply of some extra gsh , as a consequence of thiol cbl lability , might be proposed as an alternative explanation for early survival protection , it is noteworthy that , at the higher dose , gscbl showed no early hours protection . moreover , in a new series of endotoxaemia lethality , in vivo survival studies that compared protective effects of all endogenous cbls against those of gscbl and nac - cbl , we observed that though gscbl / nac - cbl still consistently conferred the same early / pre-8-hour protection compared to the 4 other cbls , paradoxically , both thiol cbls repeatedly produced poor long - term survival outcomes , equivalent to or worse than lps - only , whereas hocbl / cncbl / mecbl / adocbl all consistently showed significant protection , with cncbl / mecbl in particular , producing even better survival results than the current hocbl - centred study ( brancaleone , dalli et al . gscbl is certainly known to produce a more rapid early increase in ms activity ( together with fourfold greater formation of adocbl ) , when compared to hocbl . such an increase in ms activity , normally rapidly deactivated by oxidative stress [ 6870 ] , would increase synthesis of the methyl donor , s - adenosylmethionine ( sam ) , which inhibits lps - induced gene expression by modulating histone methylation . whilst this may have initial short - term benefits , as observed , there are also negative long - term consequences from excessive inhibition of the necessary , pro - inflammatory gene expression at this stage . this may be why ms expression and activity are transiently decreased by 25% and 30% early on in the normal pro - inflammatory phase of the immune response to lps , as well as to allow for gsh synthesis modulation . the consequent , equally paradoxical failure of the other thiol cbl , nac - cbl , to significantly increase survival beyond 8 h , at both high- and low - dose protocols , may also be attributable to the fact that nac is particularly unstable as a cbl ligand and may therefore have acted independently of cbl as nac alone . further , though nac can ( 1 ) act as an antioxidant by increasing gsh levels ( 2 ) it can equally act as a pro - oxidant , increasing disulfides , gssg , and ( 3 ) is counter - indicated in sepsis , since , whilst nac enhances phagocytosis , it also suppresses the bactericidal respiratory burst in icu patients , with potential negative outcomes . 
indeed , consistent with this independent , paradoxical observation , additional data from our in vivo endotoxaemia model shows that in the early 4 h phase of the immune response nac - cbl , in contrast to hocbl / gscbl , significantly increases circulating pmn , specifically granulocytes , yet decreases the intensity of cd11b expression ( see supplementary data available online at http://dx.doi.org/10.1155/2013/741804 ( and researchgate ) ) . the adhesion molecule / complement receptor , cd11b , is a marker of neutrophil activation , and its high expression normally correlates with a strong respiratory burst . it is conceivable then that such nac - promoted suppressive effects ultimately outweighed the very early benefits in survival protection conferred by nac - cbl . the initial in vitro observation that all major endogenous cbls do not inhibit nf-b activation , even at 24 h , may appear surprising , particularly in view of the cbl beneficial outcomes in vivo . however , it has previously been shown in vivo that early inhibition of nf-b in immune challenge increases and prolongs inflammation , and that persisting late nf-b activation ( 24/48 h ) permits its resolution . moreover , this failure to inhibit nf-b by cbl during inflammation , with positive outcomes , has now also been observed by a group who were aware of our findings and , independently , in a cbl cancer model by marguerite et al . further , since activation of nf-b is linked to induction of inos , and since inadequately low levels of no and , indeed , inos gene knockout in mice and nos inhibitors in clinical trials [ 79 , 80 ] have been implicated in sepsis morbidity and mortality , we adopted the hypothesis that nos translation may , in contradiction to the common view , actually be depressed in sepsis and nos catalytic activity uncoupled or malfunctioning . previously observed high no in sepsis is believed to comprise a greater ratio of the more toxic no species , such as peroxynitrite , onoo- , which inhibits enos , as opposed to more antioxidant / cytotoxic no forms , such as s - nitrosothiols / gsno [ 84 , 85 ] or no itself ; although overhigh levels of gsno also have negative effects in sepsis - like inflammation . since cbl is known to promote gsh [ 3 , 8688 ] , whose synthesis is induced simultaneously with that of inos , it should theoretically alter the ratio of gsno / no to onoo- and related species , so that the more positive actions of no predominate ( figure 10 ) . therefore , we predicted that cbl would not inhibit inos expression and translation early on . this proved correct , with significant hocbl ( and gscbl / nac - cbl ) early promotion of both inos and enos proteins and significantly lower lps - only inos / enos protein . since enos is known to be depressed in sepsis , with adverse cardiovascular consequences , this early effect of cbl may have positive clinical implications . there is an apparent contradiction in the data showing relatively low inos / enos mrna leading to strikingly high protein translation in the lps+hocbl treated animals , in direct contrast to the lps - only group , where strikingly high inos / enos mrna yielded much lower levels of inos / enos protein . that this is not an artefact , but a specific cbl effect , also observed by others , is seen by the comparable inverse results achieved with the two , thiol cbls . 
( these collective cbl results were also mirrored by decreasing nitrite / nos activity , in inverse proportion to the ascending levels of hocbl / gscbl / nac - cbl inos protein . ) we propose that this paradox may actually be an index of cbl / nos regulation in endotoxaemia and that the high mrna levels in lps - only animals may be due to the observed phenomenon of relaxed control of rna synthesis when sam / methyl groups are deficient , as in folate or cbl deficiency , thus consequential on the functional cbl deficiency of endotoxaemia / sepsis , and more permanent ms inactivation by lps , discussed earlier . furthermore , it is theoretically possible that abnormal cell function in sepsis may result in much of the mrna produced by the lps - only group being masked and unavailable for efficient translation . it is possible also that the translated protein may be unstable and degrade at a faster rate . this is certainly known to be the case with enos mrna in hypoxia and in the presence of high tnf- , both characteristic of sepsis . in contrast , given cbl 's impact on the two coenzymes , ms and mu , upon cbl treatment , the septic cell should afford a degree of metabolic normality and thus economic efficiency in transcription / translation . a remarkable study , over half a century ago , demonstrated that cbl is capable of reactivating a diversity of key enzymes after acute oxidant stress , most of them also negatively affected in sepsis , including glucose-6-phosphate - dehydrogenase , lactate dehydrogenase , lysine , ornithine , and glutamic decarboxylase . this last is critical for the supply of alpha - ketoglutarate in the krebs cycle , which is depressed in sepsis , with consequent lower atp production that is associated clinically with increased mortality . in turn , supply of alpha - ketoglutarate determines the availability of l - arginine , glutamate , and glutamine , all of which also decrease in sepsis , with adverse consequences , in the case of l - arginine especially for the nos . observed low levels of l - arginine in sepsis are associated with increased reactive nitrogen species , including high onoo- , and reactive oxygen species production during nos catalysis , interfering with the normal , more beneficial no cell signalling , necessary for the efficient resolution of the pro - inflammatory phase of the immune response . inos - derived no is known to have a direct regulatory correlation to levels of tnf- [ 100 , 101 ] . this inos / no / tnf- regulation seems operative here with hocbl treatment , not lps - only . since hocbl permitted a moderate rise in no ( at least , as measured by nos nitrite end - products ) , in tandem with moderate levels of tnf- mrna and protein , and since cbl status has an inverse relation to tnf- levels , it seems reasonable to conclude that such cbl / tnf- regulation occurs downstream of hocbl / nos / no regulation . additional evidence for this in our studies may be seen even in the resolution phase of the immune response , where hocbl - related levels of inos expression showed a direct correlation to those of tnf- , even to the degree of inhibition , in both lung and liver tissues ( figures 9(c ) and 9(d)/table 3 ) . importantly , however , in the early pro - inflammatory phase hocbl inos / no regulation does not completely inhibit tnf- : 50% reduction being observed in our experiments . 
this is a critical point as anti - tnf- mab treatment increases sepsis mortality in the clinic , since some degree of tnf- production and consequent early pro - inflammatory signalling is essential for an effective immune response . also noteworthy , and crucially consistent with such hocbl / inos / tnf- regulation , il-6 is essential for induction of acute phase proteins , whilst simultaneously also decreasing pro - inflammatory cytokines and increasing anti - inflammatory factors . il-6 regulation of pro - inflammatory factors includes regulation of tnf- and il-1 , expression of the latter also determining that of cox-2 , involved in arachidonic acid - derived prostaglandin and leukotriene synthesis . il-1 is rapidly expressed ( ~15 min post lps ) , whereas inos , which may be induced by il-1 , is not fully expressed until 6 h after lps . we have observed that treatment with high doses of endogenous cbls ( hocbl / gscbl ) promotes inos mrna expression as early as 2 h following lps ( unpublished data ) , possibly fast forwarding the immune response . this , together with the hocbl - promoted high nos protein , controlled rise in no synthesis , and thence a moderate production of tnf-/il-6 may form a feedback loop accounting for the tight hocbl regulation of il-1 , and consequently also cox-2 , as seen at 4 h after lps ( figure 10 scheme ) . cobalamin - promoted nos / no early regulation of tnf-/il-6/il-1/cox-2 seems also to be consistent with , and accounts for , the later inhibition of hmgb1 mrna . if expressed at > 18 h and then released extracellularly , hmgb1 can trigger further late release of tnf- , il-1 , and inflammatory products from cox-2 , inos , and excessive ros and rni species , leading to pathology . it is known that the nervous system can modulate circulating tnf- levels via release of acetylcholine by the vagus nerve . but our studies show that cbl essential for acetylcholine synthesis is the first known endogenous inhibitor of late hmgb1 mrna expression . both nicotine and ethyl - pyruvate have been used to block extracellular release of hmgb1 but , in addition to the fact that hocbl appears to impact on hmgb1 much further upstream , at least in tissues , neither drug is endowed with the safety profile of cbl and , more pertinently , neither is known to exert such a central , endogenous regulation of the immune response . but , on the evidence of the general anti - inflammatory regulation observed in our studies , we predict that extracellular release of hmgb1 , from macrophages and pmn , should be negligible with hocbl treatment . the theory that cbl may impact on the nos indirectly , through the contribution of its two known mammalian coenzymatic functions to nos substrate and cofactor assembly and , indeed , to assembly of the nos protein itself , may further explain our findings , including the cbl - promoted high nos protein ( figure 10 scheme ) . furthermore , a deficiency of any of the nos substrates and cofactors ( the likely result of cbl functional deficiency in endotoxaemia ) is known to result in less tightly coupled nos activity and increased free radical generation [ 112 , 113 ] , with a corollary increase in inflammatory mediators and prolonged period of nos activity , indexed by our observed higher lps - only nos nitrite levels . ( in a forthcoming study we will also analyse more exactly how cbl may shift the ratio of no / gsno / onoo and related species ) . 
it may also be that cbl , as adocbl and its radical , takes a direct , active part in nos catalysis , as a third mammalian cbl cofactor . from this perspective , the high nos protein seen with high cbl administration may be a classical instance of the cofactor promoting coenzyme assembly . such a central , direct cbl / nos , catalytic interaction would further reduce excess production of toxic forms of no , as well as superoxide and other related ros and rni species . the consequent , more precise , pro- and anti - inflammatory signalling should again result in a shorter , more effective period of nos activity , thus lower detectable nitrite levels ( as seen ) with the beneficial signalling and antioxidant effects of no predominant . however , direct cbl scavenger interactions , in discrete intracellular compartments , with primarily toxic rnis , such as onoo-/onooh / no2 , with which cbl interacts ex vivo , , may also play a part and can not be ruled out as contributing to a more complex picture behind our observed results ( figure 10 ) . these novel observations on the mechanism behind cobalamin protection in endotoxaemia suggest that we may be looking at the ideal natural , selective and collective regulator of the nos , and thence of cytokines and other pivotal factors , in immune challenge and sepsis . in fact , it is now accepted that anti - inflammatory therapies ( based on blocking a specific mediator ) fail toutcourt in sepsis and that a more modulatory approach , which regulates the homeostatic inflammatory response , ( in itself beneficial ) , could be successful . thus , our findings may have significant clinical implications , not only for the treatment of sepsis , but also for other analogous inflammation - driven conditions , such as cancer and malaria , where nos / no deregulation , and consequent loss of control over key inflammatory mediators , are equally pathogenic [ 115 , 116 ] .
background . nos/no inhibitors are potential therapeutics for sepsis , yet they increase clinical mortality . however , there has been no in vivo investigation of the ( in vitro ) no scavenger , cobalamin 's ( cbl ) endogenous effects on nos/no / inflammatory mediators during the immune response to sepsis . methods . we used quantitative polymerase chain reaction ( qpcr ) , elisa , western blot , and nos griess assays , in a c57bl/6 mouse acute endotoxaemia model . results . during the immune response pro - inflammatory phase , parenteral hydroxocobalamin ( hocbl ) treatment partially inhibits hepatic , but not lung , inos mrna and promotes lung enos mrna , but attenuates the lps hepatic rise in enos mrna , whilst paradoxically promoting high inos / enos protein translation , but relatively moderate no production . hocbl / nos/no regulation is reciprocally associated with lower 4 h expression of tnf- , il-1 , cox-2 , and lower circulating tnf- , but not il-6 . in resolution , 24 h after lps , hocbl completely abrogates a major late mediator of sepsis mortality , high mobility group box 1 ( hmgb1 ) mrna , inhibits inos mrna , and attenuates lps - induced hepatic inhibition of enos mrna , whilst showing increased , but still moderate , nos activity , relative to lps only . in lethal endotoxaemia experiments ( lps+d - galactosamine ) , hocbl afforded significant , dose - dependent protection in mice . conclusions . hocbl produces a complex , time- and organ - dependent , selective regulation of nos/no during endotoxaemia , corollary regulation of downstream inflammatory mediators , and increased survival . this merits clinical evaluation .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion 5. Conclusions
PMC3987790
autoantibodies are a hallmark in the diagnosis of many systemic autoimmune rheumatic diseases ( sard ) including idiopathic inflammatory myopathies ( iim ) ( reviewed in [ 1 , 2 ] ) . most of those autoantibodies are directed to intracellular proteins , including nuclear and cytoplasmic antigens , and based on their specificity , autoantibodies in iim can be grouped into myositis specific autoantibodies ( msa ) and myositis associated autoantibodies ( maa ) ( reviewed in [ 13 ] ) . the presence of msa and maa has become a key feature for classification and diagnosis of iim and they are increasingly used to define clinically distinguishable iim subsets . among the msa , autoantibodies against aminoacyl - trna synthetases ( ars ) were detected in 2535% of iim patients . other autoantibodies in iim are directed to the signal recognition particle ( srp ) , chromodomain helicase dna binding protein 4 ( mi-2 ) , sae / small ubiquitin - related modifier ( sumo-1 ) , mj / nuclear matrix protein 2 ( nxp2 ) , melanoma differentiation - associated gene 5 ( mda5)/clinically amyopathic dermatomyositis p140 ( cadm-140 ) , and transcription intermediary factor ( tif1- ) gamma ( p155/140 ) . anti - jo-1 antibodies are the most common , predominantly found in 1530% of patients with polymyositis ( pm ) and in 6070% of those with interstitial lung disease ( ild ) . autoantibodies directed towards other ars are less common , each reaching less than 5% prevalence in iim . msa and maa are commonly detected using immunoprecipitation ( ip ) or line immunoassays ( lia ) . muscle pain and weakness are common side effects of statins which are commonly used to reduce cholesterol levels . about 5% of statin users experience muscle pain and weakness during statin treatment . in 2010 , antibodies to 3-hydroxy-3-methylglutaryl coenzyme a reductase ( hmgcr ) have been identified in patients with autoimmune necrotizing myopathies associated with statin use [ 57 ] . recently , a significant difference between statin - exposed and statin - unexposed anti - hmgcr positive patients has been found . therefore , diagnostic tests are needed to aid in the diagnosis of this severe clinical condition [ 9 , 10 ] . this study aimed to compare different technologies for the detection of anti - hmgcr antibodies and analyze the clinical phenotype and autoantibody profile of the patients and to investigate the epitope specificity of anti - hmgcr antibodies . a total of 20 samples from myositis patients positive for anti - hmgcr antibodies ( see table 1 ) using a research addressable laser bead assay ( albia , rouen , france ) identified in a previous study and 20 negative controls ( age and sex matched ) were collected and tested using various methods . to verify the specificity of the quanta lite hmgcr elisa a total of 824 controls diagnoses of the patients were established based on the respective disease classification criteria and as previously described . patient data was anonymously used in keeping with the latest version of the helsinki declaration of human research ethics . the first antigen was obtained from a commercial source ( sigma ) and consists of the hmg - coa reductase catalytic domain expressed in e. coli and fused to gst protein with a final molecular weight of 76 kda ( including the fusion protein ) . the hmgcr dna was cloned into the piex / bac-3 vector using homo sapiens hmgcr and transcript variant 1 ( nm_000859.2 ) amino acids 427888 . 
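As a rough consistency check on the reported construct sizes, the catalytic-domain fragment described above (residues 427-888) can be converted to an approximate mass using the common ~110 Da average residue mass; the ~26 kDa figure for the GST fusion partner is an assumption added here, not taken from the text.

```python
# Rough mass check for the commercial GST-fused HMGCR catalytic domain
# (reported 76 kDa including the fusion partner). Uses an average residue
# mass of ~110 Da; the ~26 kDa GST mass is an assumed typical value.
aa_start, aa_end = 427, 888
n_residues = aa_end - aa_start + 1          # 462 residues of the catalytic domain
domain_kda = n_residues * 110.0 / 1000.0    # ≈ 50.8 kDa
gst_kda    = 26.0                           # typical GST tag mass (assumed)

print(f"catalytic domain ≈ {domain_kda:.1f} kDa + GST ≈ {gst_kda:.0f} kDa "
      f"gives ≈ {domain_kda + gst_kda:.0f} kDa (reported 76 kDa)")
```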
the clone is n - terminal 10x histidine tagged and expressed in sf9 cells with a molecular weight of 51 kda . the cells were grown to 24 10 cells / ml in sf900 ii sfm medium and infected using 20 ml of hmgcr baculovirus per 1 they were incubated at 27c for 108 hrs while rotating at 140 rpm . the cells were harvested by centrifuging at 8000 rpm for 15 minutes using an slc-6000 rotor . the cell pellets were washed with pbs , centrifuged at 4000 rpm and the pellets were stored at 80c prior to extraction . the hmgcr antigen was extracted using 1 m nacl , 20 mm tris , 0.25% chaps , and 10 mm imidazole buffer , ph 8.0 . the sonicated mixture was centrifuged at 30,000 rpm for 30 minutes using a 50.2ti rotor . the supernatant was collected and run over a ni++ nta imac column equilibrated on the extraction buffer . the column was washed using 1 m nacl , 20 mm tris , 0.25% chaps , and 140 mm imidazole buffer , ph 8.0 and eluted using 1 m nacl , 20 mm tris , 0.25% chaps , and 400 mm imidazole buffer , ph 8.0 . the elution was collected and buffer exchanged using a g25 sec equilibrated using 1 m nacl , 10 mm tris , 0.25% chaps , and 0.09% nan3 buffer , ph 8.0 . the hmgcr antigen was quantified using the calculated extinction coefficient and a280 absorbance measured using a uv / visible spectrophotometer and stored at 80c . in - house antigen ( lot # alo38 ) was compared to sigma antigen ( hmg - coa reductase h7039 059k4055 ) via sds page / western blot . both antigens were loaded to a 15-well 412% bis - tris prepact polyacrylamide gel ( life technologies , carlsbad , california ) , at 0.5 g per well . a seeblue plus2 prestained mes ladder ( life technologies ) was run in lane 1 for molecular weight determination . electrophoresis was performed using a mini blot gel box and mes running buffer ( life technologies ) . proteins were run at 200 volts for 45 minutes using a biorad model 200/2.0 power supply . the remaining samples were then transferred to a nitrocellulose membrane using a life technologies iblot transfer unit . the membrane was then cut into 8 strips with each containing 1 lane of each antigen . strips were then incubated in hrp sample diluent ( inova 508551 ) for 30mins followed by incubation with the appropriate patient samples at a 1 : 100 dilution for 1 hr . the strips were then washed with hrp wash ( inova 508552 ) 4 5 min and incubated with a goat anti - human secondary antibody diluted 1 : 3000 ( jackson immuno research ) in hrp sample diluent for 1 hr . strips were washed 4 5 min in di water then developed with bcip / nbt ( moss , inc . ) . the quanta flash hmgcr ( research use only ) assay is a novel cia that is currently used for research purposes only and utilizes the bio - flash instrument ( biokit s.a . , barcelona , spain ) , fitted with a luminometer , as well as all the hardware and liquid handling accessories necessary to fully automate the assay . the quanta flash assay for this study prior to use , the reagent pack containing all the necessary assay reagents is gently inverted thirty times and the sealed reagent tubes are then pierced with the reagent pack lid . small amounts of the diluted patient serum , the beads , and the assay buffer are all combined into a second cuvette , mixed , and then incubated for 9.5 minutes at 37c . the magnetized beads are sedimented using a strong magnet in the washing station and washed several times followed by addition of isoluminol conjugated anti - human igg and again incubated for 9.5 minutes at 37c . 
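The antigen quantification step above (calculated extinction coefficient with A280 absorbance) is a straightforward Beer-Lambert calculation. In the sketch below the extinction coefficient and the absorbance reading are hypothetical placeholders; only the ~51 kDa construct size comes from the text.

```python
# Beer-Lambert protein quantification from A280, as used for the purified
# HMGCR antigen. Extinction coefficient and absorbance are hypothetical.
a280         = 0.85      # blank-corrected absorbance, 1 cm path length
path_cm      = 1.0
eps_M_cm     = 52_000    # assumed molar extinction coefficient (M^-1 cm^-1)
mw_g_per_mol = 51_000    # reported ~51 kDa construct

molar = a280 / (eps_M_cm * path_cm)   # mol/L
mg_ml = molar * mw_g_per_mol          # g/L, i.e. mg/ml
print(f"concentration ≈ {molar * 1e6:.1f} µM ≈ {mg_ml:.2f} mg/ml")
```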
the isoluminol conjugate is oxidized when sodium hydroxide solution and peroxide solutions ( triggers ) are added to the cuvette , and the flash of light produced from this reaction is measured as relative light units ( rlus ) by the bio - flash optical system . the rlus are proportional to the amount of isoluminol conjugate that is bound to the human igg , which is in turn proportional to the amount of anti - hmgcr antibodies bound to the antigen on the beads . a five - point calibration curve was used to convert optical density values into units . the cut - off was defined as the 99% percentile of level in a previous internal study based on disease controls . the titration of anti - hmgcr antibodies was performed using a luminex - based immunoassay , as described elsewhere . briefly , the recombinant human hmgcr catalytic domain was coupled to fluorescent bioplex cooh - microspheres ( biorad , hercules , ca ) with the bioplex amine coupling kit according to manufacturer 's protocol . a 10 l volume containing 1,250 beads was incubated with patient 's serum , in 96-well plates for 2 h. beads were collected by filtration and washed before adding biotinylated mouse anti - human igg ab . after 1 h incubation and washing , anti - hmgcr ab titers were calculated from the mean fluorescent intensity by comparison with a calibrator consisting in a human positive serum whose titer was arbitrarily set to 100 arbitrary units ( a.u / ml ) . the threshold of positivity of this assay is 20 au / ml . all patients were also tested using assays for the detection of antibodies to extractable nuclear antigens ( ena , bmd , and thermo fisher ) and myositis related antibodies ( scleroderma and myositis profile , d - tek , belgium ) . autoantibodies were detected by indirect immunofluorescence on hep-2000 cells ( reference sa2014-ro , immunoconcepts , sacramento , ca , usa ) . sera were tested at 1/80 screening dilution in pbs buffer , using a fitc - coupled antibody against human igg ( h + l ) . on these cells , the fluorescence pattern suggestive for anti - hmgcr antibodies is a finely granular cytoplasmic staining on a minority ( 3% or less ) of cells with perinuclear reinforcement . autoantibodies to various peptides were studied using pepperchip technology ( pepperprint gmbh , heidelberg , germany ) [ 15 , 16 ] . peptide arrays were blocked using blocking buffer ( rockland blocking buffer mb-070 ( 60 min before the first assay ) . sera were diluted 1 : 1000 in incubation buffer ( pbs , ph 7.4 with 0.05% tween 20 and 10% rockland blocking buffer ) and incubated for 16 h at 4c and shaking at 500 rpm . arrays were then washed ( 2 1 min after each assay with washing buffer ( pbs , ph 7.4 with 0.05% tween 20 ) . secondary antibody ( f(ab')2 goat anti - human igg ( h + l ) dylight680 ) diluted 1 : 5000 was added and incubated 30 min . identified epitopes were synthesized as soluble peptides , coated to elisa plates and tested with anti - hmgcr positive samples . by comparing the pepperprint sequence reactivity data to public domain structures of hmgcr the reactive sequences that were likely to be accessible to antibodies were determined to be gympipvgvagpl , 748gynahaanivtai , mattegclvastn , tdkkpaainwieg , clvastnrgcrai , and rgksvvceavipa . peptides were synthesized by biosynthesis ( san diego , ca ) as biotinylated constructs and tested via streptavidin elisa assay . the data was statistically evaluated using the analyse - it software ( version 1.62 ; analyse - it software , ltd . , leeds , uk ) . 
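Two pieces of the assay data handling described above can be sketched briefly: converting raw signal to units via the five-point calibration curve, and setting a cut-off at the 99th percentile of a control population. The calibrator values and the simulated control distribution below are hypothetical, and the log-log linear interpolation is a simplification of whatever curve model the instrument software actually fits.

```python
# Calibration-curve conversion and 99th-percentile cut-off, sketched with
# hypothetical numbers (not the study's calibrators or controls).
import numpy as np

# five calibrators: assigned units and measured signal (monotonically increasing)
cal_units = np.array([2.0, 10.0, 50.0, 250.0, 1250.0])
cal_rlu   = np.array([6_000, 40_000, 240_000, 850_000, 1_350_000])

def rlu_to_units(rlu):
    """Piecewise-linear interpolation in log-log space between calibrators
    (a simplification of a full 4-parameter logistic fit)."""
    return 10 ** np.interp(np.log10(rlu), np.log10(cal_rlu), np.log10(cal_units))

rng = np.random.default_rng(0)
controls_units = rng.lognormal(mean=1.0, sigma=0.6, size=300)  # simulated controls
cutoff = np.percentile(controls_units, 99)                     # 99th-percentile cut-off

sample_rlu = 300_000
print(f"sample = {rlu_to_units(sample_rlu):.1f} units, cut-off = {cutoff:.1f} units")
```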
spearman 's correlation and cohen 's kappa agreement test were carried out to analyze the agreement between portions . as the first step , the inova hmgcr antigen was compared to the sigma antigen using western blot analysis to analyze the size , purity , and the reactivity pattern of both antigens . the western blot shows one anti - hmgcr positive sample for both inova and sigma antigens . using the sigma antigen , in contrast , using the inova antigen , only one distinct band is recognized and stained ( ~55 kda ) . next , the antigens were compared by elisa and the results obtained with the 40 samples were highly correlated ( = 0.80 , see figure 1 ) . subsequently , anti - hmgcr antibodies were detected using albia , elisa , and cia and all three assays showed qualitative agreements of 100% ( see figure 2 ) . in addition , the levels of anti - hmgcr antibodies also showed significant correlation : elisa versus albia , = 0.84 ( 95% confidence interval , 0.720.91 ) , albia versus cia , = 0.89 ( 95% ci , 0.800.94 ) , and elisa versus cia , = 0.86 ( 95% ci , 0.750.92 ) . briefly , their mean age was 54.4 years ( range from 16 to 84 ; standard deviation 21.1 years ) and 16/20 ( 80.0% ) were females . in 18/20 ( 90.0% ) of the patients , a diagnosis of necrotizing myopathy was established . the two remaining patients were not diagnosed at the last clinical follow - up but are highly suspected to suffer from myositis . the mean age at disease onset was 48.5 ( sd 20.1 years , range 1283 years ) . nine out of 20 ( 45% ) anti - hmgcr positive patients were on statin . all patients with anti - hmgcr antibodies were negative for all autoantibodies tested ( ss - a , ss - b , sm , rnp , jo-1 , scl-70 , centromere , mi2 , pm / scl , ku , pl-7 , pl-12 , and srp ) . testing various controls showed high specificity ( 99.3% ) . 3/518 apparently healthy individuals and 3/117 patients with sicca syndrome were positive ( see figure 3 ) ; they all had low titers of anti - hmgcr antibodies . to investigate the staining pattern of anti - hmgcr antibodies a strongly reactive patient sample was used to stain hep-2 cells . on these cells , the fluorescence pattern suggestive for anti - hmgcr antibodies is a finely granular cytoplasmic staining on a minority ( 3% or less ) of hep-2000 cells with perinuclear reinforcement ( see figure 4 ) . several potential epitopes were identified using the solid phase peptide arrays ( see figure 5 ) . a total of six sequences were selected based on surface exposure ( gympipvgvagpl , gynahaanivtai , mattegclvastn , tdkkpaainwieg , clvastnrgcrai , and rgksvvceavipa ) , synthesized as soluble peptides and tested with positive sera and controls by elisa . reactivity found by solid phase peptide arrays could not be confirmed ( data not shown ) . first described in 2010 , anti - hmgcr antibodies represent a promising biomarker to aid in the diagnosis and treatment decision of idiopathic inflammatory necrotizing myopathies ( iinm ) . the present study is the first to compare different methods for the detection of anti - hmgcr antibodies ( albia , elisa , and cia ) , two of them manufactured in a precommercial setting and one based on albia . both , the elisa and the cia , use a 63 kda fragment of hmgcr which has previously been described as the epitope containing region of the antibodies . in addition , we studied the coexistence of anti - hmgcr antibodies with other msa and maa and found that in our patients anti - hmgcr antibodies were the only detectable autoantibody . 
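A minimal sketch of the agreement statistics reported in this section (Spearman's rho for antibody levels, Cohen's kappa for qualitative positive/negative agreement) is given below, run on simulated data rather than the study samples; the cut-off of 20 units is an arbitrary placeholder.

```python
# Agreement analysis sketch matching the statistics described. The antibody
# levels below are simulated, not the study data.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import cohen_kappa_score

rng   = np.random.default_rng(1)
elisa = rng.lognormal(2.0, 1.0, size=40)               # simulated antibody levels
cia   = elisa * rng.lognormal(0.0, 0.3, size=40)       # correlated second assay

rho, p = spearmanr(elisa, cia)

elisa_pos = (elisa > 20).astype(int)                   # hypothetical cut-off
cia_pos   = (cia   > 20).astype(int)
kappa     = cohen_kappa_score(elisa_pos, cia_pos)

print(f"Spearman rho = {rho:.2f} (p = {p:.1e}), Cohen's kappa = {kappa:.2f}")
```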
our data on the association between statin use and anti - hmgcr antibodies confirms previous results . we also confirm that the majority of patients with anti - hmgcr antibodies have iinm . anti - hmgcr antibodies might become available as a single assay on a fully automated analyzer , as elisa and/or as part of multiparameter assays . a recent study showed that the majority of patients with and without statin exposure , including those with self - limited statin intolerance , do not develop anti - hmgcr antibodies . therefore , anti - hmgcr antibodies are highly specific for those with an iim . to further analyze the specificity especially against healthy and diseased controls , we performed a specificity study by elisa . using various disease controls ( n = 824 ) , very high specificity ( 99.3% ) of the elisa test . therefore , anti - hmgcr antibodies detected by elisa , especially those moderate and high titers , are highly indicative of iinm . recently , an increasing prevalence of iinm was reported . in about half of the identified patients ( 45% ) the majority of those patients were statin users ( 67% ) compared to 18% in the group with detectable autoantibodies ( i.e. , jo-1 , srp , pm / scl , and ro52 ) . future studies are mandatory to investigate the pathophysiological mechanism of anti - hmgcr antibodies and the root cause for the increasing prevalence iinm . although we could not find any other autoantibody in our patients with anti - hmgcr reactivity , we can not rule out that the patients have autoantibodies that have not been tested in our study ( i.e. , anti - tif1 families , anti - mda5 , anti - nxp2 , and so forth ) . autoantibodies to intracellular antigens ( referred to as antinuclear antibodies ) are commonly tested using iif on hep-2 cells to aid in the diagnosis of systemic autoimmune diseases including myositis . consequently , we wanted to study if anti - hmgcr antibodies decorate certain structure on hep-2 cells . we found that patients with anti - hmgcr antibodies frequently stain cytoplasmic structures , however , not in all cells . in this context it is important to point out that the sensitivity for antibodies to cytoplasmic antigens ( i.e. , jo-1 or ribosomal p ) is limited . in summary , further studies are needed to ( 1 ) confirm the observed staining pattern , to ( 2 ) analyze the pattern on slides from different manufacturers and to ( 3 ) assess the reliability ( sensitivity ) of iif hep-2 for the detection of anti - hmgcr antibodies . using solid phase peptide synthesis , we found several peptides reacting with anti - hmgcr antibodies contained in a serum of a patient with statin - associated necrotizing myopathies . in order to confirm the reactivity using a different method , we selected surface exposed epitope sequences for synthesis of soluble peptides . although we were unable to confirm the peptide reactivity using soluble hmgcr derived peptides , it can not be ruled out that anti - hmgcr antibodies bind linear epitopes . however , based on our data it is likely that conformational structures play a role in the formation of the major epitope . further studies are needed to analyze the nature of the epitope on hmgcr and to address the reason for the discrepant results between the two methods for the detection of antibodies to synthetic peptides . different technologies including phage or bacterial display , synthetic peptides , or recombinant proteins might prove useful . 
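The 99.3% specificity discussed above is consistent with 6 positives among the 824 controls (3/518 healthy and 3/117 sicca, assuming no positives in the remaining control groups). A small sketch of how such a specificity estimate could be given an exact Clopper-Pearson confidence interval is shown below; the interval itself is not reported in the text.

```python
# Specificity with an exact (Clopper-Pearson) confidence interval, using
# counts consistent with the reported 99.3% figure. A sketch, not the
# authors' own analysis.
from scipy.stats import beta

n_controls      = 824
false_positives = 6
true_negatives  = n_controls - false_positives

spec  = true_negatives / n_controls
lower = beta.ppf(0.025, true_negatives,     false_positives + 1)
upper = beta.ppf(0.975, true_negatives + 1, false_positives)
print(f"specificity = {spec:.3%} (95% CI {lower:.3%} to {upper:.3%})")
```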
anti - hmgcr antibodies are strongly associated with iinm and to a lesser extent with statin use . testing for anti - hmgcr antibodies might prove useful in the diagnosis of iim and in the differentiation between self - limited and persistent statin associated myopathy which requires long - term immunosuppressive treatment .
diagnostic tests are needed to aid in the diagnosis of necrotizing myopathies associated with statin use . this study aimed to compare different technologies for the detection of anti - hmgcr antibodies and analyze the clinical phenotype and autoantibody profile of the patients . twenty samples from myositis patients positive for anti - hmgcr antibodies using a research addressable laser bead assay and 20 negative controls were tested for autoantibodies to hmgcr : quanta lite hmgcr elisa and quanta flash hmgcr cia . all patients were also tested for antibodies to extractable nuclear antigens and myositis related antibodies . to verify the specificity of the elisa , 824 controls were tested . all three assays showed qualitative agreements of 100% and levels of anti - hmgcr antibodies showed significant correlation : spearman 's rho > 0.8 . the mean age of the anti - hmgcr antibody positive patients was 54.4 years , 16/20 were females , and 18/20 had necrotizing myopathy ( two patients were not diagnosed ) . nine out of 20 anti - hmgcr positive patients were on statin . all patients with anti - hmgcr antibodies were negative for all other autoantibodies tested . testing various controls showed high specificity ( 99.3% ) . anti - hmgcr antibodies are not always associated with the use of statin and appear to be the exclusive autoantibody specificity in patients with statin associated myopathies .
1. Introduction 2. Materials and Methods 3. Results 4. Discussion 5. Conclusions
PMC3630340
the prevalence of prostate cancer in korea quadrupled between 2002 and 2008 , with the highest increased incidence rate in total forms of malignancy . the incidence of prostate cancer in korea increased up to 24.8 per 100,000 men in 2009 in comparison with 13 per 100,000 men in 2008 . certain environmental elements have had an effect on the increased rate of prostate cancer , including the transition to western dietary habits among koreans and an aging population because of the rise in average life expectancy . in addition to environmental causes , the medical development of laboratory diagnoses and prostate - specific antigen screening campaigns by the korean urological association and other health organizations have helped to raise public awareness about the increase in prostate cancer in koreans . despite the rapid increase in prostate cancer incidence in korea , no published multicenter data on practical and clinical changes in korean patients with prostate cancer are available . in the united states , a database application project known as the cancer of the prostate strategic urologic research endeavor ( capsure ) was initiated in 1995 to establish a web - based database for longitudinal observations of prostate cancer patients in natural settings . the project began with 10 participating healthcare centers and increased to 26 centers in 1 year . currently , capsure is one of the most powerful prospective study groups for prostate cancer in the world consisting of approximately 14,000 registered prostate cancer patients . the most recent database study in asia , the japan study group for prostate cancer ( j - cap ) , was developed in 2001 . the j - cap database comprises 17,872 prostate cancer patients from prospective studies and was developed to improve patient care . against this background , the multicenter korean prostate cancer database ( k - cap ) was created in 2011 by combining several urologic healthcare institutions into a nationwide multicenter database for prospective studies about prostate cancer . the purpose of k - cap is to gather basic information about korean prostate cancer patients and analyze the clinical and oncological outcomes of prostate cancer to improve patient care . the purpose of this article was to declare the establishment of k - cap , to provide urologists with an overview of the k - cap methodology , and to present pilot test results from the first umbrella database comprising patient information from three participating institutions ( gangnam severance hospital , seoul st . participating institutions include asan medical center , samsung medical center , seoul national university bundang hospital , seoul st . any institution that wants to participate in k - cap must first obtain approval from their ethics committees ( or institutional review board ) . for every eligible institution , all patients with newly diagnosed prostate cancer will be registered in the k - cap web - based electronic server . all registered patient information will be updated periodically . after a diagnosis of prostate cancer , patients are invited by their urologists to join the study . all patients with biopsy - proven prostate cancer are offered enrollment in the study , regardless of disease stage , severity , or type of treatment . included in the database is all of the physician - gathered general information about a patient , including sociodemographic data . 
in addition , all of the patient 's clinical information at diagnosis is gathered , including diagnostic imaging , laboratory test results , pre - existing and postdiagnosis comorbidities , medical history , treatment types ( e.g. , active surveillance , androgen deprivation / hormonal medications , brachytherapy , cryotherapy , external beam radiation , hormone refractory and chemotherapy agents , and radical prostatectomy ) , neoadjuvant and adjuvant treatments , and other nontreatment - related medications . for patients who have undergone radical prostatectomy , detailed perioperative variables are measured and registered , including procedure types , intraoperative or postsurgical complications , surgical pathology results , and clinical and oncological survival outcomes . the patient files for the k - cap database include approximately 1,000 required clinical variables . the observational study of prostate cancer patients is the ultimate goal of the k - cap database . accordingly , the web - based electronic case report form ( e - crf ) was developed by using the startrial system , which is a commercially developed electronic data capture system . this e - crf system can be accessed over the internet on a 24-hour basis by coordinators and investigators at eight participating clinics ( suppl . the system was developed at the catholic university of korea , south korea , and was previously tested with the help of staff at other participating sites . the database system was implemented with a microsoft sql server running on microsoft nt servers and was programmed with java . this was accomplished by directly partnering the electronic databases of the study 's participating institutions with the registered patient files . all legal requirements for privacy protection were respected in this procedure . according to the privacy rules of k - cap , cases for registered patients are to be updated once every 3 months in terms of test dates , changes in treatment , and progress data as a matter of course for maintaining the k - cap patient forms until death or patient withdrawal . follow - up with respect to morbidity and death from prostate cancer is assured by cooperation with national statistics . when patients die , the date , location , and cause of death are recorded in the database . after completion of the data control for registered patients , analytic reports are prepared for prospective studies and research to improve the care of prostate cancer patients . these data summaries evaluate clinical occurrences , patient quality of life , economic impact , and oncological outcomes and compare types of treatment by stage and practice among patients with prostate cancer in korea . the complex studies use well - sorted variables from the k - cap database , together with adjustment methods including case controls , standardization , regression trees , and multivariate regression approaches for optimal analysis . the entry form contains five categories : patient demographics , clinical parameters , pathological parameters , other treatment parameters ( hormone therapy or radiotherapy ) , and parameters associated with follow - up . access given to an investigator allows extraction of only his or her patient 's data . the data format for data extraction includes csv format , ms excel ( xls format ) , or ms access ( accdb format ) . the system manager is able to select data by accessing the sql server system directly using the sql program ( suppl . 2 ) . 
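The extraction formats listed above (CSV, XLS, ACCDB) lend themselves to simple downstream scripting. The sketch below uses a tiny inline table as a stand-in for an investigator's CSV extract; every column name, value, and file name is hypothetical, since the real entry form holds roughly 1,000 variables.

```python
# Working with an investigator-level extract from the e-CRF. A miniature
# inline table stands in for the real export (column names hypothetical);
# in practice it would be loaded with pd.read_csv("kcap_export.csv").
import pandas as pd

df = pd.DataFrame({
    "patient_id":        ["P001", "P002", "P003", "P004"],
    "clinical_t_stage":  ["T1c", "T2a", "T1c", "T3a"],
    "primary_treatment": ["radical_prostatectomy", "external_beam_radiation",
                          "active_surveillance", "radical_prostatectomy"],
    "last_update":       ["2012-01-10", "2011-09-02", "2012-02-20", "2011-12-01"],
})

# cross-tabulate primary treatment by clinical T stage
print(pd.crosstab(df["clinical_t_stage"], df["primary_treatment"]))

# flag records that have missed the quarterly (3-month) update required by K-CaP
df["last_update"] = pd.to_datetime(df["last_update"])
overdue = df[df["last_update"] < pd.Timestamp.today() - pd.Timedelta(days=90)]
print(f"{len(overdue)} record(s) overdue for the quarterly update")
```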
to ensure the quality of the database , it is most important that accurate data be provided by the investigators . however , if an unrecognized code is entered into the database , the system automatically checks the input code and does not process the invalid entry ( a minimal sketch of this kind of check is shown after this paragraph ) . investigators are also able to compare their own raw data with the data in the k - cap database at any time , because each institution 's data can be extracted if the investigator has access rights . a pilot test was performed to determine whether the standard e - crf system could be used with the limited perioperative information of prostate cancer patients who underwent radical prostatectomy . between january 2006 and december 2010 , 858 prostate cancer patients who underwent radical prostatectomy at three institutions ( gangnam severance hospital , seoul st . mary 's hospital , or seoul national university bundang hospital ) were registered in the e - crf system of k - cap . the database collected preoperative variables , including patient age , prostate - specific antigen level at diagnosis , preoperative gleason score , clinical stage , and the initial procedure chosen for radical prostatectomy . postoperative variables were also registered , including pathologic gleason score and stage . for each patient , the t stage was determined in accordance with the tumor - node - metastasis ( tnm ) categories published in 2010 ( american joint committee on cancer , 7th edition ) . all of the pathologic parameters that influence the determination of tnm staging were registered in detail . the first step was to gather the prostate cancer database files from each institution in excel format and to send these files to an e - crf system manager controlling the data input / output of the web - based database . subsequently , the manager converted the coded excel files for use in the e - crf system . finally , we were able to obtain outcomes from the multicenter e - crf database according to several conditions that we initially established to fit the goals of k - cap .
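the automatic rejection of unrecognized codes mentioned above can be sketched as a simple lookup against a field - specific code list ; the field names and code values below are invented for illustration and are not the real k - cap data dictionary .

import java.util.Map;
import java.util.Set;

public class KcapCodeCheck {
    // hypothetical codebook: each field accepts only the codes listed for it
    private static final Map<String, Set<String>> CODEBOOK = Map.of(
        "clinical_t_stage", Set.of("T1a", "T1b", "T1c", "T2a", "T2b", "T2c", "T3a", "T3b", "T4"),
        "primary_treatment", Set.of("RP", "EBRT", "BRACHY", "ADT", "AS", "CRYO"));

    // returns true only if the submitted value is a known code for the given field
    public static boolean accept(String field, String value) {
        Set<String> allowed = CODEBOOK.get(field);
        return allowed != null && allowed.contains(value);
    }

    public static void main(String[] args) {
        System.out.println(accept("clinical_t_stage", "T2a")); // true  -> value is stored
        System.out.println(accept("clinical_t_stage", "T9"));  // false -> input is not processed
    }
}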
the multicenter k - cap test database , which contains data for a total of 858 prostate cancer patients who underwent radical prostatectomy at three institutions , represents a single source of comprehensive information about the patients and the disease . a java program was developed to upload excel data into the e - crf database by matching each column name ; when an uploaded excel file is processed by the java program , the data are automatically entered into the designated columns . the first verification step is to compare the number of records between the uploaded excel file and the sql server database .
the second step is to transform data from the sql server into excel format and then compare it with the original data set ( a minimal sketch of this upload - and - verification flow is given after this paragraph ) . we confirmed that the process of converting data from excel files into the e - crf system is easy and exact and that output data from the web - based database system are quickly retrievable under a variety of query conditions . complete treatment histories and patient information allow for comparison of different outcomes . as the number of patients enrolled in the database increases , the database will help to categorize patients and identify historical controls for various purposes . in existing research about prostate cancer , patients have often been selected without consistent consideration of treatment type or stage , regardless of the varying aims of specific studies . for example , the life expectancy of korean prostate cancer patients with multiple bone metastases is not yet known . in addition , we do not yet know the exact clinical course of korean prostate cancer patients after treatment . even though we reviewed the results of many studies and drew conclusions about disease progression [ 7 - 12 ] , the results came from single - institution cohort studies with small sample sizes . furthermore , despite the many studies of prostate cancer in koreans , most of our knowledge is still based on western prostate cancer databases . thus , we needed a prostate cancer database of our own , and we believe that k - cap is that database . however , under a system of longitudinal observation , the collection of analyzable results will require more time than is needed for retrospective studies . we expect that at least 5 years will be required to obtain tangible and qualitative results from the k - cap database . if the initial results are positive , however , lasting research output will accumulate from the database on an ongoing basis . in the case of capsure , the first published results about trends in prostate cancer treatment ( 2003 ) appeared 7 years after the initial development of the capsure project . currently , a pubmed search with the term " capsure " delivers more than 100 relevant articles , including articles on the j - cap project in japan . achievement of this long - term plan requires verification of the web - based database system as the first step toward the establishment of k - cap . the pilot test with prostate cancer patient databases from three institutions showed the feasibility of a web - based system for k - cap . an important caveat is that the results of the pilot test are not intended to support analytical conclusions about the specific subjects from the three institutions . the intent of the pilot test was to ensure that the k - cap database system was operating accurately before other participating institutions provided proprietary data . as this article goes to press , patient data from the remaining participating institutions in the korean healthcare system are being registered . we estimate that approximately 4,000 cases from participating institutions will be registered as the basic background data for the k - cap database ( including 620 patients from asan medical center [ j.h.h . , c.s.k . ] , 1,400 patients from samsung medical center [ h.m.l . ] , and 340 patients from seoul national university bundang hospital [ s.k.h . ] ) . the patients with newly diagnosed prostate cancer from the participating institutions will be registered , including the required entry information for k - cap .
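the upload - and - verification flow referred to above can be sketched as follows . the parsed spreadsheet is modelled as an in - memory list of column - name - to - value maps ( in practice a reader such as apache poi would produce it ) , and the table and column names are assumptions rather than the real k - cap schema .

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.List;
import java.util.Map;

public class KcapUpload {
    // database columns that must be matched by name against the spreadsheet header
    static final List<String> DB_COLUMNS = List.of("patient_id", "psa_at_diagnosis", "gleason_biopsy");

    public static void upload(Connection con, List<Map<String, String>> excelRows) throws Exception {
        String sql = "INSERT INTO kcap_rp_case (patient_id, psa_at_diagnosis, gleason_biopsy) VALUES (?, ?, ?)";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            for (Map<String, String> row : excelRows) {
                for (int i = 0; i < DB_COLUMNS.size(); i++) {
                    // column-name matching: each spreadsheet column feeds the database column of the same name
                    ps.setString(i + 1, row.get(DB_COLUMNS.get(i)));
                }
                ps.addBatch();
            }
            ps.executeBatch();
        }
        // first check: the row count in SQL Server must equal the number of records in the Excel file
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT COUNT(*) FROM kcap_rp_case")) {
            rs.next();
            if (rs.getInt(1) != excelRows.size()) {
                throw new IllegalStateException("record count mismatch between excel file and database");
            }
        }
        // the second check (exporting back to excel and diffing against the original file)
        // would reuse DB_COLUMNS in the reverse direction
    }
}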
according to the patients ' privacy agreement , basic information about the patients will be encrypted by using a program unique to the k - cap database and will remain unknown to our team and even to the database manager . thus , the privacy of the patients registered in k - cap is guaranteed . developing a uniform format for the k - cap database was one of the main concerns of the project staff at each of the participating institutions . for example , because of different policies at each institution , differences in preoperative prostate volume measurement ( by transrectal ultrasonography ) existed between the participating institutions . some institutions measured prostate volume preoperatively in all patients , whereas at other institutions the procedure was performed only at certain clinical stages . furthermore , each institution had different postoperative follow - up schedules for administering patient - reported questionnaires , such as the international index of erectile function . in addition to policy differences between the institutions , internal problems concerning a uniform database format also existed . some pathologists reported the precise weights of the tumors in grams , whereas others reported approximate volumes using relative measurements ( e.g. , v1 , v2 , and v3 ) ( a minimal harmonization sketch is given after this paragraph ) . importantly , we identified several problems such as those mentioned above at an early stage and have discussed solutions intended to achieve a uniform format for the database . we will continue to revise and develop the k - cap format in an effort to provide the flexibility needed to achieve the long - term goals of the database . another important consideration in the management of the k - cap database is how best to maintain continuous updates about patients . it is difficult to predict how many new prostate cancer patients from each participating institution will be enrolled in k - cap . likewise , it is difficult to update k - cap data periodically if the number of enrolled patients becomes too large before the k - cap settings are fully established . in a longitudinal observational study , however , the quality of the data is more important than the quantity . to address this problem , we initially enrolled a limited number of prostate cancer patients representative of each institution rather than all of the prostate cancer patients at that institution . after determining the appropriate k - cap settings through trial and error , we intend to enroll all prostate cancer patients in the k - cap database and to include patients from any institution that wants to participate in k - cap . one last consideration is the importance of k - cap 's collaboration with capsure and j - cap . k - cap is intended to be used not only for domestic investigation but also for international participation and collaboration . even though k - cap is still in the preparatory stage , we have already received valuable advice and support from the working staff and chairmen of capsure and j - cap . collaboration will best be achieved if the format of the k - cap database is analogous to that of the other databases . the k - cap database was therefore developed with reference to the original excel databases of capsure and j - cap . the capsure database contains many variables in which each prostate cancer treatment is taken into account equally ; the j - cap database differs in this respect , which probably reflects the different trends in prostate cancer treatment between the united states and japan .
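the harmonization problem described above ( exact tumor weights in grams versus relative v1/v2/v3 categories ) can be handled with a small mapping rule ; the cut - offs below are invented for illustration only and do not represent an agreed k - cap policy .

public class TumorVolumeHarmonizer {
    // maps either a relative category ("v1", "v2", "v3") or a numeric gram value to one categorical field
    public static String harmonize(String reported) {
        String v = reported.trim().toLowerCase();
        if (v.equals("v1") || v.equals("v2") || v.equals("v3")) {
            return v;                             // already categorical
        }
        double grams = Double.parseDouble(v);     // e.g. "0.8" reported in grams
        if (grams < 1.0) return "v1";             // illustrative cut-offs, not K-CaP policy
        if (grams < 5.0) return "v2";
        return "v3";
    }

    public static void main(String[] args) {
        System.out.println(harmonize("V2"));      // v2
        System.out.println(harmonize("3.2"));     // v2 under the assumed cut-offs
    }
}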
the ability to reference these different databases will help to enrich the k - cap database so that it can cover many fields . in this article , we have announced the establishment of k - cap on behalf of all the participants in the k - cap project . the main purpose of k - cap is to establish and maintain a large database of randomized clinical trials and observational longitudinal studies of korean prostate cancer patients . several retrospective studies about prostate cancer in korea have been published in domestic and international journals . we believe that mutual cooperation is needed to make the k - cap project the equal of the capsure and j - cap projects in the united states and japan , respectively . although our beginnings have been humble , we are confident that our dedication and ongoing efforts will result in k - cap becoming a world - renowned longitudinal observation database , alongside capsure and j - cap . this article announces the development and establishment of k - cap as the first database for comprehensive data collection about prostate cancer patients in korea for the purposes of research and improved patient care . this study tested the web - based system of the k - cap database by analyzing coded excel files from three institutions . the system operated precisely , and the pilot test verified that the web - based database system is suitable for k - cap . the system processes will run successfully as long as sufficient and updated data are continuously provided to the system manager . as soon as possible , complete statistical results of registered prostate cancer patients will be reported as the basic background data .
purpose : the purpose of this article was to announce the establishment of the multicenter korean prostate cancer database ( k - cap ) and to provide urologists with details about k - cap 's methodology . materials and methods : the initial participating k - cap institutions include five medical centers in korea . first , we registered prostate cancer patients who underwent radical prostatectomy as the basic background data . k - cap is poised to combine these initial observational longitudinal studies with those of other eligible institutions as the database grows . all current prostate cancer patients in korea are able to be registered in the web - based database system and thereby have a role in several observational studies . the structure of the database for k - cap was developed by matching it with the respective data from different studies . the operability of the k - cap database system was verified by using the existing databases from three participating institutions . results : the analysis of the clinicopathologic characteristics of patients with the use of the web - based database was successfully conducted . we confirmed the accurate operation of the web - based database system without any difficulties . conclusions : we are announcing the establishment of k - cap , the first database of comprehensive observational longitudinal studies about prostate cancer in korea . the database will be successfully maintained by sufficiently and continuously updating all patient data covering several treatments . complete statistical results for registered prostate cancer patients are forthcoming as the basic background data to establish the database . even though much trial and error is expected during the development process , we expect that k - cap will eventually become one of the most powerful longitudinal observation databases .
INTRODUCTION MATERIALS AND METHODS 1. Organization of K-CaP 1) Participating institutions (in alphabetical order) 2) Eligible institutions 2. Data collection, entry, follow-up, and retrieval 1) Data collection 2) Entry 3) Follow-up 4) Data retrieval 5) Quality control 3. The e-CRF system for K-CaP RESULTS DISCUSSION CONCLUSIONS SUPPLEMENTARY MATERIALS
PMC5332557
fusion of haploid cells to form a diploid zygote is the defining event of sexual reproduction in eukaryotes ( lillie , 1913 ) . in organisms from every eukaryotic taxon , the plasma membranes of gametes of opposite sex or mating type come into intimate contact and then fuse to form the zygote ( bianchi et al . ) . very little is known about the molecular mechanisms of the membrane fusion reaction between gametes , and a bona fide fusion protein has not been formally identified . the best candidate to date is the ancient gamete plasma membrane protein hap2 , whose presence in green algae , higher plants , unicellular protozoa , cnidarians , hemichordates , and arthropods ( cole et al . , 2014 , ebchuqin et al . , 2014 , johnson et al . , 2004 , kawai - toyooka et al . , 2014 , liu et al . , 2008 , mori et al . , 2006 , steele and dana , 2009 ) indicates it was likely present in the last eukaryotic common ancestor ( leca ) ( wong and johnson , 2010 ) . hap2 was first identified in a screen for male sterility in the flowering plant arabidopsis thaliana ( johnson et al . , 2004 ) and later , under the name gcs1 ( mori et al . , 2006 ) , as a sperm - specific protein shown to be required at an unidentified step in sperm - egg fusion ( mori et al . , 2006 ) . a screen for genes essential for gamete fusion in the green alga chlamydomonas independently uncovered hap2 , showing that it is expressed only in minus gametes and is exclusively present on an apically localized membrane protuberance termed the minus mating structure ( liu et al . , 2008 ) ( see figure 1a for a diagram of chlamydomonas fertilization ) . studies in chlamydomonas and plasmodium ( the pathogen causing malaria in humans ) revealed that hap2 mutant gametes in both organisms were fully capable of robust adhesion to gametes of the opposite mating type or sex , but merger of the lipid bilayers was abrogated ( liu et al . , 2008 ) . in both organisms , adhesion relies on proteins that are species - limited : fus1 in chlamydomonas plus gametes and its unidentified receptor in minus gametes ( misamore et al . , 2003 ) , and p48/45 in plasmodium berghei gametes ( van dijk et al . , 2001 ) . based on these findings , which have since been confirmed in arabidopsis thaliana ( mori et al . , 2014 ) and the ciliated protozoan tetrahymena thermophila ( cole et al . , 2014 ) , a model emerged positing that hap2 , a single - pass transmembrane protein , functions after species - limited adhesion in the membrane fusion process between gametes ( liu et al . , 2008 ) . furthermore , in all of these organisms , hap2 is required in only one of the two gametes , raising the possibility that it may function similarly to fusion proteins of enveloped viruses ( wong and johnson , 2010 , harrison , 2015 ) . to understand the function of hap2 at the molecular level , we carried out concerted bioinformatic , functional , and x - ray structural analyses of hap2 from chlamydomonas reinhardtii . initial bioinformatic analyses identified weak similarity to class ii fusion proteins , revealing a segment within a cysteine - rich portion of hap2 that could potentially correspond to the fusion loop . we demonstrate by mutational analysis and fusion - blocking antibodies targeting this segment that it has elements that are essential for hap2 function .
finally , we show that the recombinant hap2 ectodomain is monomeric but inserts into liposomes by concomitantly forming trimers , the x - ray structure of which revealed a class ii fusion protein fold in the typical trimeric post - fusion conformation . hap2 has 16 conserved cysteine residues with a signature distribution in the ectodomain ( figure 1b ) . early alignments of hap2 family members identified a characteristic 50 aa domain ( residues 352 - 399 in chlamydomonas hap2 ) with several conserved residues , which was designated the hap2/gcs1 pfam domain ( pf10699 ) ( see http://pfam.xfam.org/family/pf10699 ) ( finn et al . , 2016 ) . a previous mutagenesis analysis in chlamydomonas failed to identify functional properties in the pf10699 domain , as the mutant proteins tested either were not transported to the mating structure or were nearly indistinguishable from wild - type in their ability to support fusion with plus gametes ( liu et al . , 2015 ) . database searches for additional conserved regions using the hhpred protein homology detection server ( söding et al . , 2005 ) indicated that a cysteine - rich region in the n - terminal half of the ectodomain exhibited weak similarity to class ii fusion proteins . in particular , hhpred identified a polypeptide segment in c. reinhardtii hap2 ( aa 170 - 204 , sqvwddtfgsskertranldcdfwsdpldiligrk ) that fell in the fusion loop region of the flavivirus envelope protein e in the resultant amino - acid - sequence alignment ( figure s1 ) . analysis of hap2 orthologs showed that the sequence in this region is highly variable , with a number of deletions and insertions , and is framed at each side by relatively conserved segments : amino acids ( aa ) 159 - 167 upstream ( including conserved cysteines 5 - 7 ) and aa 208 - 219 downstream ( including conserved cysteine 9 ) ( figures 1b and 1c ) . only amino acids r185 and c190 ( in bold in the sequence above ) within the identified segment are conserved , suggesting that they may play a role in hap2 function . to investigate the functional importance of this segment , we transformed chlamydomonas wild - type ( wt ) or mutant hap2 transgenes carrying an influenza virus hemagglutinin tag ( ha ) into a fusion - defective hap2 mutant strain ( liu et al . , 2008 ) and assessed hap2-ha expression and trafficking to the mating structure as well as fusion of the transformed hap2 minus gametes with wt plus gametes . hap2-ha was detected in hap2 minus gametes transformed with wt hap2-ha as the expected doublet in sds - page / immunoblotting ( figure 1d ) , the upper form of which was present on the cell surface as assessed by its sensitivity to protease treatment of live gametes ( liu et al . , 2008 ) . all mutant proteins were expressed at levels similar to wild - type hap2-ha ( figure 1d ) , trafficked to the cell surface as assessed by their sensitivity to trypsin treatment of live gametes ( examples shown in figure 1e ) , and localized at the mating structure ( example shown in figure 1f ) . thus , any defects in gamete fusion could be ascribed directly to the functional properties of the mutant hap2 proteins . hap2 with a deletion of residues 184 - 186 ( tra ; the hap2-Δtra - ha mutant ) , which includes the conserved r185 , was non - functional and failed to rescue fusion in the hap2 mutant when mixed with wild - type plus gametes ( figure 1d ) . a mutant hap2 with a lysine substituted for the conserved r185 ( the hap2-r185k - ha mutant ) was fully functional , whereas the hap2-r185a - ha and hap2-r185q - ha mutants failed to rescue fusion ( figure 1d ) .
a reverse - order hap2-ra185 - 186 mutant ( hap2-r185a - a186r - ha ) was also non - functional , indicating that a positively charged residue at position 185 is essential for the fusion activity . finally , hap2 minus gametes expressing hap2-f192a - w193a - ha were impaired in fusion , although fusion was not abolished by these mutations , indicating that these nearby aromatic residues also play a role in hap2 fusion function ( figure 1d ) . thus , the segment hap2 170 - 204 , bounded by a pair of conserved cysteines , contains residues that are dispensable for protein expression , folding , and localization but are essential for the membrane fusion activity . in an independent approach to examine the function of the hhpred - identified region , we generated a rabbit antibody against a synthetic peptide , hap2 168 - 190 , spanning the functionally important r185 residue . the affinity - purified antibody ( α-hap2 168 - 190 ) immunoprecipitated epitope - tagged hap2-ha from lysates of hap2-ha minus gametes ( figure 2a ) , confirming its reactivity with hap2 . to test whether α-hap2 168 - 190 interfered with gamete fusion , we incubated minus gametes with undiluted antibody , mixed them with plus gametes , and determined the percentage of gametes that had fused to form zygotes . pre - incubation of minus gametes with α-hap2 168 - 190 had no effect on motility or adhesion but inhibited gamete fusion by over 75% , whereas pre - incubation with a control igg had no effect on fusion ( figures 2b and 2c ) . antibody dilution resulted in a loss of fusion - blocking activity , suggesting a low concentration of hap2-specific antibodies in the polyclonal mixture , probably due to a low immunogenicity of the synthetic peptide . pre - incubation of plus gametes with the antibody did not affect their ability to fuse , and the fusion - blocking activity of α-hap2 168 - 190 was neutralized by pre - incubation with the hap2 168 - 190 peptide but not with a control peptide ( figure 2c ) , further documenting the specificity of the antibody . finally , immunolocalization experiments using an anti - ha antibody showed that hap2-ha on α-hap2 168 - 190-treated minus gametes remained at the mating structure ( figure 2d ) , indicating that α-hap2 168 - 190 did not alter hap2 localization but directly interfered with its function . these functional studies with chlamydomonas mutant gametes and the anti - peptide antibody indicated that , in its native conformation on live gametes , the 168 - 190 segment of hap2 is accessible at the protein surface and that its integrity and availability are essential for fusion of gametes . we used a drosophila expression system to produce a soluble form of c. reinhardtii hap2 ( aa 23 - 592 , comprising almost the entire ectodomain ; figure 1b ) and purified it to homogeneity ( see star methods ; figure s2 ) . analysis by size - exclusion chromatography ( sec ) and multi - angle static laser light - scattering ( malls ; figure s2 ) showed that the protein behaved as a monomer in solution ( fraction labeled hap2e ) but had a tendency to oligomerize with time ( especially under high ionic strength conditions ) , eluting at a volume corresponding roughly to hexamers ( hap2eh fraction in figure s2 ) . the purified protein from the monomeric hap2e fraction efficiently neutralized the fusion - inhibition potential of the α-hap2 168 - 190 antibody ( figure 2c ) , indicating that at least part of the 168 - 190 segment is exposed in hap2e and accessible to the antibody .
to detect membrane insertion , we incubated the recombinant protein with liposomes of a standard lipid composition ( see star methods ) and monitored binding by co - flotation on a sucrose gradient ( figure 3e ) followed by immunoblot detection of lipid - inserted hap2e using a monoclonal antibody raised against hap2e ( mab k3 ; see figure s2 ; star methods ) . we observed efficient co - flotation of the monomeric hap2e fraction ( figure 3e ) but not of the multimeric hap2eh fraction ( not shown ) . electron microscopy analysis showed that hap2e decorated the liposome surface as projecting rods about 12 nm long ( figures 3b - 3d ) , which are similar to those formed by viral class ii fusion proteins in their trimeric , post - fusion form , such as the alphavirus e1 protein ( gibbons et al . , 2003 ) . the size and shape of membrane - bound hap2e suggested that it had also oligomerized upon membrane insertion . of note , 3-fold symmetry was apparent in some top views of unbound proteins present in the background ( figure 3d , arrowheads ) . we confirmed that hap2e had indeed trimerized upon membrane insertion by detergent - solubilizing it from the liposomes and analyzing it by native page ( figure 3f ) and by sec ( figure s3 ) . these results indicated that hap2e behaves similarly to the class ii proteins of alphaviruses and flaviviruses , with membrane insertion concomitant with trimerization of a monomeric pre - fusion form ( klimjack et al . , 2002 ) , except that hap2e did not require an acidic environment for lipid binding and trimerization . this difference is in line with the ability of hap2 to induce gamete fusion in the extracellular environment , whereas alpha- and flaviviruses require the acidic environment of an endosome for fusion . because the co - flotation experiments demanded relatively high amounts of purified recombinant protein for detection , it was impractical to assess flotation inhibition by the undiluted polyclonal α-hap2 168 - 190 antibody , but we tested the effect of the mutations described in figure 1 by recombinantly expressing the mutant hap2 ectodomains . these experiments showed that the mutations that impaired gamete fusion also affected the co - flotation capacity of the mutant hap2e ( figure 3g ) . the r185k hap2e mutant co - floated with liposomes as efficiently as wild - type , whereas the mutants in which the charge at position 185 was removed ( r185a and r185q ) co - floated poorly ( figure 3g ) . co - flotation with the liposomes was also essentially abrogated with the hap2e-Δtra and hap2e - f192a - w193a mutants . the combination of the in vivo ( figures 1 and 2 ) and in vitro ( figure 3 ) analyses indicates that hap2 has the capacity to directly interact with membranes and that altering the conserved residues of the hhpred - identified segment affects this interaction . details of the crystallization and structure determination are described in the star methods , and the crystallographic statistics are listed in table s1 . we determined the x - ray structure by the single isomorphous replacement with anomalous scattering ( siras ) method using a ptcl4 derivative . the experimental electron density map allowed the tracing of 442 amino acids out of the 569 in the hap2e expression construct ( aa 23 - 592 ; figure 1b ) , from amino acids 24 to 581 , with internal breaks at several disordered loops ( listed in the star methods ) . the atomic model of c.
reinhardtii hap2 revealed a trimer with unambiguous structural homology to class ii fusion proteins , featuring the three characteristic β-sheet - rich domains , termed i , ii , and iii , arranged in the hairpin conformation typical of class ii fusion proteins in their post - fusion form ( bressanelli et al . , 2004 , modis et al . , 2004 , willensky et al . , 2016 ) . the overall shape of the hap2e trimer is fully compatible with the projections from the liposomes observed in the electron micrographs , with the predicted membrane - interacting region at the tapered end of the rods ( figures 3c and 3d ) . the monomeric hap2e ( figure s2 ) used to grow the crystals evidently underwent the same conformational change to form trimers that was observed during insertion into liposomes ( figure 3 ) . a similar oligomeric rearrangement during crystallization was reported previously for the dengue virus e protein under acidic ph conditions ( klein et al . , 2013 , nayak et al . , 2009 ) . in the case of hap2 , acidification is not required ; the trigger for the rearrangement to the post - fusion form is not known . the comparison with class ii fusion protein trimers of known structure defines the membrane - facing side ( top in figure 4 ) and the membrane - distal end ( bottom side ) of the hap2 trimer , in agreement with the shape observed in the electron micrographs ( figures 3b - 3d ) . the dali server ( holm and park , 2000 ) yielded z scores ranging from 9 to 16 for up to 343 cα atoms of class ii fusion proteins of viral and cellular origin ( table 1 ) , in the same range as those obtained in previous comparisons among known class ii fusion proteins ( pérez - vargas et al . , 2014 ) . domain i in hap2 is a β-sandwich of about 200 aa with ten β-strands in two apposed sheets : the a0b0i0h0g0 sheet is buried and the j0c0d0e0f0 sheet is exposed in the trimer , and we refer to them as the inner and outer sheets ( figures 5a and 5d ) . a specific feature of the hap2 domain i is that the a0b0 β-hairpin is long and projects out of the inner sheet to augment the outer sheet of the adjacent subunit , where strand a0 runs parallel to j0 ( figure s5 ) . these additional inter - subunit main - chain β-interactions are likely to confer extra stability to the post - fusion trimer . at the membrane - distal , bottom face of the trimer , the outer sheet projects very long loops , which are disordered in the crystal ( dashed tubes in figures 4 , 5 , s4 , and s5 ) . the c0d0 loop projection displays six additional cysteine residues ( presumably forming three disulfide bonds ) in an insertion that is present essentially only in algal hap2 ( liu et al . , 2008 ) ( figure 5d ) ; the e0f0 loop projection is also an insertion , with four potential n - linked glycosylation sites . domain iii is made of about 130 aa and has an immunoglobulin - like fold , with seven β-strands in two sheets ( labeled abe and gfcc ; figures 5a , 5d , and s4 ) . although the β-sheets are longer than in other class ii fusion proteins ( figure 4 ) , giving domain iii a more elongated bean shape , it is in a similar location at the side of the trimer ( figure 4 ) as in other class ii fusion proteins of known structure . the trimer is built around a central core , which is composed of domains i and ii arranged parallel to each other and interacting along their length about the 3-fold molecular axis .
the latter central trimer interaction is postulated to exist during formation of an extended trimeric intermediate , in which the fusion loops are inserted in the target membrane ( the plus gamete here ) , while the c - terminal tm segment is anchored in the minus gamete mating structure , at the opposite end . the final collapse into the post - fusion hairpin conformation of each protomer in the trimer brings domain iii to the sides of this core , projecting the downstream stem and tm regions toward the fusion loop ( liao and kielian , 2005 ) , in a fusogenic rearrangement of the protein analogous to an umbrella folding inside out . in this final , post - fusion location , domain iii buries an area of about 2,100 Å2 of its surface , divided roughly equally between contacts with the same and with the adjacent subunit ( intra- and inter - subunit contacts ) . the observed contacts therefore can only form after the assembly of the central trimer core , in line with the proposed clamping role of domain iii and resulting in irreversible trimerization , as proposed for other class ii fusion proteins ( liao and kielian , 2005 , pérez - vargas et al . , 2014 ) . domain ii is the largest domain ( roughly 250 aa in total ) and is made by two distinct segments emanating from domain i : the d0e0 and h0i0 strand connections of the outer and inner sheets of domain i , respectively ( figure s4 ) . as in all class ii proteins , the domain i - proximal region of domain ii has a central β-sheet ( aefg ; figures 5 and s4 ) flanked by additional short helices . the distal tip of domain ii contains β-sheet bdc , with the strands running parallel to the molecular 3-fold axis at the distal end of the d0e0 segment . the connection between strands c and d at the tip of domain ii ( the cd loop ) was shown to be the fusion loop in the viral proteins ( reviewed in kielian and rey ) . the bdc sheet normally packs against the ij β-hairpin ( the distal end of the h0i0 segment ) , which in hap2 maps to the conserved hap2/gcs1 pfam domain ( pf10699 ) ( figure 5a ) . although in hap2 strands i and j are absent , we still refer to this region as the ij loop ( figures 5 and s4 ) . the cd loop in hap2 is 40 aa long and has an intervening short α-helix in the middle ( α0 ) , in contrast to the standard class ii fusion proteins from arthropod - borne viruses such as flaviviruses and alphaviruses ( in which the cd loop is 10 - 15 residues long ) . in this respect , hap2 resembles the rubella virus class ii fusion protein e1 , which has 50 aa between strands c and d , with a couple of short intervening helices as well as an additional strand , resulting in two separate fusion loops ( dubois et al . , 2013 ) . the presence of the α0 helix in hap2 also results in two loops ( loops 1 and 2 in figures 5b and 5c ) , which project outward . although disordered in the crystal , these two loops are in position to project non - polar residues into the target membrane . the hhpred alignment was indeed quite close and pointed correctly to the cd loop ( figure s1 ) . the hap2 168 - 190 peptide used for immunization spans loop 1 all the way to the end of helix α0 , and the fact that the resulting polyclonal antibody blocks fusion is in line with this region being exposed at this end of the molecule . the hap2 crystal structure is therefore compatible with the mutagenesis data in this region and with the effect of the α-hap2 168 - 190 antibody .
moreover , a peptide derived from the corresponding region of tetrahymena thermophila hap2 was found to display properties typical of a fusion loop ( see the related paper in current biology , pinello et al . , 2017 ) , in line with our findings with c. reinhardtii . the structure shows that the functionally important residue r185 is at the n terminus of the α0 helix and that its side chain points away from the two exposed loops and toward the core of the hap2 trimer . r185 makes a salt bridge and bidentate hydrogen bonds with the side chain of the strictly conserved e126 in strand b ( figures 5b and 5c ) . furthermore , the r185 side chain is at the core of a network of interactions stabilizing the main - chain conformation of the ij loop . this network involves main - chain atoms of f376 , g382 , and r385 , together with the strictly conserved q379 side chain , which are part of the pf10699 signature segment that allowed the identification of hap2 in widely disparate organisms . the structure now shows that the conserved pattern of disulfide bonds of this signature element is required for the ij loop to adopt a convoluted fold that acts as a framework underpinning the cd loop through the interaction with the conserved r185 , so that the two fusion loops project out at the membrane - interacting region . the latter region , in contrast , is variable and has multiple deletions and insertions in the various orthologs , some insertions being quite long ( figure 1c ) . it is possible that the differences in the membrane - interacting regions of hap2s across the broad spectrum of eukaryotic organisms reflect evolutionary adaptations required for fusion with different target gametes . we note that the fusion loop in viral class ii proteins is in general the most conserved segment of the protein within orthologs from a given virus genus , most likely because the same residues are also required for inter - subunit interactions in their pre - fusion form on infectious particles . but this comparison is not necessarily informative , since the analyzed hap2 proteins span eukaryotic taxa that are much more distantly related than are viruses within a given genus . the pre - fusion conformation of hap2 remains unknown , however , and it is possible that the non - polar residues of the two fusion loops are maintained unexposed until the time of fusion . the conserved interaction of the r185 side chain with that of the e126 residue at the tip of domain ii may allow exposure of the two fusion loops only after a conformational change . in this context , the fusion - inhibiting α-hap2 168 - 190 antibody would bind only after a conformational change that exposes the fusion loop , but this remains to be explored . the identical topological arrangement of the secondary , tertiary , and quaternary structure elements of hap2 and the viral class ii proteins argues for a common evolutionary origin rather than convergence . indeed , the probability of convergence to the observed complex fold from independent origins , resulting in two proteins displaying exactly the same topological arrangement throughout the entire ectodomain ( as shown in figure s4 ) , is extremely low and can be considered negligible . nature is parsimonious , and once a protein required for a complex function such as membrane fusion becomes available , the corresponding gene is used over and over again , most likely transferred via horizontal gene exchanges . this concept is supported by the observation that only three structural classes of viral fusion proteins have been observed so far , in spite of the enormous variety of known viruses .
as one example , the membrane fusion proteins of the herpesviruses , rhabdoviruses , and baculoviruses , which are otherwise totally unrelated viruses , were shown to be homologous by structural studies ( heldwein et al . , 2006 , kadlec et al . , 2008 , roche et al . , 2006 ) . these fusion proteins most certainly derived from a distant common ancestor ( class iii proteins ) ( backovic and jardetzky , 2011 ) , whose genes must have been acquired via horizontal exchanges . within eukaryotic organisms , only a few other cell - cell fusion proteins have been positively identified . although the myoblast protein myomaker , a seven - pass transmembrane protein that governs fusion of myoblasts to form myotubes ( millay et al . , 2013 , millay et al . , 2016 ) , has no obvious relation to viruses , the proteins involved in the two other cell - cell fusion events that have been characterized at the molecular level are virus related . cytotrophoblast fusion in mammals during placenta formation ( blaise et al . , 2003 , holm and park , 2000 ) and epidermal cell - cell fusion in nematodes to form syncytia ( mohler et al . , 2002 ) both use fusion proteins also found in viruses . in the first case , the class i fusion protein involved is clearly derived from an endogenous retroviral element ( denner , 2016 ) . it is also possible that a similar process involving retrotranscription may have taken place in the case of the class ii caenorhabditis elegans fusion protein eff-1 ( pérez - vargas et al . , 2014 ) , as retroviruses of nematodes have been found to have an envelope protein related to that of the phleboviruses ( frame et al . , 2001 , 2000 ) , which have class ii fusion proteins ( dessau and modis , 2013 ) . these observations suggest that retro - transcription , followed by integration into the genome , may have been an important pathway for gene exchanges between viruses and eukaryotic cells . hap2 is present in organisms responsible for several of the globe 's most devastating human diseases , including plasmodium , trypanosoma , and toxoplasma . many arthropods that are vectors of human diseases or are agricultural pests , such as insects and ticks , also possess hap2 . a strategy that used recombinant plasmodium hap2 fragments to induce transmission - blocking immunity in mice has been reported previously ( blagborough and sinden , 2009 , miura et al . , 2013 ) , but the expression systems were not sufficient for viable clinical developments . our results suggest that the use of a peptide spanning the hap2 fusion loops as immunogen might be sufficient to induce transmission - blocking immunity , similar to the antibodies we obtained here against chlamydomonas hap2 . in conclusion , our data now open the way to a full mechanistic characterization of gamete fusion induced by hap2 and raise new questions , including the identification of the trigger for the hap2 fusogenic conformational change , the structure of the pre - fusion form(s ) of hap2 , and its organization on the mating structure of the minus gamete membrane . evolution through hundreds of millions of years may have led the different taxa to develop alternative solutions , and the metastable pre - fusion form of hap2 may be organized differently in the multiple organisms in which it is present , as shown for viruses of different families . in contrast , the post - fusion conformation described here appears to be a universal feature of class ii fusion proteins .
key resources table ( reagent or resource | source | identifier ) :
antibodies
polyclonal rabbit α-hap2 168 - 190 peptide antibody | this paper | n / a
monoclonal mouse α-hap2e antibody k3 | this paper | n / a
chemicals , peptides , and recombinant proteins
hap2 168 - 190 peptide ( sssqvwddtfgsskertranldc ) | yenzym antibodies | n / a
hap2 control peptide ( ctqpprppwpprpppappps ) | yenzym antibodies | n / a
common lab reagents | n / a | n / a
deposited data
atomic coordinates and structure factors | this paper | pdb : 5mf1
experimental models : cell lines
drosophila melanogaster : cell line s2 | thermo - fisher | cat # r690 - 07
experimental models : organisms / strains
chlamydomonas reinhardtii : strain 21 gr : mt+ | chlamydomonas culture collection | cc-1690
chlamydomonas reinhardtii : strain 40d4 : hap2 mating type minus mutant | chlamydomonas culture collection | cc-4552
chlamydomonas reinhardtii : strain hap2-ha : hap2-ha plasmid - rescued 40d4 strain | chlamydomonas culture collection | cc-5295
recombinant dna
pmt / bip / twinstrep | krey et al . , 2010 | n / a
hap2e ( aa 23 - 592 ) twinstrep expression construct | this paper | n / a
software and algorithms
xds | kabsch , 1988 | http://xds.mpimf-heidelberg.mpg.de/
pointless | evans , 2006 | http://www.ccp4.ac.uk/
ccp4 | collaborative computational project , 1994 | http://www.ccp4.ac.uk/
autosharp | vonrhein et al . , 2010 | https://www.globalphasing.com
molprobity | chen et al . , 2010 | http://molprobity.biochem.duke.edu/
hhpred | söding et al . , 2005 | https://toolkit.tuebingen.mpg.de/hhpred
dali | holm and park , 2000 | http://ekhidna.biocenter.helsinki.fi/dali_server/start

requests for resources and reagents should be directed to félix a. rey ( felix.rey@pasteur.fr ) . the chlamydomonas strains 21 gr ( wild - type , mating type plus ; mt+ ; cc-1690 ) , 40d4 ( a hap2 mating type minus mutant ( liu et al . , 2015 ) ) , and hap2-ha ( the hap2-ha plasmid - rescued hap2 strain ( liu et al . , 2015 ) ) were used and are available from the chlamydomonas culture collection . vegetative growth of cells and induction of gametogenesis in gamete medium ( m - n ) were as described before ( liu et al . ) . gametes were activated with dibutyryl - camp ( db - camp ) by incubation with 15 mm db - camp and 0.15 mm papaverine for 0.5 hr in n - free medium ( ning et al . , 2013 ) . gamete fusion was assessed by determining the number of cells that had formed zygotes after being mixed with wild - type plus gametes for 30 min and was expressed as percent fusion using the following equation : percent fusion = ( 2 × number of zygotes ) / [ ( 2 × number of zygotes ) + ( number of unfused gametes ) ] × 100 ( liu et al . , 2015 ) ( a worked example is given after this paragraph ) . a polyclonal antibody against the hap2 peptide sssqvwddtfgsskertranldc ( aa 168 - 190 ) was made in rabbits by yenzym antibodies ( san francisco , ca ; α-hap2 168 - 190 ) , under oversight by their institutional review board . the antibody was purified on a peptide - conjugated affinity column prepared by the company . to assay for antibody inhibition of gamete fusion , gametes that had been activated with dibutyryl - camp for 30 min ( misamore et al . , 2003 ) were washed once with gamete medium , and the activated gametes ( 2 10 cells / ml ) were incubated with 110 µg / ml peptide antibody ( pre - dialyzed in gamete medium ) for 2 hr followed by mixing with gametes of the opposite mating type for the times indicated in the figure legends . the number of zygotes ( detected as cells with 4 rather than 2 cilia ) was determined by phase - contrast microscopy . at least 100 cells were counted each time , with at least two counts per sample in at least two separate experiments . a rabbit igg ( sigma ) was used as a control antibody .
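as a worked example of the percent - fusion formula above ( the counts are illustrative , not data from the paper ) , a mixture containing 30 zygotes and 40 unfused gametes gives

\[
\%\ \mathrm{fusion} \;=\; \frac{2\,N_{\mathrm{zygotes}}}{2\,N_{\mathrm{zygotes}} + N_{\mathrm{unfused}}} \times 100
\;=\; \frac{2 \times 30}{2 \times 30 + 40} \times 100 \;=\; 60\% .
\]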
to test the capacity of peptides or hap2e to neutralize the fusion - inhibiting properties of α-hap2 168 - 190 , the antibody ( 220 µg / ml ) , which had been dialyzed in gamete medium , was incubated overnight with the control peptide ctqpprppwpprpppappps ( 200 µg / ml ) ( this peptide is encoded by cre03.g176961 , whose transcripts are specific to minus gametes and upregulated during gamete activation ( ning et al . , 2013 ) ) , the hap2 168 - 190 peptide ( 200 µg / ml ) , or the hap2e protein ( 200 µg / ml ) and then added to activated gametes as above to determine percent gamete fusion . immunofluorescent staining of gametes was performed as described previously ( belzile et al . , 2013 ) . the gametes were fixed in ice - cold methanol ; the primary antibody was rat anti - ha ( roche ) and the secondary was alexa fluor 488 goat anti - rat ( invitrogen ) . images were captured in dic or fitc channels as described previously ( liu et al . , 2015 ) . for hap2-ha localization after pre - incubation in α-hap2 168 - 190 ( see below ) , db - camp - activated gametes were incubated with the antibody for 2 hr , washed three times with gamete medium , and stained with the ha antibody as above . for immunoprecipitation , gametes were disrupted by incubation at 4°c for 30 min in ripa buffer ( 20 mm tris , 150 mm nacl , 1% np-40 , 0.5% deoxycholate , 0.1% sds ) containing a proteinase inhibitor cocktail ( roche ) . the samples were centrifuged at 12,000 rpm for 30 min , the supernatants were subjected to immunoprecipitation with α-hap2 168 - 190 and protein a agarose beads using methods described previously ( liu et al . , 2010 ) , and the immunoprecipitated proteins were subjected to sds - page and immunoblotting ( liu et al . , 2015 ) . for trypsin treatment , 5x10 live gametes / ml were incubated in 0.05% freshly prepared trypsin for 20 min at room temperature , diluted 10-fold with n - free medium , centrifuged , and resuspended in fresh n - free medium containing 0.01% chicken egg white trypsin inhibitor . for immunoblotting , the remaining cells were washed twice more with n - free medium containing 0.01% chicken egg white trypsin inhibitor before analysis by sds - page and immunoblotting ( misamore et al . , 2003 ) . the Δtra184 - 186 , r185a , r185k , r185q , ra185 - 186ar , and fw192 - 193aa mutant forms of chlamydomonas hap2 ( gi:288563868 ) were generated using standard pcr methods . the pcr fragments were inserted into the hap2-ha plasmid at the bglii / nrui sites using the in - fusion dry - down pcr cloning kit ( clontech ) . plasmids were transformed into the 40d4 hap2 mutant using electroporation and selected on paromomycin plates ( liu et al . , 2015 ) . colonies were selected based on pcr identification of the transgene and confirmation of hap2-ha expression by immunoblotting ( liu et al . , 2008 ) . a codon - optimized synthetic cdna corresponding to a soluble c - terminally truncated version of the hap2 ectodomain ( hap2e ) comprising residues 23 - 592 from chlamydomonas reinhardtii was cloned into a modified drosophila s2 expression vector described previously , and transfection was performed as reported earlier ( krey et al . , 2010 ) . for large - scale production , cells were induced with 4 µm cdcl2 at a density of approximately 7x10 cells / ml for 8 days and pelleted , and the soluble ectodomain was purified from the supernatant by affinity chromatography using a streptactin superflow column followed by size - exclusion chromatography using a superdex 200 column in 10 mm bicine , ph 9.3 .
purified hap2e ectodomain was subjected to sec using a superdex 200 column ( ge healthcare ) equilibrated with the indicated buffers . online malls detection was performed with a dawn - heleos ii detector ( wyatt technology , santa barbara , ca , usa ) using a laser emitting at 690 nm . online differential refractive index measurement was performed with an optilab t - rex detector ( wyatt technology ) . data were analyzed , and weight - averaged molecular masses ( mw ) and mass distributions ( polydispersity ) for each sample were calculated using the astra software ( wyatt technology ) . balb / c mice were immunized subcutaneously with 10 μg of recombinant hap2e in complete freund 's adjuvant and boosted five times with the same antigen dose in incomplete freund 's adjuvant . mouse splenocytes were fused to p3u1 myeloma cells , and growing hybridomas were selected in an elisa test on plates coated with 1 μg / ml hap2e . monoclonal antibody k3 was purified using protein g hitrap columns ( ge healthcare ) according to the manufacturer 's instructions , followed by sec in pbs using a sdx200 column . dope ( 1,2-dioleoyl - sn - glycero-3-phosphoethanolamine ) , dopc ( 1,2-dioleoyl - sn - glycero-3-phosphocholine ) , cholesterol and sphingomyelin were purchased from avanti polar lipids . liposomes were freshly prepared by the freeze - thaw and extrusion method using molar ratios of 1/1/3/1 of dope / dopc / cholesterol / sphingomyelin . 0.7 μm purified hap2e was mixed with 8 mm liposomes and incubated for 1 h at 25c in 100 μl pbs . samples were then adjusted to a final concentration of 20% sucrose , overlayed with a 5% to 60% sucrose gradient ( in pbs ) and centrifuged overnight at 4c at 152,000 x g. fractions from the top , middle and bottom of the gradient were analyzed by immunoblotting using specific anti - hap2 monoclonal antibodies , and the bands were quantified using the genetools syngene software . the percentage of hap2e in each fraction was calculated as the ratio between hap2e in that fraction and total hap2e ( sum of hap2e in the top and bottom fractions ) . purified hap2e ( c. reinhardtii ) mixed with liposomes was spotted on glow - discharged carbon grids ( cf300 , ems , usa ) , negatively stained with 2% phosphotungstic acid ( pta ) ph 7.4 , analyzed with a tecnai g2 bio - twin electron microscope ( fei , usa ) and imaged with an eagle camera ( fei , usa ) . for cryo - electron microscopy , liposomes alone or liposomes mixed with purified hap2e were applied on a glow - discharged lacey carbon grid ( agar scientific , uk ) . samples were plunge - frozen in liquid ethane using an automated system ( leica emgp , austria ) and visualized on a tecnai f20 electron microscope operating at a voltage of 200 kv . image frames were recorded in low - dose mode on a falcon ii direct electron detector ( fei , usa ) . purified recombinant hap2e in pbs and hap2e from the top fraction of the sucrose gradient solubilized with 4% chaps were independently purified by size - exclusion chromatography using a superdex 200 increase column . elution fractions corresponding to the respective proteins were further analyzed on a 4% to 16% native gradient gel using the nativepage novex bis - tris gel system ( invitrogen ) , followed by silver staining . crystals of hap2e were obtained using in situ proteolysis as described before ( dong et al . , 2007 ) .
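the flotation readout described above reduces to a ratio of immunoblot band intensities . as a minimal sketch , assuming arbitrary - unit band quantifications from the gel - analysis software ( the numbers below are hypothetical ) , the per - fraction percentage can be computed as follows ; note that , as stated in the text , only the top and bottom fractions enter the total .

```python
def hap2e_fraction_percent(top_intensity, bottom_intensity):
    # percentage of hap2e floating with liposomes (top) versus remaining at the
    # bottom of the gradient; total = top + bottom, as described in the text
    total = top_intensity + bottom_intensity
    return {"top": 100.0 * top_intensity / total,
            "bottom": 100.0 * bottom_intensity / total}

# hypothetical band intensities (arbitrary units)
print(hap2e_fraction_percent(820.0, 310.0))
```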
briefly , subtilisin dissolved in 10 mm tris ph 8 , 30 mm nacl at 10 mg / ml was added to the protein solution ( 12 - 14 mg / ml in 10 mm bicine , ph 9.3 ) on ice immediately prior to crystallization trials at a 1/100 ( w : w ) ratio . crystals of hap2e were grown at 293 k using the hanging - drop vapor - diffusion method in drops containing 1 μl protein / protease solution mixed with 1 μl reservoir solution containing 100 mm hepes ph 7.5 , 2% 2-propanol , 100 mm sodium acetate , and 12% to 14% ( w / v ) peg 8000 . diffraction - quality rod - like crystals appeared after 1 week and were flash - frozen in mother liquor containing 30% ( v / v ) mpd . data collection was carried out at the swiss light source ( px i ) , the esrf ( id30a-3 ) , and the synchrotron soleil ( proxima1 ) . data were processed , scaled and reduced with xds ( kabsch , 1988 ) , pointless ( evans , 2006 ) and programs from the ccp4 suite ( collaborative computational project , 1994 ) . a single - wavelength anomalous dispersion ( sad ) dataset was collected from a single crystal of hap2e from c. reinhardtii soaked for 6 hr in 2 mm k2ptcl4 solution in cryo buffer . data were collected at the liii edge of platinum ( 1.072 å ) on a single crystal using a low - dose ( 0.5 mgy per 360° ) , high - redundancy ( 5 x 360° ) , fine - sliced collection strategy using five crystal orientations by means of a high - precision multi - axis prigo goniometer ( weinert et al . , 2014 ) . an initial set of experimental phases was obtained by the single isomorphous replacement method using autosharp ( vonrhein et al . , 2007 ) with the platinum derivative and a highly isomorphous native dataset . starting phases were improved by consecutive cycles of manual building and combination with phases derived from molecular replacement using the partial model as search model in phaser ( mr - sad ) ( mccoy et al . , 2007 ) . after building an initial poly - alanine model accounting for 50% of the cα atoms , these phases were further refined using the anomalous signal of a highly redundant sulfur - sad dataset collected at a wavelength of 2.06641 å on crystals of the native protein , following a collection strategy similar to that mentioned above ( weinert et al . , 2014 ) . model building was performed using coot ( emsley et al . , 2010 ) , and refinement was done using autobuster ( bricogne et al . , 2010 ) with repeated validation using molprobity ( chen et al . , 2010 ) . the final model includes amino acids 24 to 581 ( see linear diagram in figure 1b ) , with internal breaks at loops 69 - 97 , 152 - 156 , 167 - 182 , 194 - 205 , 238 - 283 and 330 - 345 , corresponding to disordered loops that are marked with a gray background in the c. reinhardtii sequence in figure 5d ( top sequence ) and as dashed tubes in the ribbon diagrams ( figures 4 and 5a - 5c ) . clear electron density was observed for one n - linked and one o - linked glycan chain ( attached to n497 and t577 in domain iii ) . data are presented as mean ± sd unless otherwise indicated in the figure legends , and experimental repeats are indicated in the figure legends . the accession number for the atomic coordinates and structure factors reported in this paper is pdb : 5mf1 . designed the mutant hap2 proteins and selected the peptide used to produce the fusion - blocking antibody . w.j.s . , y.l . , and w.l . performed the gamete fusion assays with hap2 mutants and antibodies and interpreted them with w.j.s . collected data and determined the hap2 x - ray structure , and g.b . helped devise a collection strategy for derivative x - ray data .
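the in situ proteolysis step above fixes only the protease : protein mass ratio ( 1/100 w : w ) , so the volume of subtilisin stock to add depends on the protein concentration and drop volume . the sketch below works through that arithmetic ; the 50 μl protein volume in the example is an assumption for illustration , not a value from the paper .

```python
def subtilisin_volume_ul(protein_mg_per_ml, protein_volume_ul,
                         protease_stock_mg_per_ml=10.0, w_w_ratio=1.0 / 100.0):
    # mass of protein in the drop (mg), then the protease mass needed at the
    # given w:w ratio, converted back to a volume of the protease stock (ul)
    protein_mass_mg = protein_mg_per_ml * protein_volume_ul / 1000.0
    protease_mass_mg = protein_mass_mg * w_w_ratio
    return 1000.0 * protease_mass_mg / protease_stock_mg_per_ml

# e.g., 50 ul of hap2e at 13 mg/ml with a 10 mg/ml subtilisin stock
print(round(subtilisin_volume_ul(13.0, 50.0), 2))  # -> 0.65 ul
```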
sexual reproduction is almost universal in eukaryotic life and involves the fusion of male and female haploid gametes into a diploid cell . the sperm - restricted single - pass transmembrane protein hap2 - gcs1 has been postulated to function in membrane merger . its presence in the major eukaryotic taxa ( animals , plants , and protists , including important human pathogens like plasmodium ) suggests that many eukaryotic organisms share a common gamete fusion mechanism . here , we report combined bioinformatic , biochemical , mutational , and x - ray crystallographic studies on the unicellular alga chlamydomonas reinhardtii hap2 that reveal homology to class ii viral membrane fusion proteins . we further show that targeting the segment corresponding to the fusion loop by mutagenesis or by antibodies blocks gamete fusion . these results demonstrate that hap2 is the gamete fusogen and suggest a mechanism of action akin to viral fusion , indicating a way to block plasmodium transmission and highlighting the impact of virus - cell genetic exchanges on the evolution of eukaryotic life .
Introduction Results and Discussion STARMethods Author Contributions
PMC3927176
the eyebrows consist of numerous short and thick hairs set obliquely , and fibers of the orbicularis oculi , the corrugator and the frontal part of the occipitofrontalis are inserted into the dermis of the eyebrows . the head of the eyebrow overlies the frontal sinus and the tail is usually in the region of the zygomaticofrontal suture . the eyebrow is an appendage of the hair - bearing scalp rather than an extension of facial tissue . the female eyebrow is more arched and rests slightly higher than the male eyebrow , which usually rests at the level of the superior orbital rim . male eyebrows are more irregular . the natural direction of the eyebrow hair is varied : an inferolateral hair direction is found in the upper and lateral parts , whereas it differs in the medial and lower parts . eyebrows play an important function in facial identification and may be at least as important as the eyes . human beings have a single eyebrow above each eye , but this article presents a case report of a child with double eyebrows on the left side . the present case report is about a 6-year - old girl who presented to the department of pediatric and preventive dentistry , guru nanak institute of dental science and research , kolkata , for a routine dental check - up . there was no relevant medical history , and build , gait and intelligence were normal . on facial examination , there was no gross asymmetry of the face or any abnormal swelling . dermatological examination revealed that the eyebrows on both sides were sparser on the medial side than on the lateral side . the eyebrows were free from any pathology , except that a double layer of eyebrow was present above the left eye ; the second layer lay just above the first layer [ figures 1 and 2 ] . family history revealed no systemic disease or any extra layer of eyebrow on either the maternal or the paternal side . no cosmetic / aesthetic or therapeutic treatment had been received by the child for the double eyebrow . routine laboratory tests ( complete blood count , liver function test , urine analysis , abdominal ultrasound ) were all normal . considering the clinical features , a diagnosis of double eyebrow was made . the eyebrow is a transverse elevation of hair , which starts medially just inferior to the orbital margin and ends laterally above the orbital margin . the eyebrows are formed by the transverse elevation of the superciliary ridge of the frontal bone . the superficial muscles of the head develop as mesodermal laminae which begin at the second branchial arch ; from the infraorbital lamina arise the orbicularis oculi and corrugator , and these laminae join above the eye and form the interdigitating muscular structure of the brow . at 8 - 10 weeks of fetal development , formation of primitive hair starts as a focal crowding of basal cell nuclei in the fetal epidermis . when the basal cell germ enlarges , it becomes asymmetric and extends obliquely downward as a solid column . contracting the orbital sections of the orbicularis oculi lowers the eyebrows , and contracting the corrugator supercilii muscle draws the eyebrows together medially . there are three types of hair found in the eyebrow : ( 1 ) fine vellus hair ; ( 2 ) slightly larger and lightly pigmented hair ; and ( 3 ) large terminal hair known as the supercilia . the fine hairs form an effective moisture barrier to keep sweat from running downward into the eye . the main function of the eyebrows is to protect the eyes and prevent salty sweat from flowing into them .
the position and curvature of the eyebrow allow it to shield the eyes from bright light , and it is an effective barrier to liquids running from the forehead into the eye . abundant sensory innervation is present in the large hairs of the eyebrow , which are very sensitive to tactile stimulation . the eyebrows also help depict the expression of an individual ; for example , depression of the medial portion of the eyebrow depicts anger or concern . eyebrow abnormalities have a close relation with genomic disorders . eyebrows play an important role in facial esthetics , sexual dimorphism , emotional expression and nonverbal communication , and recent research suggests that they play an important function in facial identification and may be at least as important as the eyes . their main physical function is to prevent salty sweat from flowing into the eyes ; hence , protection of the eyes is the main function of the eyebrows . eyebrow variations are found in various syndromes , such as chr1p36.33 microdeletion syndrome ; chr2q21 - 23 microdeletion ( mowat - wilson syndrome ) ; chr3q26.3 - q27 microdeletion , with sparse and broad - based eyebrows ; chr7p15.3 duplication , with extreme sparseness of the lateral portion of the eyebrows ; chr9q34.3 terminal deletion , with arched eyebrows and synophrys ; and chr10q22.3 - 23.2 duplication , with medial flaring eyebrows . 1p36.33 microdeletion has a close relationship with deep - set eyes and horizontal eyebrows . eyebrow abnormalities , together with syndromic learning disability and developmental delay , are useful diagnostic aids for chromosomal phenotype syndromes and have been suggested as a diagnostic sign of genomic disorders . according to these reports , genomic imbalances detected by array - based comparative genomic hybridization cause multisystemic developmental diseases in human beings , and most genomic disorders present with learning disability and developmental delay along with craniofacial , skeletal and behavioral changes . the study done by berkenstadt et al . observed partial duplication of the eyebrows with other anomalies in a 7-year - old boy . there was excess hair on the forehead and long eyelashes , as well as excessive wrinkling of the periorbital skin when the eyes were closed . he had bilateral syndactyly involving the second to the fourth fingers and the second and third toes . gross - kieselstein and har - even also observed the same disorder in a brother and sister of north african jewish descent . we could not detect any systemic disorder in our case on physical and laboratory investigations . clearly , new reports are still needed to enhance our knowledge about this rare entity .
eyebrows are essential for esthetic and functional purposes . various kinds of eyebrows are found in the human species . the protective function is one of the important functions of the eyebrows . a double eyebrow is a very rare condition in humans . this case report describes one such rare case of double eyebrow .
INTRODUCTION CASE REPORT DISCUSSION
PMC3377138
the classic symptom of spinal canal stenosis is pseudoclaudication or neurogenic claudication.16 the classification of spinal stenosis proposed by arnoldi et al . ( 1976 ) remains useful.7 patients typically complain of pain , paresthesia , weakness , or heaviness in the buttocks radiating into the lower extremities with walking or prolonged standing , relieved by forward bending and sitting . patients with severe spinal canal stenosis either do not improve with conservative measures or have frequent recurrent symptoms . the long term outcomes of decompressive surgery for relief of pain and disability are still unclear . this study evaluates the outcome of surgical management of secondary degenerative lumbar canal stenosis and analyzes the effect on different outcome variables using the joa score . this prospective study was conducted at our hospital between august 2002 and october 2010 after obtaining clearance from the institutional ethical committee . during this period , 46 patients with degenerative lumbar canal stenosis were deemed eligible for operative treatment based on the inclusion and exclusion criteria , of whom 14 patients did not consent ; thus , a total of 32 patients underwent surgical treatment . patients who had posture - related radicular pain with a claudication distance of less than 100 m and who could not carry out their routine daily activities were assessed with magnetic resonance imaging ( mri ) [ figure 1a and 1b ] . surgery was performed if the central canal diameter on mri was found to be less than or equal to 10 mm . spinal instability was assessed on flexion and extension lateral radiographs using posner 's criteria.8 patients with primary bony canal stenosis , traumatic lumbar canal stenosis , stenosis due to tumors and infection , and patients not medically fit for surgery due to comorbidities were excluded from the study . instrumented stabilization was done in all cases with preoperative instability and when laminectomy was done at more than one level . all procedures were performed by a senior orthopaedic surgeon . according to this protocol , laminectomy with decompression was done in 2 cases ; laminectomy and discectomy in 23 patients ; laminectomy and discectomy with instrumented stabilization in 5 cases ; and laminectomy and discectomy with posterior lumbar interbody fusion in 2 patients . pre and posttreatment assessment of the patients was done according to the joa evaluation system for low back pain . the joa score was determined by direct questioning to assess subjective symptoms , clinical signs , and restriction of activities of daily living . the recovery rate of the patients following treatment was calculated using the description of hirabayashi et al . recovery rate was classified using a four - grade scale : excellent , > 90% ; good , 75 - 89% ; fair , 50 - 74% ; and poor , below 49%.9 ( a ) preoperative ap and ( b ) lateral x - rays of a 52 - year - old male patient who presented with secondary degenerative lcs at l3 - 4 and l4 - 5 with retrolisthesis of l4 over l5 . his preoperative joa score was 5 . ( a ) preoperative t2 sagittal mri section of the same patient showing degenerative lcs at l3 - 4 and l4 - 5 with degenerated discs at l2 - 3 , l3 - 4 and l4 - 5 . discs at l3 - 4 and l4 - 5 were found to be soft and bulging intraoperatively and hence were removed along with posterior decompression , laminectomy and pedicle screw fixation from l2 to l5 . ( b ) preoperative t2 axial mri section showing a large herniated disc at l3 - 4 .
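the hirabayashi recovery rate is cited but not written out here ; the sketch below is a minimal illustration , assuming the commonly used formula ( postoperative score minus preoperative score ) / ( full score minus preoperative score ) x 100 with a full joa low back pain score of 29 , and interpreting the four - grade cut - offs as at least 75% for good and at least 50% for fair . the function names and these assumptions are illustrative , not taken verbatim from the paper .

```python
def recovery_rate(preop_joa, postop_joa, full_score=29):
    # assumed hirabayashi formula; full_score=29 is the assumed maximum
    # of the joa score for low back pain
    return 100.0 * (postop_joa - preop_joa) / (full_score - preop_joa)

def outcome_grade(rate_percent):
    # four-grade scale used in the study (boundary handling assumed)
    if rate_percent > 90:
        return "excellent"
    if rate_percent >= 75:
        return "good"
    if rate_percent >= 50:
        return "fair"
    return "poor"

# the patient shown in figure 1 improved from a joa score of 5 to 28
rate = recovery_rate(5, 28)
print(round(rate, 1), outcome_grade(rate))  # -> 95.8 excellent
```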
the patient attained a joa score of 28 at the 3 month followup , which was maintained till the last followup at 7 months . preoperative and postoperative joa scores at immediate , 3 month , 6 month , and 1 year followup were compared using wilcoxon 's test for nonparametric data . complete data of all the 32 patients along with their joa scores are presented [ table 1 ] . distribution of patients in all the variables of the joa scoring system was assessed before and after treatment [ table 2 ] . preoperatively , 71.87% of patients ( n=23 ) presented with continuous severe low backache , 25% ( n=8 ) with occasional severe low backache , and 3.13% ( n=1 ) with occasional mild low backache . three months postoperatively , 62.5% of patients ( n=20 ) had no back pain and 37.5% ( n=12 ) had occasional mild low back pain . most of the patients ( 87.5% , n=28 ) had presented to us with posture - related severe leg pain , but postoperatively 96.87% of patients ( n=31 ) had no leg pain . all patients had a preoperative claudication distance of less than 100 m , but 93.75% of patients ( n=30 ) had normal gait with a walking distance of more than 500 m and no claudication symptoms postoperatively . the most common level of involvement was l4 - l5 ( 81.82% of patients , n=27 ) followed by l5 - s1 ( 54.55% of patients , n=18 ) . 93.74% of patients ( n=30 ) had an abnormal straight leg raising test [ 46.87% of patients ( n=15 ) had straight leg raising positive below 30° and 46.87% ( n=15 ) between 30° and 70° ] , but postoperatively all patients had a normal straight leg raising test . sensations were diminished in the l4 dermatome in 3 patients , the l5 dermatome in 14 patients and the s1 dermatome in 8 patients . overall , 20 patients ( 62.5% ) had shown sensory disturbance preoperatively , but postoperatively 19 of these 20 patients recovered normal sensory function . motor weakness was present in 15 patients ( 46.87% ) preoperatively , but postoperatively only 6 patients ( 18.75% ) showed motor deficit . overall , 93.75% of patients ( n=30 ) in our study showed improvement in all variables of the joa scoring system postoperatively . at 3 month followup , 18.75% ( n=6 ) of patients showed excellent outcome , 62.50% ( n=20 ) showed good outcome , and 18.75% ( n=6 ) showed fair outcome [ table 3 ] . at 6 month followup , 38.46% of patients ( n=10 ) showed excellent outcome and 61.54% ( n=16 ) showed good outcome . at 1 year followup , 64.00% of patients ( n=16 ) showed excellent outcome and 36.00% ( n=9 ) showed good outcome . at final followup , 64% of patients ( n=16 ) showed excellent outcome , 28% ( n=7 ) showed good outcome , and 8% ( n=2 ) showed fair outcome . outcome of the patients improved as the time after surgery increased till 1 year and was sustained thereafter till the last followup ( only two patients showed a decrease in their recovery rate , which was due to prolapsed disc or canal stenosis at a different level ) . statistically significant improvement was seen in all variables except running and lifting heavy weight [ table 2 ] .
on comparison of preoperative and three months postoperative joa scores of the 32 surgically managed patients using wilcoxon 's test for nonparametric data , the p value was < 0.001 , indicating that the postoperative outcomes were highly significant . further , joa scores continued to improve significantly up to 1 year postoperatively ( p<0.05 ) . after 1 year , the joa scores did not change significantly with time till the last followup . in our study , 40.63% ( n=13 ) of patients were in the age group 50 - 59 years , followed by the 40 - 49 year age group , and the average age was 45.1 years ; a similar age and sex distribution has been reported by others.10 64% of patients showed excellent and 28% showed good outcome at the end of 1 year followup , while ganz et al . ( 1990 ) reported an almost similar result , with 86% good outcome in their series of 33 patients treated by decompressive surgery . in their patients whose preoperative symptoms were relieved by postural changes , the success rate was 96% compared with only 50% in those unchanged by postural changes.11 weinstein et al . ( 2010 ) showed that patients with degenerative spondylolisthesis and spinal stenosis treated surgically showed substantially greater improvement in pain and function during a period of 2 years than those treated nonsurgically.12 in our study , finally , 62.5% of patients had no back pain and 37.5% had occasional mild pain , 96.87% had no leg pain , 93.75% had normal gait , 100% had normal straight leg raising , and 95% had sensory improvement ( i.e. 19 of the 20 patients who presented with sensory impairment ) . these results are comparable with a series reported in 1991 , with average leg pain improvement of 82% and average back pain improvement of 71%.13 weinstein et al . ( 2010 ) , in their prospective multicentre sport study of 654 patients , concluded that patients with symptomatic spinal stenosis treated surgically , compared with those treated nonoperatively , maintain substantially greater improvement in pain and function through 4 years . all patients in their study were surgical candidates with a history of at least 12 weeks of neurogenic claudication or radicular leg symptoms and spinal stenosis without spondylolisthesis ( as confirmed on imaging ) . they were enrolled in either a randomized cohort ( 289 patients ) or an observational cohort ( 365 patients ) at 13 u.s . spine clinics and were treated by either standard decompressive laminectomy ( 414 patients ) or usual nonsurgical care ( 240 patients).14 no patient in our series had a poor result . this could be due to the fact that all patients underwent at least a 12 week trial of adequate conservative treatment and were operated on only after clinicoradiological correlation of their symptoms with imaging was confirmed . decompressive laminectomy was also adequately supplemented with pedicle screw fixation ( in cases of preoperative instability or when more than one level of laminectomy was performed ) and/or posterior lumbar interbody fusion ( in cases of degenerative listhesis ) and/or discectomy ( in cases with a soft bulging disc ) . the failure of surgery to completely relieve pain in the two patients with a fair outcome may be attributed to widespread degeneration . outcome was also affected by some variables in the scoring system , such as running and heavy weight lifting , in which female patients and patients over 50 years scored less despite being free of pain .
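the paired , nonparametric comparison of preoperative and postoperative joa scores described above can be reproduced with the wilcoxon signed - rank test , for example in python with scipy ; the score lists below are made - up placeholders , not the study data .

```python
from scipy.stats import wilcoxon

# hypothetical paired joa scores (preoperative vs. 3 months postoperative)
preop_joa = [5, 9, 11, 8, 13, 7, 10, 12, 6, 9]
postop_joa = [28, 25, 27, 24, 29, 26, 27, 28, 25, 26]

stat, p_value = wilcoxon(preop_joa, postop_joa)
print(f"wilcoxon statistic = {stat}, p = {p_value:.4f}")
```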
getty ( 1980 ) personally reviewed 31 patients ( age range 18 to 75 years ) who had been treated surgically for lumbar spinal stenosis between 1968 and 1978 and followed them for an average of 42 months . in 28 ( 90% ) patients , degenerative changes in the lumbar spine had been the principal etiological factor ; the other 3 had idiopathic developmental lumbar spinal stenosis . in 17 ( 55% ) patients , the result was classified as good , although a total of 26 ( 84% ) patients were satisfied . this compares well with our study , in which 93.75% of patients ( n=30 ) were satisfied with their surgery . in getty 's series , good results of operation for lumbar spinal stenosis were characterized by rapid resolution of pain in the leg . the most important reason for failure to relieve symptoms in his series was stated to be inadequate decompression.15 postacchini et al . ( 1992 ) noted bone regrowth in 88% of 40 patients who had laminectomy or laminotomy for spinal stenosis at an average of 8.6 years of followup . bone regrowth was noted in all patients with associated spondylolisthesis.16 we did not observe any case of bone regrowth in our series . a possible explanation could be that , in our series , interbody fusion was done in all cases of listhesis and pedicle screw fixation was used to supplement any case with preoperative instability or multilevel decompression . moreover , we performed a wide laminectomy with medial facetectomy in all our patients as compared with the narrow laminotomy in some cases of postacchini et al . postacchini ( 1999 ) described that 70 - 80% of patients with lumbar canal stenosis had a satisfactory result from surgery , but the outcome tended to deteriorate in the long term.17 our outcomes improved postoperatively till 1 year but thereafter showed neither improvement nor deterioration till the last followup . the authors are of the opinion that operative treatment in patients with degenerative lumbar canal stenosis yields excellent long term functional results , as observed on the basis of the joa scoring system , provided that patients are properly selected and decompressive surgery is performed while simultaneously addressing the associated instability or listhesis . all activities of daily living which were assessed using the joa score showed significant improvement except for running and lifting heavy weight .
background : the long term outcomes of decompressive surgery on relief of pain and disability in degenerative lumbar canal stenosis are unclear . the aim of our study was to evaluate the outcome of surgical management of secondary degenerative lumbar canal stenosis and to analyze the effect on outcome variables using the japanese orthopaedic association ( joa ) score . materials and methods : thirty - two patients with degenerative lumbar canal stenosis managed surgically were included in this study . laminectomy ( n=2 ) , laminectomy with discectomy ( n=23 ) , laminectomy and discectomy with instrumented stabilization ( n=5 ) , and laminectomy and discectomy with posterior interbody fusion ( n=2 ) were performed . the joa scoring system for low backache was used to assess the patients . the recovery rate was calculated as described by hirabayashi et al . ( 1981 ) . surgical outcome was assessed based on the recovery rate and was classified using a four - grade scale : excellent , improvement of > 90% ; good , 75 - 89% improvement ; fair , 50 - 74% improvement ; and poor , below 49% improvement . the patients were evaluated at 3 months , one year and at the last followup . results : at 3 month followup , 18.75% of patients showed excellent outcome , 62.50% showed good outcome , and 18.75% showed fair outcome . at 1 year followup , 64% of patients showed excellent outcome and 36% showed good outcome . at > 1 year followup ( average 34.2 months , range : 2 - 110 months ) , 64% of patients showed excellent outcome , 28% showed good outcome , and 8% showed fair outcome . no patient had a poor outcome . outcome of the patients improved as the time after surgery increased till 1 year and was sustained thereafter till the last followup . conclusion : operative treatment in patients with degenerative lumbar canal stenosis yields excellent results as observed on the basis of the joa scoring system . no patient had recurrence of symptoms of nerve compression .
I M Statistical analysis R D