Q: optimize tween() transition in d3.js I have a set of items: [ { label: 'item1', color: 'yellow' }, { label: 'item2', color: 'red' }, { label: 'item3', color: 'blue' }, { label: 'item4', color: 'green' } ] There's a main transition which moves all items from left to right. Inside this main transition, I want to apply individual transitions to each item (this transition goes up and down). This last transition is a bit special because each item is composed of a text and a circle, and these two parts move independently of each other. Here's a jsfiddle showing you an example: http://jsbin.com/juyoyuzuta/1/edit?js,output (sorry if the example is a bit ugly, but you can get the idea) When I profile this code in my browser, it seems like the browser is doing a lot of painting/rendering. I'm wondering if there are any optimizations I can make to do less painting/rendering, and more generally whether there's a better way to do it. A: I think you're getting into micro-optimization land after this point. You're doing it the way I would tackle the problem; here are some things you could try: You could attempt to modify the position of the circles/labels directly rather than applying a transform. My gut feeling is that if you're getting any hardware acceleration, you're probably going to lose it doing this (CSS transforms are a pretty standard thing; modifying the cx of a circle is not). You could reduce the amount of movement; those circles look like they're just buzzing with energy, so maybe slow the thing down and move the circles up/down a bit less frequently. Aim for a smoother transition and I don't think you'd lose anything. But in essence, you're doing a very paint-heavy operation and not a lot else, so it's fully expected that profiling the code will point to a lot of rendering and DOM manipulation. You're basically doing this over a fairly tight loop.
{ "pile_set_name": "StackExchange" }
Q: How do I respond to Mod question? I flagged an answer as abusive because of the derogatory language the author used to reference another user (the answer has since been edited but you can see the original text in the edit history). I was using this post as my guide for what constitutes 'abuse'. The flag reviewer responded with a question to me: I don't want the reviewer to think I'm an idiot and I want them to know why I flagged it as I did (I don't want to argue that it was declined, that's within their right). How do I respond and answer the question? A: Basically flags are for things where the normal tools cannot fix the issue. In this case, an edit handled it which is why the flag was declined. Keep in mind, when we review flags we see the current post and not the original. The behavior was both rude and offensive so I've suspended the account in question temporarily.
{ "pile_set_name": "StackExchange" }
Correlation of clinical and pathologic evaluation of scarring alopecia. Differentiating scarring from nonscarring alopecia poses a diagnostic dilemma for clinicians, with histopathology used to distinguish between the two. The extent to which dermatologists are able to classify alopecia clinically has not been evaluated. A retrospective study of pathology reports on 458 patients was used to calculate a kappa coefficient correlating the clinical presence of scarring or nonscarring alopecia with its histopathologic presence. A multivariate analysis was performed to assess for associations with scarring. The kappa correlation coefficient was 0.59 (P < 0.0001), indicating moderate agreement, and varied by race and sex. The odds of making the clinical diagnosis of scarring alopecia were 15 times higher (OR 14.64, 95% CI 8.64-24.18; P < 0.001), and this increased with age. These results suggest that clinical exam is moderately reliable in distinguishing between scarring and nonscarring alopecia. Our results highlight the need for education and diagnostic schemata for evaluation of alopecia based on gender and in skin of color.
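A kappa coefficient like the one reported above measures agreement between two raters (here, clinical call vs. histopathologic call) beyond chance. A minimal sketch of the computation, using made-up 2x2 counts chosen only so they sum to 458 patients (they are not the study's data, and do not reproduce the reported 0.59):

```python
def cohens_kappa(table):
    """Cohen's kappa from a square agreement table.

    table[i][j] = number of cases rated category i by one method
    and category j by the other (here: clinical vs. pathology).
    """
    n = sum(sum(row) for row in table)
    k = len(table)
    # Observed agreement: fraction of cases on the diagonal.
    p_o = sum(table[i][i] for i in range(k)) / n
    # Chance agreement: product of the marginal proportions, summed.
    p_e = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (p_o - p_e) / (1 - p_e)

# Hypothetical counts: rows = clinical diagnosis, cols = pathology
# diagnosis, order [scarring, nonscarring].
counts = [[120, 40], [35, 263]]
print(round(cohens_kappa(counts), 2))
```

A kappa near 0.6, as here, falls in the conventional "moderate agreement" band, which is how the abstract characterizes its 0.59.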
{ "pile_set_name": "PubMed Abstracts" }
Radiation Dosimetry for Ureteroscopy Patients: A Phantom Study Comparing the Standard and Obese Patient Models. To determine the effect of obesity on radiation exposure during simulated ureteroscopy. A validated anthropomorphic adult male phantom with a body mass index (BMI) of approximately 24 kg/m(2) was positioned to simulate ureteroscopy. Padding with the radiographic characteristics of human fat was placed around the phantom to create an obese model with a BMI of 30 kg/m(2). Metal oxide semiconductor field effect transistor (MOSFET) dosimeters were placed at 20 organ locations in both models to measure organ doses. A portable C-arm was used to provide fluoroscopic x-ray radiation to simulate ureteroscopy. Organ dose rates were calculated by dividing organ dose by fluoroscopy time. Effective dose rate (EDR, mSv/sec) was calculated as the sum of organ dose rates multiplied by the corresponding ICRP 103 tissue weighting factors. The mean EDR was significantly increased during left ureteroscopy in the obese model at 0.0092 ± 0.0004 mSv/sec compared with 0.0041 ± 0.0003 mSv/sec in the nonobese model (P < 0.01), as well as during right ureteroscopy at 0.0061 ± 0.0002 and 0.0036 ± 0.0007 mSv/sec in the obese and nonobese models, respectively (P < 0.01). EDR during left ureteroscopy was significantly greater than during right ureteroscopy in the obese model (P = 0.02). Fluoroscopy during ureteroscopy contributes to the overall radiation dose for patients being treated for nephrolithiasis. Obese patients are at even higher risk because of increased exposure rates during fluoroscopy. Every effort should be made to minimize the amount of fluoroscopy used during ureteroscopy, especially with obese patients.
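The effective-dose-rate calculation described above is a weighted sum: each organ's dose rate multiplied by its ICRP 103 tissue weighting factor, summed over organs. A minimal sketch; the weighting factors below are the published ICRP 103 values for those four organs, but the dose rates are invented numbers, not the phantom measurements:

```python
def effective_dose_rate(organ_dose_rates, weighting_factors):
    """Effective dose rate (mSv/sec) as the weighted sum of organ dose rates.

    organ_dose_rates: {organ: dose rate in mSv/sec}
    weighting_factors: {organ: ICRP 103 tissue weighting factor w_T}
    """
    return sum(rate * weighting_factors[organ]
               for organ, rate in organ_dose_rates.items())

# ICRP 103 tissue weighting factors for a few organs.
w_t = {"gonads": 0.08, "colon": 0.12, "stomach": 0.12, "bladder": 0.04}

# Hypothetical organ dose rates (mSv/sec), for illustration only.
dose_rates = {"gonads": 0.010, "colon": 0.020, "stomach": 0.015, "bladder": 0.025}

print(f"EDR = {effective_dose_rate(dose_rates, w_t):.4f} mSv/sec")
```

A full calculation would cover all tissues in the ICRP 103 table (weights summing to 1), which is presumably what the 20 dosimeter locations in the study approximate.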
{ "pile_set_name": "PubMed Abstracts" }
(SportsNetwork.com) - Bruce Arians was a candidate for the vacant Philadelphia Eagles job, but the powers that be decided Chip Kelly, with no professional experience, was better equipped. It didn't take long for Arians to find a job, however, and he will lead the Arizona Cardinals into Philadelphia on Sunday to battle the Eagles. Arians, who gained notoriety by filling in for an ill Chuck Pagano with the Indianapolis Colts last season, has guided the Cardinals to a surprising 7-4 start and has them on the brink of ending a three-year postseason layoff. The Cardinals are currently in the hunt for a wild card spot since it appears the Seattle Seahawks will take home the NFC West title. The Eagles are also flirting with the idea of making a playoff run in Kelly's first season at the helm and are even with the Dallas Cowboys for first place in the NFC East. The Cowboys own the tiebreaker after beating the Eagles a few weeks back at Lincoln Financial Field, where the Cardinals hope to find similar success. Arizona has won four in a row, including three straight after the bye, and hammered the Colts, 40-11, on Sunday in the desert. A 26-yard touchdown reception by Larry Fitzgerald and a 22-yard interception return for a score by Karlos Dansby highlighted a 20-point second quarter for the Cardinals, who racked up 410 yards of offense and hope to gain some more attention in the competitive NFC. "I think this was a respect game," said Cardinals quarterback Carson Palmer, who threw for 314 yards with a pair of TD passes to Fitzgerald. "I don't think we are well respected throughout the league, and that's not anybody's fault but our own. But I think we are better than people think." The Cardinals deserve a lot of credit with wins over Detroit, Carolina and Indianapolis. Still, veteran defensive lineman Darnell Dockett feels Arizona is slighted each week. "You know what's funny?" Dockett said. "Whenever the Cardinals win it's always what the other team didn't do. 
It's never what we forced other teams to do. We understand that. Everyone says, 'They didn't beat nobody, they didn't beat nobody.'" Well, the Colts aren't nobodies. They, too, are in the mix for a playoff spot and just got waxed by their former assistant coach in Arians, whose squad last won four in a row to start the 2012 season and hasn't prevailed in five straight games since the St. Louis Cardinals won six in a row during the 1977 campaign under head coach Don Coryell. The Cardinals hope to keep the momentum going and keep pressing for a chance at reaching the playoffs. "We are in the hunt," Arizona cornerback Jerraud Powers said. "That's all you can ask for. It seems like everyone is believing in one another." Meanwhile, the Eagles are feeling just as confident with three straight wins and last won four in a row to close out the 2011 campaign at 8-8. Not long ago, Eagles quarterback Nick Foles was still slinging the ball for the University of Arizona, where he completed 387-of-560 passes for 4,334 yards and 28 touchdowns for the Wildcats. The second-year pro, who was recently crowned the starter, is now lighting up opposing defenses at a rapid rate and is building confidence in himself, with the coaches and with his teammates. "He has the utmost confidence in himself," said Eagles wide receiver Riley Cooper. "He is a great quarterback and he should. He is the general out there. We are all listening to him. He does a great job." Cooper has been the beneficiary of Foles' resurgence and is second on the Eagles with 592 yards on 31 catches. His seven touchdown receptions tie him with DeSean Jackson. Foles, though, is the reason for the success and has 1,554 yards with 16 TD passes and no interceptions for a 128.0 rating. His rating ranks first in the NFL, and he has six games this season with a rating of 100 or better. During the Eagles' winning streak, Foles' QB rating is 152.8. He has thrown 199 consecutive passes without an interception, and his 9.6 yards per attempt is first in the league. 
Foles has the Eagles flying high and helped them halt a 10-game home losing streak with last week's 24-16 win over the Washington Redskins. He threw for 298 yards and did not have a touchdown pass. He didn't need to because LeSean McCoy ran for 77 yards and a pair of touchdowns. "I feel like we've put ourselves in a good position," Foles said. "That's all you can ask for, especially heading into the bye week. Especially with winning our last game at home, (we're) moving in the right direction and just continuing to change the atmosphere here. The coaches here are doing a great job and the players are doing a great job of buying in, so I think that we have to keep going in that direction and keep leaning on each other, and we'll see what happens." The New York Giants did the Eagles no favors by losing to the Cowboys on Sunday, so it's a tightrope to walk in the division. Speaking of walking the line, Michael Vick said Foles deserves to remain the starter for how well he's played. Vick is no fool and believes in the idiom "if it's not broke, don't fix it." "In all honesty, in all fairness, how can you take a guy out of the game who's been playing so well?" Vick said. "I've been in this stage before, and I know what it's like. I understand the position that this team is in, and the one thing I never want to do is be a distraction or put our team or our coaches in a position where they feel like they're not doing the right thing or I feel like they're not doing the right thing." Philadelphia has been doing the right thing offensively and is averaging 33.3 points and 453 yards during the winning streak. Jackson, Cooper and McCoy have been Philly's big three in recent weeks, while Jackson is 15 yards shy of the third 1,000-yard season of his career. 
After three straight weeks of rushing for no more than 55 yards, McCoy had 155 yards on 25 carries at Green Bay, then ran for 77 yards on 20 carries versus the Redskins to surpass running backs coach Duce Staley for third on the team's all-time rushing list with 4,875 yards. McCoy also reached the 1,000-yard mark for the third time in his career and didn't know it until it was brought up to him. "I did know this though, I had 10 yards to pass Duce," McCoy said. "So I knew that was going to happen." The shifty back averages about half of those 10 yards per carry (4.7) for an Eagles squad looking for revenge versus the Cardinals after last season's 27-6 loss on Sept. 23 at University of Phoenix Stadium. Philadelphia's most recent victory in the series was a 48-20 home rout on Thanksgiving Night of 2008, and the Eagles have taken five of the last nine regular-season bouts since 2000 with their former division rivals. Including the 2008 NFC Championship, which the Cardinals won by a 32-25 count to advance to their only Super Bowl, Arizona has won three in a row against the Eagles. WHAT TO WATCH FOR The Eagles give up a league-worst 300.1 passing yards per game and are 31st in total yards allowed (417.9). So what can they expect from Fitzgerald, one of the top wide receivers in the game? Fitzgerald caught nine passes for 114 yards and a touchdown in the last meeting with the Eagles and has played them four times in his career, posting 26 catches for 418 yards and six TDs. No matter who is covering Fitzgerald, whether it's Cary Williams, Brandon Boykin or Bradley Fletcher, who is bothered by a pectoral muscle injury and is questionable Sunday, the Eagles will experience some trouble. Palmer gave some insight on what it's like for defenses to cover Fitzgerald, who eclipsed the 11,000-yard mark in his career. He is the youngest to reach that mark. "Any time you have Larry 1-on-1 down in the red zone, it's not a good matchup for the other team," Palmer said. 
Floyd can be an issue, too, so it's important for an improved Eagles defense to force Palmer into making mistakes. Palmer's known for erratic play at times, but the former Heisman Trophy winner can also shred defenses. Just ask the Colts, who failed to pick off a pass. Williams, though, is impressed with how well the unit is playing. "We're getting there and I also think we all understand that a lot of work is ahead to get to where we need to be," said Williams. "The play up front has improved, which impacts everyone. I think that confidence is the big reason. Everybody understands what the coaches are looking for and we've all worked hard studying that. You see the results. We all feel like it's improving." Speaking of play up front, give credit to big men Vinny Curry, Fletcher Cox and Cedric Thornton. They have combined for eight sacks, while Curry and linebacker Connor Barwin have four apiece. LB Trent Cole and Cox each have posted three sacks. The Cardinals have some muscle of their own on defense and sack master John Abraham has seven so far. Defensive end Calais Campbell has 5 1/2 sacks and Dockett owns 4 1/2, while veteran linebacker Karlos Dansby leads the Cardinals with 88 tackles. Dansby registered five stops against Indianapolis and an interception. Look for him to disrupt Philadelphia's run game by shooting the gaps. Colts quarterback Andrew Luck was rattled often in Sunday's game. "They created a bit of a hornet's nest," Luck said of the defensive pressure. Don't sleep on Arizona's secondary either with rookie Tyrann Mathieu, stud Patrick Peterson, Powers and Yeremiah Bell. They, too, will have their hands full with Jackson, Cooper and tight end Brent Celek. Celek hasn't been much of a factor lately, which is why he could be a pest Sunday. OVERALL ANALYSIS The Cardinals seem to give the Eagles fits lately and will do so again at the Linc. 
The Nick Foles show has to have some sort of commercial break and it will be good to get that out of the way now instead of later. A loss won't impact much in the NFC East because it's still wide open. Arizona is fighting with the Eagles and many other teams in a push toward the postseason and it will be up to Palmer and the defense to make that happen. The Eagles do have some nice weapons, but the Cardinals can counter that with their impressive defense. Cardinals defensive coordinator Todd Bowles performed the same role in a limited capacity for Philadelphia last season, so expect his familiarity with the opponent to come in handy.
{ "pile_set_name": "Pile-CC" }
Himashree Roy Himashree Roy is an Indian athlete who won a bronze medal in the women's 4×100 m relay along with Merlin K Joseph, Srabani Nanda and Dutee Chand at the 22nd Asian Athletics Championships, which concluded on July 9, 2017. She was born in Kolkata, West Bengal on 15 March 1995. Career She won the silver medal in the women's 4×100 m relay along with N. Shardha, Sonal Chawla and Priyanka at the National Open Athletics Championships 2018, where they represented the Indian Railways. Himashree Roy timed 11.60 seconds to set a record in the women's 100 metres on 5 August 2018 at the 68th State Athletics Championships at the Salt Lake Stadium while representing the Eastern Railway Sports Association (ERSA). She won the bronze medal in the women's 100 m final at the 84th All India Railway Athletics Championship, 2017. Himashree Roy, MG Padmini, Srabani Nanda and Gayathri Govindaraj won the bronze medal in the women's 4×100 m relay in the second leg of the 2015 Asian Grand Prix Games, held in Thailand. She also won the gold medal in the women's 4×100 metre relay with teammates Dutee Chand, Srabani Nanda and Merlin K Joseph while representing the Indian Railways at the 55th National Open Athletic Championship, 2015. References Category:1995 births Category:Sportswomen from West Bengal Category:Indian female sprinters Category:21st-century Indian women Category:Living people
{ "pile_set_name": "Wikipedia (en)" }
Club night Every Wednesday onwards: 7:30pm Prevention Of Obesity Club, then 8:00pm Back to School. Drag out your plimmos, school tie and shorts for a trip back in time to your school days, and dance until 11:00pm.
{ "pile_set_name": "Pile-CC" }
Half-Quantum Vortices in an Antiferromagnetic Spinor Bose-Einstein Condensate. We report on the observation of half-quantum vortices (HQVs) in the easy-plane polar phase of an antiferromagnetic spinor Bose-Einstein condensate. Using in situ magnetization-sensitive imaging, we observe that pairs of HQVs with opposite core magnetization are generated when singly charged quantum vortices are injected into the condensate. The dynamics of HQV pair formation is characterized by measuring the temporal evolutions of the pair separation distance and the core magnetization, which reveals the short-range nature of the repulsive interactions between the HQVs. We find that spin fluctuations arising from thermal population of transverse magnon excitations do not significantly affect the HQV pair formation dynamics. Our results demonstrate the instability of a singly charged vortex in the antiferromagnetic spinor condensate.
{ "pile_set_name": "PubMed Abstracts" }
# coding=utf-8
import typing

from pyramid.config import Configurator
import transaction

from tracim_backend.app_models.contents import FOLDER_TYPE
from tracim_backend.app_models.contents import content_type_list
from tracim_backend.config import CFG
from tracim_backend.exceptions import ContentFilenameAlreadyUsedInFolder
from tracim_backend.exceptions import EmptyLabelNotAllowed
from tracim_backend.extensions import hapic
from tracim_backend.lib.core.content import ContentApi
from tracim_backend.lib.utils.authorization import ContentTypeChecker
from tracim_backend.lib.utils.authorization import check_right
from tracim_backend.lib.utils.authorization import is_contributor
from tracim_backend.lib.utils.authorization import is_reader
from tracim_backend.lib.utils.request import TracimRequest
from tracim_backend.lib.utils.utils import generate_documentation_swagger_tag
from tracim_backend.models.context_models import ContentInContext
from tracim_backend.models.context_models import RevisionInContext
from tracim_backend.models.revision_protection import new_revision
from tracim_backend.views.controllers import Controller
from tracim_backend.views.core_api.schemas import FolderContentModifySchema
from tracim_backend.views.core_api.schemas import NoContentSchema
from tracim_backend.views.core_api.schemas import SetContentStatusSchema
from tracim_backend.views.core_api.schemas import TextBasedContentSchema
from tracim_backend.views.core_api.schemas import TextBasedRevisionSchema
from tracim_backend.views.core_api.schemas import WorkspaceAndContentIdPathSchema
from tracim_backend.views.swagger_generic_section import SWAGGER_TAG__CONTENT_ENDPOINTS

try:  # Python 3.5+
    from http import HTTPStatus
except ImportError:
    from http import client as HTTPStatus

SWAGGER_TAG__CONTENT_FOLDER_SECTION = "Folders"
SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS = generate_documentation_swagger_tag(
    SWAGGER_TAG__CONTENT_ENDPOINTS, SWAGGER_TAG__CONTENT_FOLDER_SECTION
)

is_folder_content = ContentTypeChecker([FOLDER_TYPE])


class FolderController(Controller):
    @hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
    @check_right(is_reader)
    @check_right(is_folder_content)
    @hapic.input_path(WorkspaceAndContentIdPathSchema())
    @hapic.output_body(TextBasedContentSchema())
    def get_folder(self, context, request: TracimRequest, hapic_data=None) -> ContentInContext:
        """
        Get folder info
        """
        app_config = request.registry.settings["CFG"]  # type: CFG
        api = ContentApi(
            show_archived=True,
            show_deleted=True,
            current_user=request.current_user,
            session=request.dbsession,
            config=app_config,
        )
        content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
        return api.get_content_in_context(content)

    @hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
    @hapic.handle_exception(EmptyLabelNotAllowed, HTTPStatus.BAD_REQUEST)
    @hapic.handle_exception(ContentFilenameAlreadyUsedInFolder, HTTPStatus.BAD_REQUEST)
    @check_right(is_contributor)
    @check_right(is_folder_content)
    @hapic.input_path(WorkspaceAndContentIdPathSchema())
    @hapic.input_body(FolderContentModifySchema())
    @hapic.output_body(TextBasedContentSchema())
    def update_folder(self, context, request: TracimRequest, hapic_data=None) -> ContentInContext:
        """
        update folder
        """
        app_config = request.registry.settings["CFG"]  # type: CFG
        api = ContentApi(
            show_archived=True,
            show_deleted=True,
            current_user=request.current_user,
            session=request.dbsession,
            config=app_config,
        )
        content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
        with new_revision(session=request.dbsession, tm=transaction.manager, content=content):
            api.update_container_content(
                item=content,
                new_label=hapic_data.body.label,
                new_content=hapic_data.body.raw_content,
                allowed_content_type_slug_list=hapic_data.body.sub_content_types,
            )
            api.save(content)
        api.execute_update_content_actions(content)
        return api.get_content_in_context(content)

    @hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
    @check_right(is_reader)
    @check_right(is_folder_content)
    @hapic.input_path(WorkspaceAndContentIdPathSchema())
    @hapic.output_body(TextBasedRevisionSchema(many=True))
    def get_folder_revisions(
        self, context, request: TracimRequest, hapic_data=None
    ) -> typing.List[RevisionInContext]:
        """
        get folder revisions
        """
        app_config = request.registry.settings["CFG"]  # type: CFG
        api = ContentApi(
            show_archived=True,
            show_deleted=True,
            current_user=request.current_user,
            session=request.dbsession,
            config=app_config,
        )
        content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
        revisions = content.revisions
        return [api.get_revision_in_context(revision) for revision in revisions]

    @hapic.with_api_doc(tags=[SWAGGER_TAG__CONTENT_FOLDER_ENDPOINTS])
    @check_right(is_contributor)
    @check_right(is_folder_content)
    @hapic.input_path(WorkspaceAndContentIdPathSchema())
    @hapic.input_body(SetContentStatusSchema())
    @hapic.output_body(NoContentSchema(), default_http_code=HTTPStatus.NO_CONTENT)
    def set_folder_status(self, context, request: TracimRequest, hapic_data=None) -> None:
        """
        set folder status
        """
        app_config = request.registry.settings["CFG"]  # type: CFG
        api = ContentApi(
            show_archived=True,
            show_deleted=True,
            current_user=request.current_user,
            session=request.dbsession,
            config=app_config,
        )
        content = api.get_one(hapic_data.path.content_id, content_type=content_type_list.Any_SLUG)
        with new_revision(session=request.dbsession, tm=transaction.manager, content=content):
            api.set_status(content, hapic_data.body.status)
            api.save(content)
        api.execute_update_content_actions(content)
        return

    def bind(self, configurator: Configurator) -> None:
        # Get folder
        configurator.add_route(
            "folder", "/workspaces/{workspace_id}/folders/{content_id}", request_method="GET"
        )
        configurator.add_view(self.get_folder, route_name="folder")
        # update folder
        configurator.add_route(
            "update_folder",
            "/workspaces/{workspace_id}/folders/{content_id}",
            request_method="PUT",
        )
        configurator.add_view(self.update_folder, route_name="update_folder")
        # get folder revisions
        configurator.add_route(
            "folder_revisions",
            "/workspaces/{workspace_id}/folders/{content_id}/revisions",
            request_method="GET",
        )
        configurator.add_view(self.get_folder_revisions, route_name="folder_revisions")
        # set folder status
        configurator.add_route(
            "set_folder_status",
            "/workspaces/{workspace_id}/folders/{content_id}/status",
            request_method="PUT",
        )
        configurator.add_view(self.set_folder_status, route_name="set_folder_status")
{ "pile_set_name": "Github" }
Flow straightener
Combined processing of PBT and LSR
The combination of different materials to create multi-functional components in a single production step is one of the domains of plastics processing. Flow straighteners in shower heads are a good example: thanks to the elastic LSR nozzles integrated in the solid PBT main body, scale can be removed with ease. Flow straighteners for shower heads are produced, inspected and packaged fully automatically in a production cell built around an electric two-component injection moulding machine from the ALLDRIVE series. This enables efficient high-volume production of this complex hard/soft combination.
Basic specifications
Technology
Machine: Flow straighteners for shower heads are produced from PBT and LSR on an electric two-component ALLROUNDER 570 A, which is equipped with two size 170 injection units. Two pre-moulded parts and two finished components are produced in a cycle time of 40 s at a consistently high part quality.
Robotic system: Transfer technology is the ideal answer for hard/soft combinations such as the flow straightener. The vertical MULTILIFT V robotic system turns the pre-moulded parts over in the mould. In addition, it removes the finished parts and transfers them first to a cooling station with integrated visual inspection, then on to a packaging system.
Process
Multi-component injection moulding: For the combined processing of thermoplastics and liquid silicone in a single mould, the necessary thermal separation presents a challenge: while the PBT needs to be cooled, the LSR cross-links at high temperatures. For a fully automated process, transfer technology offers the ideal solution.
Liquid silicone injection moulding (LSR): For sprueless part production, both the PBT main body and the LSR nozzles of the flow straightener are directly injected using hot and cold-runner technology. As a result, no non-recyclable waste is generated. Efficient use of materials is also ensured by an optimised emptying system for the LSR dosing unit.
Industry
Technical injection moulding: In technical injection moulding, high-precision components can be produced for a wide variety of applications. The production of flow straighteners is an example of how sophisticated mould technology and complex workflows can be comprehensively automated.
{ "pile_set_name": "Pile-CC" }
Changes in axonal physiology and morphology after chronic compressive injury of the rat thoracic spinal cord. The spinal cord is rarely transected after spinal cord injury. Dysfunction of surviving axons, which traverse the site of spinal cord injury, appears to contribute to post-traumatic neurological deficits, although the underlying mechanisms remain unclear. The subpial rim frequently contains thinly myelinated axons which appear to conduct signals abnormally, although it is uncertain whether this truly reflects maladaptive alterations in conduction properties of injured axons during the chronic phase of spinal cord injury or whether this is merely the result of the selective survival of a subpopulation of axons. In the present study, we examined the changes in axonal conduction properties after chronic clip compression injury of the rat thoracic spinal cord, using the sucrose gap technique and quantitatively examined changes in the morphological and ultrastructural features of injured axonal fibers in order to clarify these issues. Chronically injured dorsal columns had a markedly reduced compound action potential amplitude (8.3% of control) and exhibited significantly reduced excitability. Other dysfunctional conduction properties of injured axons included a slower population conduction velocity, a longer refractory period and a greater degree of high-frequency conduction block at 200 Hz. Light microscopic and ultrastructural analysis showed numerous axons with abnormally thin myelin sheaths as well as unmyelinated axons in the injured spinal cord. The ventral column showed a reduced median axonal diameter and the lateral and dorsal columns showed increased median diameters, with evidence of abnormally large swollen axons. Plots of axonal diameter versus myelination ratio showed that post-injury, dorsal column axons of all diameters had thinner myelin sheaths. 
Noninjured dorsal column axons had a median myelination ratio (1.56) which was within the optimal range (1.43-1.67) for axonal conduction, whereas injured dorsal column axons had a median myelination ratio (1.33) below the optimal value. These data suggest that maladaptive alterations in myelin sheath thickness occur postinjury and reduce the efficiency of axonal signal transmission. In conclusion, chronically injured dorsal column axons show physiological evidence of dysfunction, along with morphological changes in axonal diameter and a reduced myelination ratio. These maladaptive alterations to injured axons, including the decrease in myelin thickness and the appearance of axonal swellings, contribute to the decreased excitability of chronically injured axons. These results further clarify the mechanisms underlying neurological dysfunction after chronic neurotrauma and have significant implications for approaches to augment neural repair and regeneration.
{ "pile_set_name": "PubMed Abstracts" }
Q: String with memory garbage I'm having some problems working with files and functions. The code I'm writing should print a string to a file, but that string contains garbage and doesn't print what it should, even though it is used normally elsewhere. http://pastebin.com/JtGTDSeL

#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <string.h>

#define MAIOR_ID 0

int top_IDS(int ID_DNA, char *linhas[])
{
    FILE *arquivo;
    arquivo=fopen("MEL_PIOR.txt","a");
    fflush(stdin);
    fputs(linhas,arquivo);
    fprintf(arquivo,"\t%i\n",ID_DNA);
    fclose(arquivo);
    return 0;
}

int calcular_peso(char lin_1[],int *qtd_g, int *qtd_c)
{
    int i=0;
    int soma_pesos=0,I_D=0;
    int qt,qt2;
    gets(lin_1);
    for(i=0; i<10; i++)
    {
        if(lin_1[i]=='A' || lin_1[i]=='T')
            soma_pesos+=3;
        else
            soma_pesos+=7;
    }
    qt = *qtd_g;
    qt2 = *qtd_c;
    I_D=(soma_pesos+qt+qt2);
    printf("\tI_D: %i\n",I_D);
    if(I_D<50)
        printf("\tTem propencao a doencas cardiacas\n");
    else if(I_D>50)
        printf("\tTem propencao a doencas respiratorias\n");
    else
        printf("\tNada se pode afirmar sobre suas propencao as doencas \n");
    top_IDS(I_D,&lin_1);
    return 0;
}

int habilidades(int soma_guanina,int soma_adenina)
{
    if(soma_adenina>10)
        printf("\tTem propensão a atividades esportivas\n");
    else if(soma_guanina>10)
        printf("\tTem propensão a atividades artisticas\n");
    else
        printf("\tNada se pode afirmar sobre suas habilidades\n");
    return 0;
}

int cria_complementar(char ler_linha[],char complementar[])
{
    int contador=0;
    int qtd_A=0,qtd_T=0,qtd_C=0,qtd_G=0;
    int soma_AT=0,soma_CG=0;
    for(contador=0; contador<10; contador++)
    {
        if(ler_linha[contador]=='A')
        {
            complementar[contador]='T';
            qtd_A++;
        }
        else if(ler_linha[contador]=='T')
        {
            complementar[contador]='A';
            qtd_T++;
        }
        else if(ler_linha[contador]=='C')
        {
            complementar[contador]='G';
            qtd_C++;
        }
        else
        {
            complementar[contador]='C';
            qtd_G++;
        }
    }
    printf("\t");
    soma_AT=(qtd_A+qtd_T);
    soma_CG=(qtd_C+qtd_G);
    for(contador=0; contador<10; contador++)
    {
        printf("%c",complementar[contador]);
    }
    printf("\n\tA:%i T:%i C:%i G:%i",soma_AT,soma_AT,soma_CG,soma_CG);
    printf("\n\tA:%i%% T:%i%% C:%i%% G:%i%%\n\n",(soma_AT*100)/20,(soma_AT*100)/20,(soma_CG*100)/20,(soma_CG*100)/20);
    habilidades(soma_CG,soma_AT);
    calcular_peso(ler_linha,&qtd_G,&qtd_C);
    return 0;
}

int main()
{
    char bases[11],base_2[11];
    FILE *arquivo;
    arquivo = fopen("DNAS.txt","r");
    if(arquivo==NULL)
    {
        printf("O arquivo nao pode ser lido\n");
        system("pause");
        return 0;
    }
    while(fgets(bases,11,arquivo)!=NULL)
    {
        printf("\n");
        fgetc(arquivo);
        printf("\t%s\n",bases);
        cria_complementar(bases,base_2);
        system("pause");
        system("cls");
    }
    fclose(arquivo);
    return 0;
}

The file being read has the following format: CGATGCATGC (several lines using only A, T, C and G), and what gets written to the output file should be the same line followed by a number.

A: First thing I noticed:

int top_IDS(int ID_DNA, char *linhas[])
{
    FILE *arquivo;
    arquivo=fopen("MEL_PIOR.txt","a");
    fflush(stdin);
    fputs(linhas,arquivo);
    ...

1) You don't check whether the fopen() succeeded.
2) The fflush(stdin); doesn't seem to have any connection with the rest of this function's code.
3) fputs() takes a string and a FILE*, but linhas is not a string.

Without reading the rest of the code I can't say how to fix it. Turn on as many warnings as your compiler allows; in principle it should warn you that the call to fputs() is incorrect.

Either call fputs() with each of the elements of linhas:

for (k = 0; k < nlinhas; k++)
    fputs(linhas[k], arquivo);

Or redefine the function to accept a string instead of an array of strings:

int top_IDS(int ID_DNA, char *linhas)
Introduction {#Sec1} ============ Small heat-shock proteins (sHSPs) are a diverse family of proteins that share a conserved ≈ 90-residue α-crystallin domain (ACD) that is flanked by variable N- and C-terminal regions (Basha et al. [@CR3]; Hilton et al. [@CR17]; McHaourab et al. [@CR28]). Although sHSPs are relatively small as monomers (12 to 42 kDa), the majority assemble into large oligomers. These range in size from 12 to \> 40 subunits, with some family members being monodisperse and others forming polydisperse ensembles (Basha et al. [@CR3]; Hilton et al. [@CR17]; McHaourab et al. [@CR28]). Found in all kingdoms of life, many sHSPs have been demonstrated in vitro to act as ATP-independent molecular chaperones with the ability to capture denaturing proteins in a partially unfolded form such that they can be reactivated by the cell's ATP-dependent chaperones. Recent reviews have described models for this canonical mechanism of sHSP chaperone action; however, details are derived primarily from in vitro studies with recombinant proteins and model interactors from non-homologous organisms (Haslbeck and Vierling [@CR16]; Treweek et al. [@CR40]). Thus, a major gap in our understanding of sHSP mechanism is the considerable lack of information about which substrates they protect in the cell. In order to investigate the properties of proteins that are sHSP interactors, we identified HSP16.6 from the single-celled cyanobacterium *Synechocystis* sp. PCC 6803 (hereafter *Synechocystis*) as an ideal system to interrogate. HSP16.6 is the only sHSP in *Synechocystis* (Giese and Vierling [@CR12]; Lee et al. [@CR25]). It is strongly induced at high temperature, and cells deleted for HSP16.6 (Δ16.6) grow normally at optimal growth temperature but are sensitive to heat stress (Giese and Vierling [@CR12], [@CR13]). The temperature-sensitivity phenotype of Δ16.6 cells has enabled studies of sHSP properties required for activity in vivo in a homologous system. 
Crucially, point mutations in the N-terminal domain were found to decrease heat tolerance in vivo, but to have no effect on the efficiency of chaperone function in assays with model substrates in vitro (Giese et al. [@CR14]). This observation emphasizes the need to identify native interactors of sHSPs and renders *Synechocystis* an excellent system with which to do so. We previously used immunoprecipitation and mass spectrometry (MS)-based proteomics to identify 13 proteins associated in vivo with HSP16.6 from *Synechocystis* cells that had been heat-stressed prior to cell lysis (Basha et al. [@CR2]). Notably, these 13 proteins were not detected in equivalent pull-downs from cells that had not been heat-stressed, or when recombinant HSP16.6 was added to heat-stressed Δ16.6 cells before lysis (to control for sHSP-protein interactions that might occur in the lysate, as opposed to during heat stress in vivo). Although these proteins were associated with the sHSP in the soluble cell fraction, they were also found in the insoluble cell fraction after heat stress (Basha et al. [@CR2]). All of these proteins, whose functions span a variety of cellular processes, including translation, transcription, secondary metabolism, and cell signaling, could be released from the immunoprecipitate by addition of DnaK, co-chaperones, and ATP (Basha et al. [@CR2]). In addition, one of these interactors, a serine esterase, when purified, was shown to be heat sensitive and to associate with HSP16.6 and thereby be protected from insolubilization (Basha et al. [@CR2]). While these data identified 13 proteins as potential interactors for canonical sHSP chaperone function, their relatively small number meant it was not possible to derive any common protein features that might dictate interaction with the sHSP. Here, we have extended the identification of HSP16.6-interactors to a total of 83 proteins by performing an affinity pull-down from heat-stressed *Synechocystis*. 
By performing rigorous bioinformatic analyses, we provide new insights into the primary and secondary structural properties of proteins that interact with sHSPs in the soluble cell fraction during stress. We also catalogue the functions of the interactors and compare these to sHSP interactors previously identified in two other prokaryotes, *Escherichia coli* and *Deinococcus radiodurans* (Bepperling et al. [@CR4]; Fu et al. [@CR10]). Our combined results indicate that sHSPs protect a specific yet diverse set of proteins from aggregation in the cell. Methods {#Sec2} ======= Affinity isolation of HSP16.6-interacting proteins {#Sec3} -------------------------------------------------- Isogenic *Synechocystis* strains were used in which the wild-type HSP16.6 gene had been replaced with a spectinomycin resistance gene (*aadA* gene) (ΔHSP16.6 strain) or with the spectinomycin gene and HSP16.6 carrying a Strep-tag II affinity tag (WSHPQFEK) on the C-terminus (HSP16.6-Strep strain) (Basha et al. [@CR2]). This HSP16.6-Strep strain had been shown previously to behave like wild type in assays of heat tolerance (Basha et al. [@CR2]), and recombinant HSP16.6-strep protein was equivalent to untagged protein in assays of chaperone activity in vitro (Friedrich et al. [@CR9]). Cells were grown in 50-mL cultures at 30 °C as described previously to *A*~730~ ≈ 0.2 (Basha et al. [@CR2]) and then subjected to treatment at 42 °C for 2 h followed by 1 h recovery at 30 °C, to allow accumulation of HSP16.6-Strep protein. Control samples were prepared directly after this treatment, while heat-stressed samples were treated for an additional 30 min at 46 °C. To control for interaction of HSP16.6-Strep protein during sample processing, recombinant HSP16.6-Strep protein was added to heat-stressed samples of the ΔHSP16.6 strain directly after heat treatment at a concentration matching that in heat-stressed cells. 
Cells were harvested, suspended in 1.5 mL lysis buffer (25 mM HEPES-KOH, 0.2 M NaCl, 0.5% Triton X-100, 5 mM ϵ-aminocaproic acid, 1 mM benzamidine, 1 μg mL^−1^ leupeptin, and 1 mM EDTA, pH 7.5), and opened as described previously (Basha et al. [@CR2]). The soluble fraction was mixed with 30 μL of Strep-Tactin resin (Sigma) at 4 °C for 2 h. Resin was washed six times in lysis buffer, and bound proteins were eluted using either sample buffer (for SDS-PAGE) or isoelectric focusing (IEF) rehydration buffer (for 2D gels) (7.0 M urea, 2.5 M thiourea, 2% CHAPS, 2% IPG buffer pH 3--10 NL (Amersham Biotech), and 3 mg mL^−1^ dithiothreitol). For 2D gel analysis, pH 3--10 NL first dimension strips (18 cm; Amersham Biotech) were rehydrated overnight at room temperature using 600 μL of sample in IEF rehydration buffer. IEF was carried out for 2 h at 150 V, 2 h at 300 V, 5 h at 500 V, and 7 h at 3500 V. The second dimension was separated by 11--17% SDS-PAGE for 30 min at 15 mA and then for 7 h at 25 mA. Samples were also separated by SDS-PAGE according to standard protocols, using 8% acrylamide gels in order to afford good separation of proteins above 100 kDa, which are typically not well resolved on the 2D system. Gels were silver stained according to a previous protocol (Rabilloud [@CR33])*.* Protein identification by means of mass spectrometry {#Sec4} ---------------------------------------------------- Proteins unique to the heat-stressed HSP16.6-Strep sample were excised from 1D or 2D gels and digested with trypsin, and peptides were prepared for MS as described previously (Basha et al. [@CR2]). Peptide extracts were introduced onto a 100-μm I.D. × 5-cm C18 column using an autosampler and separated with a 25-min gradient of 2--100% acetonitrile in 0.5% formic acid. The column eluate was directed into a Thermo Finnigan LCQ Deca ion trap mass spectrometer. 
The mass range scanned was 400 to 1500 *m*/*z*, and data-dependent scanning was used to select the three most abundant ions in each parent scan for tandem MS. Peptides were searched using SEQUEST and allowed for static modification of Cys (57 Da; iodoacetamidation), and differential modification of Met (16 Da; oxidation) was considered. X correlation cutoffs of 2.0 for 2+ ions, 3.0 for 3+ ions, and delta Xcorr \> 0.05 were applied, and data were sorted using DTASelect (Tabb et al. [@CR39]). The complete list of 83 proteins identified as HSP16.6 interaction partners from these and our previous experiments (Basha et al. [@CR2]) is given in Supplemental Table [1](#MOESM1){ref-type="media"}. For the purpose of comparisons and calculations, this set is considered to represent sHSP interactors and denoted *I*, where \|*I*\| = 83. Known protein-protein interactions (PPIs) from yeast-2-hybrid experiments are available for *Synechocystis* (Sato et al. [@CR38]). We identified all PPIs made by members of *I* (Supplemental Table [1](#MOESM1){ref-type="media"}), excluding PPIs that were not identified with multiple positive prey clones, in order to avoid false positives. Bioinformatic analyses {#Sec5} ---------------------- The *Synechocystis* sp. PCC 6803 genome (Kaneko et al. [@CR22]; Kotani et al. [@CR23]) was obtained from CyanoBase, [http://genome.microbedb.jp/cyanobase/](http://genome.microbedb.jp/cyanobase) (Nakamura et al. [@CR31]). A set *G* representing the genome, containing all proteins such that *I* ⊆ *G*, was created from the protein-coding sequences in the genome. Only proteins with estimated isoelectric point (pI) within the range 4--9.5 and mass *m* between 10 and 200 kDa, corresponding to the range of proteins that could be identified in either the 1D or 2D gels, were included (see [Supporting Information](#Sec13){ref-type="sec"}). 
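The pI/mass filtering step can be sketched in Python. This is an illustrative reconstruction, not the scripts used in the study: the mass is computed from average residue masses, and the pI is estimated by bisecting a Henderson-Hasselbalch net-charge curve with textbook pKa values, which may differ from the estimator actually used for the paper.

```python
# Sketch of the genome-filtering step: keep proteins with 4 <= pI <= 9.5
# and 10 kDa <= mass <= 200 kDa. The pKa values below are generic
# textbook values (an assumption, not the paper's exact parameters).

# Average residue masses (Da); protein mass = residue sum + one water.
RESIDUE_MASS = {
    'A': 71.08, 'R': 156.19, 'N': 114.10, 'D': 115.09, 'C': 103.14,
    'E': 129.12, 'Q': 128.13, 'G': 57.05, 'H': 137.14, 'I': 113.16,
    'L': 113.16, 'K': 128.17, 'M': 131.19, 'F': 147.18, 'P': 97.12,
    'S': 87.08, 'T': 101.10, 'W': 186.21, 'Y': 163.18, 'V': 99.13,
}

# pKa values for positively and negatively ionizable groups.
PKA_POS = {'nterm': 9.0, 'K': 10.5, 'R': 12.5, 'H': 6.0}
PKA_NEG = {'cterm': 3.1, 'D': 3.9, 'E': 4.1, 'C': 8.3, 'Y': 10.1}

def protein_mass(seq):
    return sum(RESIDUE_MASS[aa] for aa in seq) + 18.02

def net_charge(seq, ph):
    # Henderson-Hasselbalch: fractional charge of each ionizable group.
    charge = 1.0 / (1.0 + 10 ** (ph - PKA_POS['nterm']))
    charge -= 1.0 / (1.0 + 10 ** (PKA_NEG['cterm'] - ph))
    for aa in seq:
        if aa in PKA_POS:
            charge += 1.0 / (1.0 + 10 ** (ph - PKA_POS[aa]))
        elif aa in PKA_NEG:
            charge -= 1.0 / (1.0 + 10 ** (PKA_NEG[aa] - ph))
    return charge

def estimate_pi(seq, lo=0.0, hi=14.0, tol=0.001):
    # Bisect for the pH at which the net charge crosses zero
    # (net_charge is monotonically decreasing in pH).
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if net_charge(seq, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def filter_genome(proteins):
    """proteins: dict mapping gene id -> amino-acid sequence."""
    kept = {}
    for gene, seq in proteins.items():
        m, pi = protein_mass(seq), estimate_pi(seq)
        if 10_000 <= m <= 200_000 and 4.0 <= pi <= 9.5:
            kept[gene] = seq
    return kept
```

Applied to the protein-coding sequences of the genome, a filter of this kind retains the subset *G* of proteins that could in principle be resolved on the gels.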
This filtering resulted in *G* comprising 3021 proteins (i.e., \|*G*\| = 3021), which amounts to \> 80% of the proteins encoded in the genome. The mass, sequence length *n*~aa~, and abundance (absolute numbers *n*~*F*~ and frequencies *f*~*F*~ = *n*~*F*~/*n*~aa~) of various sequence features *F* were determined for every protein. These were DnaK-binding motifs; VQL, IXI, and \[I/L/V\]X\[I/L/V\] motifs (where X refers to any amino acid); charged (D,E,H,K,R), positive (H,K,R), negative (D,E), and hydrophobic (C,F,I,L,M,V,W) residues. DnaK-binding motifs were identified using a previously described algorithm (Van Durme et al. [@CR41]), and the other motifs were found through *regexp* pattern-matching using the Python Standard Library. Long-range disorder was predicted with IUPred (Dosztanyi et al. [@CR6], [@CR7]) using default parameters, and residues with a score \> 0.5 were considered unstructured. For the remainder, secondary structure was predicted from the sequences using the EMBOSS (Rice et al. [@CR34]) implementation of the GOR method. β-strands and β-turns were pooled together into "β-structures." Average abundances were calculated separately for *I* and *G*. Statistical significance testing and representation {#Sec6} --------------------------------------------------- A bootstrapping approach was employed to assess the statistical significance of any differences between *I* and *G*. First, a random subset, *R*, was taken from *G* by arbitrarily picking, with replacement, of 83 proteins (i.e., *R* ⊆ *G* and \|*R*\| = \|*I*\|). 
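The motif counts and residue-class frequencies described above can be sketched with the Python Standard Library's `re` module. This is an illustrative reconstruction; in particular, overlapping motif occurrences are counted here via a lookahead, which is an assumption about how the original counting was done.

```python
import re

# Patterns for the sequence motifs of interest. The lookahead (?=(...))
# lets re.findall count overlapping occurrences.
MOTIFS = {
    "VQL": r"(?=(VQL))",
    "IXI": r"(?=(I.I))",
    "[ILV]X[ILV]": r"(?=([ILV].[ILV]))",
}

# Residue classes used for the compositional features.
CLASSES = {
    "charged": set("DEHKR"),
    "positive": set("HKR"),
    "negative": set("DE"),
    "hydrophobic": set("CFILMVW"),
}

def feature_frequencies(seq):
    """Return f_F = n_F / n_aa for each motif and residue class."""
    n_aa = len(seq)
    freqs = {}
    for name, pattern in MOTIFS.items():
        freqs[name] = len(re.findall(pattern, seq)) / n_aa
    for name, residues in CLASSES.items():
        freqs[name] = sum(aa in residues for aa in seq) / n_aa
    return freqs
```

Computing these per-protein frequencies for every member of *I* and *G* yields the quantities that are compared in the significance test below.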
The mean, $\overline{Q}_R$, was then calculated for the given quantity of interest *Q*, to allow comparison with $\overline{Q}_I$, the mean calculated from *I* for the same quantity. This was repeated *N* times, after which the *p* value was calculated as the frequency by which $\overline{Q}_R \ge \overline{Q}_I$ or $\overline{Q}_R \le \overline{Q}_I$, in the respective cases of $\overline{Q}_I > \overline{Q}_G$ and $\overline{Q}_I < \overline{Q}_G$.
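The bootstrap procedure can be sketched as follows. This is a minimal illustration, assuming the per-protein values of a quantity *Q* are available as plain lists; the number of iterations is reduced here for speed.

```python
import random

def bootstrap_p(values_I, values_G, n_iter=100_000, seed=0):
    """One-sided bootstrap p value for the difference in means between
    the interactor set I and the genome G: draw |I| proteins from G with
    replacement, and count how often the resampled mean is at least as
    extreme as the observed mean of I."""
    rng = random.Random(seed)
    mean_I = sum(values_I) / len(values_I)
    mean_G = sum(values_G) / len(values_G)
    higher = mean_I > mean_G  # direction of the observed difference
    extreme = 0
    for _ in range(n_iter):
        # Random subset R of G, with replacement, of size |I|.
        sample = rng.choices(values_G, k=len(values_I))
        mean_R = sum(sample) / len(sample)
        if (mean_R >= mean_I) if higher else (mean_R <= mean_I):
            extreme += 1
    return extreme / n_iter
```

With the difference then judged significant when the returned *p* value falls below the chosen threshold of 0.01.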
For each quantity, a total of *N* = 100,000 iterations was run, and the statistical significance was tested at the 0.01 level. Kernel density estimates were plotted for all quantities where a statistically significant difference was found. A Gaussian kernel with a bandwidth equal to 2% of the visible range was used in all cases and the amplitude was set such that the integrated density was equal to the number of proteins in each set. As such, the amplitudes are inversely proportional to the ranges along the *x*-axis, and their heights can thus differ substantially between distributions. Moreover, the *y*-axes' ranges were chosen to make the *I* and *G* distributions occupy the same visible area in the resulting plot. Biological function analysis {#Sec7} ---------------------------- A PANTHER Overrepresentation Test (release 20170413) against the GO Ontology database (release 20170926) was made for all proteins in *I*, using the *Synechocystis* reference list and the "GO biological process complete" annotation data set. Bonferroni correction was applied for multiple testing, and a *p* value cut-off of 0.05 was used to filter the results. Proteins that were not mapped to any entry in the reference list were added to the set of "unclassified" proteins. Enrichment was defined as *n*~*p*~/E(*n*~*p*~), where *n*~*p*~ is the number of proteins in *I* being ascribed to biological process *p*, and E(*n*~*p*~) is the expected number of such proteins based on their frequency in *G* and the size of *I*. Proteins that were assigned to the GO-class "biological process" but not to any of its subclasses were given the collective label "other biological process." Since a single protein can have multiple classifications, the sum of proteins in the different classes exceeds \|*I*\|. Protein BLAST (Altschul et al. [@CR1]) was used to find orthologs among the interactors identified for HSP16.6 in *Synechocystis*, IbpB in *E. coli* (Fu et al. [@CR10]), and HSP20.2 in *D. 
radiodurans* (Bepperling et al. [@CR4]). Three pairwise comparisons were made to define the overlap between the sets of interactors, where the list of interactors from one organism was used as the "database" and the list of interactors from the other as the "query." Using *E. coli* as the database yielded poorly annotated hits; hence, the primary database was set to be *Synechocystis* and the secondary database to be *D. radiodurans*. An *E* value cut-off of 10^−10^ was used for all BLAST searches, and whenever a protein in the query yielded several matches, the one with the lowest *E* value was chosen. Lastly, the overlap between *E. coli* and *D. radiodurans* was used as a query against *Synechocystis* in order to find the overlap between the interactors in all three organisms. The triply overlapping set of proteins was also subjected to an overrepresentation test in *Synechocystis* as described above, but without imposing a *p* value cut-off because of the small number of proteins in the query.

Results {#Sec8}
=======

Identification of proteins associated with HSP16.6 during heat stress in vivo {#Sec9}
-----------------------------------------------------------------------------

To identify a larger number of HSP16.6-associated proteins than we did previously (Basha et al. [@CR2]), we developed a *Synechocystis* strain in which the wild-type *HSP16.6* gene was replaced with an *HSP16.6* gene modified to encode a Strep-tag II at the C-terminus. HSP16.6-Strep was shown to complement HSP16.6 in vivo in thermotolerance assays (Basha et al. [@CR2]), as well as functioning in vitro to protect model interactors from irreversible heat-denaturation (Friedrich et al. [@CR9]). The HSP16.6-Strep strain and an isogenic strain carrying wild-type HSP16.6 were subjected to mild heat stress to allow accumulation of the sHSP and then to a short, more severe heat stress to maximize association of thermally unstable proteins with the sHSP.
The soluble cell fraction from control and heat-stressed cells of the HSP16.6-Strep and HSP16.6 strains was subjected to Strep-Tactin affinity chromatography and the recovered proteins compared by means of 2D electrophoresis (or, to examine high molecular mass proteins, by using 1D electrophoresis) (Fig. [1](#Fig1){ref-type="fig"}). Individual spots or bands unique to proteins affinity-purified with HSP16.6-Strep from the heat stress samples were excised and subjected to MS analysis.

Fig. 1 Identification of HSP16.6 interactors. **a** SDS-PAGE separation of proteins recovered in association with HSP16.6-Strep in cells grown at 30 °C and treated at 42 °C for 2 h plus 1 h recovery at 30 °C to allow sHSP accumulation (control sample, C) or further treated with an additional 30 min at 46 °C (heat-stressed sample, HS). To recover proteins in the high molecular mass range, separation was performed using an 8% acrylamide gel, and the position of molecular mass markers is indicated. Bands that were excised for analysis are annotated with red dashes. Double-width dashes indicate bands that gave hits for proteins associated with protein-folding processes. **b** 2D gel separation of samples prepared as described in **a**. The position of molecular mass markers and the acidic (+) and basic (−) sides of the silver-stained 2D gels are indicated. Spots that were excised and yielded the reported data are annotated with red circles (right panel). The ellipse in each panel indicates the spots due to HSP16.6.

We identified a total of 72 proteins in these experiments which, when combined with others we had identified previously (Basha et al. [@CR2]), expanded to a total of 83. Notably, the proteins were recovered from the soluble fraction, so they do not represent those that underwent excessive aggregation, or associations with membranes and cytoskeletal elements that may have led to partitioning into the pellet.
As such, these proteins represent potential sHSP interactors that have been prevented from insolubilization by interaction with HSP16.6. We denote this set of interactors *I*, representing a subset of the genome *G* detectable in our experiments. This allows us to test hypotheses about the features of these interactors to shed light on what distinguishes them from the other proteins in *Synechocystis*. Though many of the interactors have known PPIs, based on cross-referencing to genome-wide yeast-2-hybrid data (Sato et al. [@CR38]) (Supplementary Table [1](#MOESM1){ref-type="media"}), notably there are only three described pairwise PPIs within *I*, and all three of these are self-associations. To see if this low count was an artifact from our conservative approach of excluding PPIs that were identified with only one prey clone, we also tested including the latter, which presumably yields more false positives. This increased the number of pairwise PPIs within *I* to 12, including six self-interactions, which is still a small subset of *I*. Consequently, the proteins in *I* appear largely independent of each other in their interaction with HSP16.6, consistent with our affinity-isolate methodology being sensitive to stable interactors. Primary- and secondary-structure features of HSP16.6 interactors {#Sec10} ---------------------------------------------------------------- We first compared the average mass and sequence lengths of the interactors to the genome. We found that these were very different, with the interactors being about 60% larger on average (Table [1](#Tab1){ref-type="table"}, Fig. [2](#Fig2){ref-type="fig"}a, b). While this is informative about the interactor profile of HSP16.6, it also means that the absolute number, *n*~*F*~ of any feature *F*, is likely to be larger for the interactors. 
To account for this, all subsequent analyses are consequently focused on fractional quantities, *f*~*F*~, which are normalized by sequence length in order to reveal distinctive features for the proteins associated with HSP16.6.

Table 1 Comparison of various primary- and secondary-structure features between interactors of HSP16.6 in *Synechocystis* and the wider genome. Mean values obtained for the proteins in *I* and *G*, along with *p* values for the differences between them. Bold text indicates statistically significant differences, defined as *p* \< 0.01

| Quantity | Interactors, *I* | Genome, *G* | *p* value |
|---|---|---|---|
| *m*/Da | **57,860** | **36,561** | **\< 10^−5^** |
| *n*~aa~ | **525** | **336** | **\< 10^−5^** |
| *f*~DnaK~ | **0.0198** | **0.0285** | **\< 10^−5^** |
| *f*~VQL~ | 0.000335 | 0.000274 | 0.27 |
| *f*~IXI~ | 0.00349 | 0.00305 | 0.15 |
| *f*~\[ILV\]X\[ILV\]~ | **0.0378** | **0.0426** | **0.002** |
| pI | 5.22 | 5.63 | 0.036 |
| *f*~Charged~ | **0.252** | **0.230** | **6.0∙10^−5^** |
| *f*~+~ | 0.118 | 0.115 | 0.24 |
| *f*~−~ | **0.134** | **0.114** | **\< 10^−5^** |
| *f*~H-phobic~ | **0.309** | **0.331** | **1.0∙10^−5^** |
| *f*~d~ | 0.086 | 0.058 | 0.015 |
| *f*~β~ | **0.355** | **0.415** | **\< 10^−5^** |
| *f*~α~ | **0.383** | **0.338** | **3.1∙10^−5^** |

Fig. 2 Probability distributions of the statistically significant differences identified in Table [1](#Tab1){ref-type="table"}. **a**, **b** The distributions of protein mass (**a**) and sequence length (**b**) for *I* and *G*. The proteins in *I* are on average approximately 60% larger than those in *G*, both in terms of mass and sequence length. **c**, **d** Distributions of frequencies of \[I/L/V\]X\[I/L/V\] motifs (**c**) and DnaK-binding motifs (**d**). Both sequence features are less frequent and more narrowly distributed in *I*. **e**--**g** The fraction of hydrophobic (**e**), charged (**f**), and negative (**g**) residues. Charged residues are more frequent in *I*, which can be attributed to a higher fraction of negatively charged residues and a lower fraction of hydrophobic residues. **h**, **i** Fraction of residues with predominately helical (α and 3~10~, **h**) propensity and β-structure (sheet and turn, **i**). The helix content is higher in *I* than in *G*, and conversely, the β-structure content is lower in *I*. The distributions were normalized such that their integral equals the number of proteins in each set. Consequently, the amplitudes are inversely proportional to the width of the distributions, and the amplitudes of the two distributions in each panel reflect the different sizes of the two sets.

We judged that certain sequence motifs might be implicated in the association of interactors with sHSPs. To develop hypotheses for testing, we considered a model in which the interfaces that allow the sHSP to self-assemble might be the same as the interactor binding sites (Jacobs et al. [@CR20]). In this context, the inter-monomer contact made between the highly conserved "IXI" motif in the C-terminal region and the β4--β8 groove of the ACD has been proposed as an auto-inhibitory interface (Jehle et al. [@CR21]; van Montfort et al. [@CR42]). Theorizing that IXI motifs might mediate contacts with the sHSPs, we therefore asked whether they were differentially represented in the interactors. We also posed this question in a more general form, by searching for motifs matching the requirement \[I/L/V\]X\[I/L/V\], which is more encompassing across the breadth of sHSPs (Poulain et al. [@CR32]). Furthermore, we searched for VQL motifs, as this corresponds to the specific manifestation of the "IXI" in HSP16.6. Comparing the fractional abundance of these motifs (*f*~IXI~, *f*~\[ILV\]X\[ILV\]~, *f*~VQL~, respectively) between the interactors and the genome, we found there to be no meaningful difference for IXI and VQL, but the general form \[I/L/V\]X\[I/L/V\] was significantly under-represented in the interactors (Table [1](#Tab1){ref-type="table"}, Fig. [2](#Fig2){ref-type="fig"}c). sHSPs are thought to transfer interactors to the DnaK (HSP70 in eukaryotes) system for ATP-dependent refolding (Haslbeck and Vierling [@CR16]).
We therefore hypothesized that the presence of DnaK-binding motifs (Rudiger et al. [@CR36]), which mediate association with this downstream chaperone, might be different between the interactors and the genome. We found the fractional abundance of DnaK motifs (*f*~DnaK~) to be \> 30% lower in the interactors (Table [1](#Tab1){ref-type="table"}, Fig. [2](#Fig2){ref-type="fig"}d). We next considered electrochemical properties of the proteins. The difference in pI between the interactors and genome was just outside our significance criterion (*p* = 0.036 \> 0.01). However, when examining the fraction of charged residues (*f*~Charged~), we discovered it to be higher in the interactors. By investigating negatively and positively charged residues separately (*f*~−~ and *f*~+~, respectively), we found this difference to be due to the former, with negatively charged residues \> 16% more abundant in the interactors. Conversely, the genome contains a higher fractional abundance of hydrophobic residues (*f*~H-phobic~) (Table [1](#Tab1){ref-type="table"}, Fig. [2](#Fig2){ref-type="fig"}e--g). Lastly, we asked whether predicted secondary structure differed between the two sets. The fraction of residues in disordered regions (*f*~*d*~) is insignificantly higher in the interactors, albeit very near our threshold (*p* = 0.015 ≈ 0.01). For the structured regions, on average, the interactors had a higher fraction of residues in helices (*f*~α~) and lower fraction in β-structures (*f*~β~), compared to the proteins in the wider genome (Table [1](#Tab1){ref-type="table"}, Fig. [2](#Fig2){ref-type="fig"}h, i). Functional classification of HSP16.6-associated proteins {#Sec11} -------------------------------------------------------- Where possible, interactors were classified according to their gene-ontology annotation into either "metabolic process," "cellular process," or "other biological process." 
Many proteins were assigned to multiple classes, and 15 proteins could not be matched to the reference list and were added to the set of unclassified proteins, which then comprised 24 proteins. This classification yielded different distributions of processes in *I* and *G* (Fig. [3](#Fig3){ref-type="fig"}a), indicating that HSP16.6 has an interaction profile that reflects the biological function of its interactors. To quantify the differences, we calculated the overrepresentation of proteins involved in the various biological processes (Fig. [3](#Fig3){ref-type="fig"}b). The data reveal statistically significant enrichment of proteins ascribed to certain biological processes in the interactors, suggesting that HSP16.6 makes function-specific interactions. The most striking association was for proteins involved in protein folding, with 6 out of the 19 known such proteins being found in *I* (Table [2](#Tab2){ref-type="table"}), corresponding to a thirteen-fold enrichment.

Fig. 3 Classification of proteins involved in different gene-ontology annotations of biological processes. **a** Pie charts show the extent of different classes in *I* and *G*. The most fundamental classes have labels in bold face. Note that "cellular metabolic process" belongs to both "metabolic process" and "cellular process" and is therefore represented by two colors. **b** Enrichment within *I* of proteins taking part in the various biological processes. Circle areas reflect the number of proteins in *I*, and numbers indicate proteins in *I* and *G*. *I* contains a smaller fraction of unclassified proteins than *G*, and all classes are somewhat enriched in *I*. Proteins involved in protein folding are enriched thirteen-fold, with 6 of the 19 such proteins known being found among the interactors. Inset: Same analysis performed for the 10 overlapping proteins from the analysis in (**c**). In all featured classes, the fold-enrichment is higher. **c** Venn diagram showing the overlap of sHSP interactor ranges from *Synechocystis*, *E. coli*, and *D. radiodurans*. Note that, with the exception of the intersection of the three sets, all areas of the diagram reflect the number of elements within.

Table 2 The six interactors of *Synechocystis* HSP16.6 annotated as belonging to the "protein folding" category

| Gene | UniProt ID | Name |
|---|---|---|
| sll0058 | Q55154 | DnaK 1 |
| sll0170 | P22358 | DnaK 2 |
| sll1932 | P73098 | DnaK 3 |
| slr2076 | Q05972 | 60 kDa chaperonin 1 |
| sll0533 | Q55511 | Trigger factor (TF) |
| slr1251 | P73789 | Peptidyl-prolyl cis-trans isomerase |

To compare HSP16.6 interactors with those identified in other prokaryotes, we cross-referenced our list with those reported as IbpB interactors in *E. coli* (Fu et al. [@CR10]), and HSP20.2 interactors in *D. radiodurans* (Bepperling et al. [@CR4]) (Fig. [3](#Fig3){ref-type="fig"}c). There were unique orthologs for 17 HSP16.6 interactors among the 113 IbpB interactors and 17 among the 101 HSP20.2 interactors. The overlap between IbpB and HSP20.2 interactors was larger still, comprising 36 unique orthologs. A total of 10 proteins were found in all three sets of interactors. Notably, these overlaps are much larger than one would expect by chance (approximately 3 for each pairwise overlap, and fewer than 1 for the triple overlap). Interestingly, these proteins were also diverse, spanning multiple biological processes, with only one eluding classification (Table [3](#Tab3){ref-type="table"}, Fig. [3](#Fig3){ref-type="fig"}b inset). With the exception of the "protein folding" and "other biological process" categories, which were not represented at all in this subset, all categories were even more overrepresented than in the complete list of HSP16.6 interactors. We note that the small number of proteins precluded low *p* values for the levels of enrichment for the individual categories.
Taken together, they nonetheless indicate that the enrichment pattern seen for the *Synechocystis* interactors is particularly prominent for the interactors that are common to all three sHSPs, with the striking exception of the protein-folding interactors, which might be a species- or sHSP-specific phenomenon, or the result of differences in the methods used for recovering interacting proteins.

Table 3: Proteins that we associated to all three of HSP16.6 (*Synechocystis*), IbpB (*E. coli*), and HSP20.2 (*D. radiodurans*). The GO annotations for biological processes are coded as follows: metabolic process (MP), cellular process (CP), nitrogen-compound metabolic process (NCMP), primary metabolic process (PMP), biosynthetic process (BP), organic substance metabolic process (OSMP), cellular metabolic process (CMP), and unclassified (U). In some cases, two distinct IbpB or HSP20.2 interactors would correspond to an HSP16.6 interactor, in which case both UniProt IDs were included in the table.

| *Synechocystis* gene | UniProt ID (*Synechocystis*) | *E. coli* | *D. radiodurans* | Name | GO biological process |
|---|---|---|---|---|---|
| sll0018 | Q55664 | G64976n | NP_295312.1 | Fructose-bisphosphate aldolase, class II | MP, CP, NCMP, PMP, OSMP, CMP |
| sll1099 | P74227 | NP_289744.1, pdb\|1EFC\|A | NP_295522.1 | Elongation factor Tu | MP, CP, NCMP, PMP, BP, OSMP, CMP |
| sll1180 | P74176 | NP_287490.1 | NP_295291.1 | Toxin secretion ABC transporter ATP-binding protein | CP, NCMP, PMP, OSMP |
| sll1326 | P27179 | CAA23519.1 | NP_294424.1 | ATP synthase alpha chain | MP, CP, NCMP, PMP, BP, OSMP, CMP |
| sll1787 | P77965 | AAC43085.1 | NP_294636.1 | RNA polymerase beta subunit | MP, CP, NCMP, PMP, BP, OSMP, CMP |
| sll1789 | P73334 | NP_290619.1 | NP_294635.1 | RNA polymerase beta' subunit | MP, CP, NCMP, PMP, BP, OSMP, CMP |
| sll1818 | P73297 | CAA37838.1 | NP_295851.1 | RNA polymerase alpha subunit | MP, CP, NCMP, PMP, BP, OSMP, CMP |
| sll1841 | P74510 | NP_285811.1, NP_286443.1 | NP_293809.1, NP_293979.1 | Pyruvate dehydrogenase dihydrolipoamide acetyltransferase component (E2) | MP |
| slr0542 | P54416 | NP_286179.1 | NP_295695.1 | ATP-dependent protease ClpP | MP, NCMP, PMP, OSMP |
| slr1105 | P72749 | NP_289127.1 | NP_294922.1 | GTP-binding protein TypA/BipA homolog | U |

Discussion {#Sec12}
==========

Here, we have examined the properties of 83 proteins that associate in vivo with HSP16.6 under conditions of heat stress. Given that the proteins were obtained from the soluble supernatant after centrifugation, they are likely to under-represent membrane- and cytoskeleton-associated proteins. Furthermore, as our experiment involves affinity pull-downs, these interactors are inevitably restricted to those that form interactions that are stable on the timescale of the experiment. In the context of the model proposed for sHSPs wherein they display both a low-affinity mode with high capacity and a high-affinity mode with low capacity (McHaourab et al. [@CR28]), our interactors are likely representative of the latter.
Notwithstanding these potential biases of the experiment, we have shown that the interactors were on average larger than the proteins in the genome, have a distinct electrochemical profile, an increased fraction of helical secondary structure, and a lower fraction of \[I/L/V\]X\[I/L/V\] and DnaK-binding motifs.

We observed that HSP16.6 preferentially binds longer, more massive proteins. This is in agreement with analyses of sHSP interactors in *E. coli* and *D. radiodurans* (Fu et al. [@CR11]) and is interesting in light of recent data noting that thermally unstable proteins in cells are typically longer than those that are stable (Leuenberger et al. [@CR26]). Longer proteins might therefore be overrepresented in the interactors by virtue of being more likely to be destabilized by the heat-shock condition assayed here. Alternatively, or in addition, it is possible that longer proteins, by virtue of having more binding sites, might be held more tightly by the sHSPs. This would stem from avidity effects resulting from the multivalency of sHSP oligomers (Hilton et al. [@CR17]), similar to observations made for other molecular chaperones (Huang et al. [@CR19]; Saio et al. [@CR37]).

Upon considering amino acid motifs and composition, we found a lower fraction of \[I/L/V\]X\[I/L/V\] motifs in the interactors. This suggests that the β4--β8 groove, which binds this motif intra-molecularly in sHSP oligomers (Basha et al. [@CR3]; Hilton et al. [@CR17]), is not the binding site for these stable interactors. However, this does not preclude the β4--β8 groove being a site for low-affinity, or transient, interactions. This is consistent with the observation that the excised ACD can display potent chaperone activity (Cox et al. [@CR5]; Hochberg et al. [@CR18]). We also identified an overabundance of charged and, in particular, negatively charged residues in the interactors. A preponderance of charged residues was also observed for sHSP interactors in *E. coli* and *D. radiodurans* (Fu et al. [@CR11]). Notably, aspartates have been shown to be enriched in thermally unstable proteins (Leuenberger et al. [@CR26]), again hinting that thermal stability could be a key attribute for recognition by sHSPs. It is also interesting to consider the electrochemical profile of the sHSPs themselves, which have an overabundance of charged residues in the ACD and C-terminal region (Kriehuber et al. [@CR24]). As such, it is possible that there may be charge-complementarity aspects to binding.

The depletion of DnaK-binding motifs in the HSP16.6 interactors is striking, particularly when considering that DnaK is able to release interactors from the complexes made with HSP16.6. This suggests that the DnaK-binding motif is not responsible for the recognition events that mediate interactor transfer between the chaperones. Instead, the DnaK-binding motif may be more reflective of DnaK's holdase, rather than refoldase, activity. In this way, proteins that are not protected by the sHSPs are captured by HSP70 instead (Mayer and Bukau [@CR27]). The interactors are also enriched in α-helical propensity and depleted in β-structure. Based on the observation that there is little cooperativity in the folding of β-sheets (Wu and Zhao [@CR43]), this may be reflective of physico-chemical differences in re- or unfolding.

Gene-ontology analysis demonstrates that, while capable of associating with many interactors, HSP16.6 nonetheless does so with statistically significant specificity, evidenced by varying enrichments for different biological processes. This observation is validated by the overlap between *Synechocystis*, *E. coli*, and *D. radiodurans* sHSP interactors. The notion that sHSPs have specific interactors in the cell also extends to eukaryotes, where different sHSPs found in the same cellular compartment have differing interactor profiles (Fleckenstein et al. [@CR8]; McLoughlin et al. [@CR29]; Mymrikov et al. [@CR30]).
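The motif comparison discussed above can be made concrete with a small sequence scan. The sketch below is a hypothetical illustration (the sequences are toy strings; a real analysis would run over the full proteome against a statistical background model). It counts overlapping \[I/L/V\]X\[I/L/V\] sites using a lookahead so that adjacent, overlapping motifs are not missed.

```python
import re

# Overlapping [I/L/V]X[I/L/V] sites; the lookahead keeps e.g. "IAVGL"
# from hiding its second site ("VGL") inside the first match ("IAV").
ILV_MOTIF = re.compile(r"(?=([ILV].[ILV]))")

def motif_density(seq: str) -> float:
    """Motif sites per residue of sequence."""
    return len(ILV_MOTIF.findall(seq)) / max(len(seq), 1)

# Toy sequences for illustration only (not real Synechocystis proteins)
print(ILV_MOTIF.findall("IAVGL"))   # two overlapping sites: IAV, VGL
print(motif_density("MKKLLAVIGGG"))
```

Comparing such per-protein densities between an interactor set and the genome-wide background is one simple way to quantify the depletion reported here.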
The most enriched group of proteins associated with HSP16.6 comprised other components of the protein-folding machinery. We interpret this as a consequence of HSP16.6 being part of a tightly linked molecular chaperone network (Gong et al. [@CR15]), collaborating to prevent and reverse improper protein interactions in the wider heat-shock response of the cell (Richter et al. [@CR35]). Possibly, these interactions are indirect, captured due to HSP16.6 and other protein-folding components acting on the same substrates. An indirect interaction with protein-folding components could also explain the lack of equivalent proteins among the *E. coli* sHSP interactors (Fu et al. [@CR10]), as the previous report employed covalent crosslinking and urea solubilization prior to immunoprecipitation. The *D. radiodurans* interactors were identified by a different method, employing ex vivo addition of purified HSP20.2 to cell lysates prior to heat stress and immunoprecipitation. Given the differences in methodology between these studies, we suggest that those proteins comprising common interactors are highly significant (Table [3](#Tab3){ref-type="table"}).

In sum, our study provides an initial view of the functional interactome of prokaryotic sHSPs, and of *Synechocystis* in particular. In addition, the statistical framework we have implemented for examining sequence determinants can be applied to the analysis of the likely future profusion of proteomic data identifying molecular chaperone interactors in cells.

Electronic supplementary material {#Sec13}
=================================

ESM 1 (DOCX 28 kb)

The original version of this article was revised: Table 1 needed corrections. The DOI of the Erratum is: 10.1007/s12192-018-0901-6

**Electronic supplementary material** The online version of this article (10.1007/s12192-018-0884-3) contains supplementary material, which is available to authorized users.

A correction to this article is available online at <https://doi.org/10.1007/s12192-018-0901-6>.

**Change history** 5/3/2018: Table 1 in the original publication has been corrected.

We thank Linda Breci (University of Arizona) for performing MS experiments and Georg Hochberg (University of Chicago) for helpful discussions. This work was supported by the Swedish Research Council and the European Commission through a Marie Skłodowska-Curie International Career Grant (2015-00559) to EGM, the Biotechnology and Biological Sciences Research Council (BB/K004247/1) to JLPB, and the National Institutes of Health (RO1 GM42762) to EV.
Buddha's Lost Children

Buddha's Lost Children is a 2006 documentary film by Dutch director Mark Verkerk. The feature film tells the story of Khru Bah Neua Chai Kositto, a Buddhist monk who has dedicated his life to orphaned children in the Golden Triangle area of Thailand. The film opened in Dutch cinemas in September 2006.

Awards

The film won the International Documentary Grand Jury Prize (2006) at the Los Angeles AFI Fest, the Jury Award for Documentary (2007) at the Newport Beach Film Festival, the Best Global Insight Film (2007) at the Jackson Hole Film Festival, the David L. Wolper Best Documentary Award (2007) at the Napa Sonoma Valley Film Festival, the City of Rome Award (2006) at the Asiaticafilmmediale in Rome, the Crystal Film (2006) at the Netherlands Film Festival, and the Silver Dove (2006) at Dok Leipzig.

External links

Category:2006 films
Category:Dutch films
Category:Thai-language films
Category:Documentary films about Buddhism
Category:Dutch documentary films
Category:Documentary films about orphanages
Category:2000s documentary films
Monte Carlo studies of three-dimensional O(1) and O(4) phi(4) theory related to Bose-Einstein condensation phase transition temperatures.

The phase transition temperature for the Bose-Einstein condensation (BEC) of weakly interacting Bose gases in three dimensions is known to be related to certain nonuniversal properties of the phase transition of three-dimensional O(2) symmetric phi(4) theory. These properties have been measured previously in Monte Carlo lattice simulations. They have also been approximated analytically, with moderate success, by large N approximations to O(N) symmetric phi(4) theory. To begin investigating the region of validity of the large N approximation in this application, the same Monte Carlo technique developed for the O(2) model [P. Arnold and G. Moore, Phys. Rev. E 64, 066113 (2001)] has been applied to O(1) and O(4) theories. The results indicate that there might exist some theoretically unanticipated systematic errors in the extrapolation of the continuum value from lattice Monte Carlo results. The final results show that the difference between simulations and next-to-leading order large N calculations does not improve significantly from N=2 to N=4. This suggests that one would need to simulate yet larger N's to see true large N scaling of the difference. Quite unexpectedly (and presumably accidentally), the Monte Carlo result for N=1 seems to give the best agreement with the large N approximation among the three cases.
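Simulations of this kind rest on Metropolis updates of an O(N)-symmetric phi(4) field on a cubic lattice. The following is a generic, minimal sketch of such an update; the lattice size, couplings, and proposal width are illustrative placeholders, not the matched lattice parameters used in the actual study.

```python
import math
import random

random.seed(42)

L, N = 4, 2                  # 4^3 lattice; N field components (N = 1, 2, 4 analogous)
KAPPA, LAMBDA = 0.30, 0.50   # illustrative couplings, not tuned to criticality
V = L ** 3

def site(x, y, z):
    # Periodic boundary conditions via modular indexing
    return (x % L) + L * ((y % L) + L * (z % L))

# phi[s] is an N-component real vector at lattice site s
phi = [[0.0] * N for _ in range(V)]

def neighbor_sum(x, y, z):
    """Sum of phi over the six nearest neighbors of (x, y, z)."""
    total = [0.0] * N
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        nbr = phi[site(x + dx, y + dy, z + dz)]
        for a in range(N):
            total[a] += nbr[a]
    return total

def local_action(val, nbr_sum):
    """Site action: -2*kappa*phi.nbr + phi^2 + lambda*(phi^2 - 1)^2."""
    phi2 = sum(v * v for v in val)
    hop = sum(val[a] * nbr_sum[a] for a in range(N))
    return -2.0 * KAPPA * hop + phi2 + LAMBDA * (phi2 - 1.0) ** 2

accepted = proposed = 0
for sweep in range(20):
    for x in range(L):
        for y in range(L):
            for z in range(L):
                s = site(x, y, z)
                nbr = neighbor_sum(x, y, z)
                old = phi[s]
                new = [v + random.uniform(-0.5, 0.5) for v in old]
                dS = local_action(new, nbr) - local_action(old, nbr)
                proposed += 1
                # Metropolis accept/reject on the change in local action
                if dS <= 0.0 or random.random() < math.exp(-dS):
                    phi[s] = new
                    accepted += 1

acceptance = accepted / proposed
magnetization = [sum(phi[s][a] for s in range(V)) / V for a in range(N)]
```

A production study would add thermalization, many more sweeps on much larger lattices, improved or matched lattice actions, and a careful continuum extrapolation; this sketch only shows the core update rule.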
Kevin Mansker

Kevin Mansker (born ) is an American male track cyclist. He competed in the sprint event at the 2012 UCI Track Cycling World Championships.

References

External links

Profile at cyclingarchives.com

Category:1989 births
Category:Living people
Category:American track cyclists
Category:American male cyclists
Category:Place of birth missing (living people)
UNPUBLISHED

UNITED STATES COURT OF APPEALS FOR THE FOURTH CIRCUIT

No. 12-1195

MARY T. LACLAIR, Individually and as Personal Representative of the Estate of Cameron J. LaClair, Jr., Plaintiff – Appellant,

v.

SUBURBAN HOSPITAL, INCORPORATED, Defendant – Appellee,

and

PHYSICAL THERAPY AND SPORTS MEDICINE BINH M. TRAN, P.T., INC.; CATHERINE L. COELHO, M.P.T., f/k/a Catherine Chamberlain; SUBURBAN HOSPITAL FOUNDATION, INC.; SUBURBAN HOSPITAL HEALTHCARE SYSTEM, INC., Defendants.

Appeal from the United States District Court for the District of Maryland, at Greenbelt. Peter J. Messitte, Senior District Judge. (8:10-cv-00896-PJM)

ARGUED: January 31, 2013    Decided: April 15, 2013

Before TRAXLER, Chief Judge, and KEENAN and THACKER, Circuit Judges.

Affirmed by unpublished per curiam opinion.

ARGUED: Patrick Michael Regan, REGAN ZAMBRI LONG & BERTRAM, Washington, D.C., for Appellant. Michael E. von Diezelski, ADELMAN, SHEFF & SMITH, LLC, Annapolis, Maryland, for Appellee. ON BRIEF: Jacqueline T. Colclough, REGAN ZAMBRI LONG & BERTRAM, Washington, D.C., for Appellant.

Unpublished opinions are not binding precedent in this circuit.

PER CURIAM:

Mary T. LaClair, individually and as personal representative of the estate of her husband, Cameron J. LaClair, Jr., appeals the district court’s order finding that the Appellee, Suburban Hospital, Inc. (“Suburban”), and Physical Therapy and Sports Medicine (“PTSM”), were joint tortfeasors with respect to her husband’s injuries sustained while he was a patient at Suburban. Mr. LaClair was first injured while receiving physical therapy at PTSM. After undergoing surgery at Suburban for that injury, he was further injured by the actions of Suburban’s patient care technicians. Suburban asks us to affirm the district court’s conclusion that it is a joint tortfeasor with PTSM because its actions did not constitute a superseding cause of harm to Mr. LaClair.
In unraveling this appeal, Maryland law directs us to several provisions of the Restatement (Second) of Torts, each of which is grounded in the idea that an intervening act is not a superseding cause if it was foreseeable at the time of the primary negligence. Because the harm and injuries sustained at Suburban were foreseeable consequences of the alleged negligence of PTSM, Suburban’s actions were not a superseding cause of Mr. LaClair’s injuries. Thus, Suburban and PTSM are joint tortfeasors, and we affirm.

I.

A.

On November 1, 2007, Mr. LaClair, a “vibrant former CIA officer” in his mid-80s, J.A. 211,[1] sustained an injury while receiving physical therapy at the PTSM facility (the “November 1 incident”). He was attempting to secure himself in a piece of exercise equipment and fell onto the floor, while his physical therapist had stepped away. He was taken by ambulance to Suburban, where he was diagnosed with a cervical fracture and dislocation. Dr. Alexandros Powers, a neurosurgeon, performed surgery on Mr. LaClair on November 3, 2007. The surgery entailed Dr. Powers inserting screws and rods to secure Mr. LaClair’s spine. According to Dr. Powers, the surgery “was successful and proceeded without complication, and Mr. LaClair’s prognosis at that time included a complete and total recovery free from future cervical spine surgery.” J.A. 227. Dr. Powers stated that, as of the morning of November 6, 2007, Mr. LaClair was “recovered and was to be discharged [from Suburban] to a rehabilitation facility” the next day, and “there was no plan or expectation for subsequent cervical spine surgeries due to the success of the November 3 surgery[.]” J.A. 228.

[1] Citations to the “J.A.” refer to the Joint Appendix filed by the parties in this appeal.

Later on November 6, Mr. LaClair was transferred from ICU to a regular room, and his catheter was removed. He needed assistance using the bathroom, and, after Mrs. LaClair called several times for assistance, two patient care technicians responded. Mr. LaClair used the bathroom, and the patient care technicians attempted to reposition him in his hospital bed. Although Suburban claims Mrs. LaClair “resort[s] to hyperbole when referring to the conduct of November 6,” and the patient care technicians, while perhaps negligent, were “performing their normal duties when they were aiding Mr. LaClair and repositioning him in bed,” Br. of Appellee 6, Mrs. LaClair views the incident as out of bounds because her husband’s “head was violently pushed against the side rail of the bed and he cried out in pain,” Br. of Appellant 4. Mrs. LaClair testified that one of the patient care technicians was “very rough,” explaining, “her motions were gross motions. They weren’t careful motions. And I thought, with somebody with a broken neck, I think I’d be careful, but there was none of that.” J.A. 362-63 (the “November 6 incident”).

There is no dispute that Mr. LaClair sustained additional injuries as a result of the November 6 incident. Dr. Powers examined Mr. LaClair and found “a fracture of the C7 endplate, dislocation at C6/C7, dislodging of the screws placed in previous surgery, ligament damage and hemorrhage, nerve root injury at the level of C7 and C8 and spinal cord injury.” J.A. 228. He determined Mr. LaClair could no longer be discharged on November 7 as previously scheduled, but rather, needed to undergo an additional surgery on November 8. Mr. LaClair later underwent a third surgery on February 6, 2008, at Georgetown University Hospital. He spent nearly five months hospitalized, underwent plaster casting of his cervical spine, developed bedsores, and ultimately required a feeding tube. Mrs. LaClair presented evidence to the district court that as a result of the November 6 incident, Mr. LaClair’s medical bills totaled over $1.05 million and had a projected future cost of $900,000. Another physician testified that absent the November 6 incident, his medical and rehabilitation expenses would have been only $75,000 to $125,000.

B.

The LaClairs filed two separate lawsuits: first, against PTSM for injuries stemming from the November 1 incident (filed March 19, 2009) (the “PTSM lawsuit”), and second, against Suburban for “separate and distinct” injuries stemming from the November 6 incident (filed April 15, 2010) (the “Suburban lawsuit”).[2]

[2] Mr. LaClair passed away on November 4, 2011, during the course of this litigation. Mrs. LaClair took over as personal representative of his estate and was substituted as Plaintiff on January 25, 2012.

The PTSM lawsuit alleged that PTSM was responsible for not only the injuries and damages incurred from the November 1 incident at PTSM’s facility, but also the injuries and damages incurred from the November 6 incident at Suburban. See J.A. 48 (PTSM Complaint) (“Plaintiff was taken via ambulance to Suburban [] where he was diagnosed with a cervical fracture and dislocation. Plaintiff remained at Suburban until November 13, 2007, where he underwent two surgical procedures to repair his cervical fracture, among other things.”). During discovery, however, Dr. Powers testified on January 5, 2010, that the injuries stemming from the November 1 incident were “separate, distinct, and divisible” from those sustained by the November 6 incident. Id. at 229, 262-329. Subsequently, the LaClairs settled with PTSM for $1 million on March 5, 2010. The Settlement Agreement specifically recognized that the LaClairs would be pursuing separate claims against Suburban, in connection with the November 6 incident alone:

    In any future action against [Suburban], the plaintiffs agree to file a pre-trial motion with the court attempting to establish that the conduct of Suburban . . . constituted superintervening negligence, and that these defendants are not joint tortfeasors with Suburban[.]
    The purpose of this requirement is to obviate the need for [PTSM] to be named as [a] part[y] in any future litigation.

J.A. 179.

The Suburban lawsuit, filed about six weeks after the PTSM settlement, alleges that Mr. LaClair suffered injuries from the November 6 incident that were separate and distinct from those of the November 1 incident. This litigation settled on May 31, 2011. Pursuant to the Settlement Agreement between the LaClairs and Suburban, however, the parties agreed to submit to the district court the question of whether PTSM and Suburban were joint tortfeasors in connection with the November 6 incident, or whether those injuries were separate and distinct such that Suburban alone would be liable. Pursuant to the Settlement Agreement, Suburban agreed to make an initial $650,000 payment to the LaClairs and further agreed to make an additional payment of $600,000 in the event that the court found PTSM and Suburban were not joint tortfeasors as to the November 6 incident.

C.

In accord with the PTSM Settlement Agreement, the LaClairs filed a pre-trial motion in the Suburban lawsuit on June 10, 2011, asking for judicial determination that Suburban was a “successive tortfeasor” and therefore, not entitled to joint tortfeasor credit for the November 6 incident. J.A. 140.[3] That same day, Suburban filed a memorandum explaining why it should bear joint tortfeasor status with PTSM. The district court held a motions hearing on January 20, 2012, and decided that Suburban was indeed a joint tortfeasor with PTSM such that Mrs. LaClair could not recover additional damages. The district court explained,

    [T]his was not highly extraordinary. That this kind of thing could well have happened, even if the doctors did not see it or had seen it themselves. But a reasonable man knowing what they knew at the time would conclude that this sort of thing might happen. . . .
    I am persuaded by the fact that if what happens is reasonably close to the reason for the initial hospitalization, which is what this was, then you really do have a kind of a continuous flow here, and whatever negligence you have is really part and parcel of the initial negligence, too. And so I do conclude on these facts that the liability of the – the defendant, Suburban Hospital, is joined and not independent.

J.A. 771. The court entered a short, one-page order to this effect on January 24, 2012, naming Suburban as a joint tortfeasor “for reasons stated in the record.” Id. at 797. It is from that order that Mrs. LaClair appeals.

[3] Solely for purposes of the motion on the causation issue, Suburban conceded that it was negligent on November 6, 2007, but it continued to dispute all issues of causation and damages.

II.

The parties submit that the district court’s order is reviewed for clear error. However, this analysis necessarily involves deciding whether the district court correctly applied Maryland law, and thus, we approach this appeal “by inspecting factual findings for clear error and examining de novo the legal conclusions derived from those facts.” F.C. Wheat Mar. Corp. v. United States, 663 F.3d 714, 723 (4th Cir. 2011). A finding is clearly erroneous when “although there is evidence to support it, the reviewing court on the entire evidence is left with the definite and firm conviction that a mistake has been committed.” Anderson v. City of Bessemer City, N.C., 470 U.S. 564, 573 (1985) (internal quotation marks omitted).

Because this case is in federal court based on diversity jurisdiction, the substantive law of the forum state — in this case, Maryland — applies. See Erie R.R. v. Tompkins, 304 U.S. 64, 78 (1938). We should determine:

    how the [Court of Appeals of Maryland] would rule.
    If th[at] [court] has spoken neither directly nor indirectly on the particular issue before us, we are called upon to predict how that court would rule if presented with the issue. In making that prediction, we may consider lower court opinions in [Maryland], the teachings of treatises, and the practices in other states.

Twin City Fire Ins. Co. v. Ben Arnold-Sunbelt Beverage Co., 433 F.3d 365, 369 (4th Cir. 2005) (internal quotation marks and citations omitted).

III.

A.

PTSM will not be jointly liable for the November 6 incident “if it appears highly extraordinary and unforeseeable that the plaintiffs’ injuries [on November 6] occurred as a result of [PTSM’s] alleged tortious conduct.” Pittway Corp. v. Collins, 973 A.2d 771, 788 (Md. 2009). Accordingly, PTSM avoids liability for the November 6 incident “only if the intervening negligent act,” i.e., Suburban’s conduct, “is considered a superseding cause of the harm to” Mr. LaClair. Id. at 789; see also Morgan v. Cohen, 523 A.2d 1003, 1004-05 (Md. 1987) (“It is a general rule that a negligent actor is liable not only for harm that he directly causes but also for any additional harm resulting from normal efforts of third persons in rendering aid, irrespective of whether such acts are done in a proper or a negligent manner.”).

Maryland courts (and federal district courts sitting in diversity) have addressed the superseding cause issue with varying results. Pittway is the seminal Maryland case on superseding cause, providing a framework for analyzing an argument that an intervening act cuts off the liability of an original tortfeasor. The Court of Appeals of Maryland explained:

    The defendant is liable where the intervening causes, acts, or conditions were set in motion by his earlier negligence, or naturally induced by such wrongful act . . . or even it is generally held, if the intervening acts or conditions were of a nature, the happening of which was reasonably to have been anticipated[.]
Pittway, 973 A.2d at 789 (internal quotation marks and alteration omitted).

Pittway recognizes that Section 442 of the Restatement (Second) of Torts establishes the test applied in Maryland courts for analyzing superseding cause:

    The following considerations are of importance in determining whether an intervening force is a superseding cause of harm to another:

    (a) the fact that its intervention brings about harm different in kind from that which would otherwise have resulted from the actor’s negligence;

    (b) the fact that its operation or the consequences thereof appear after the event to be extraordinary rather than normal in view of the circumstances existing at the time of its operation;

    (c) the fact that the intervening force is operating independently of any situation created by the actor’s negligence, or, on the other hand, is or is not a normal result of such a situation;

    (d) the fact that the operation of the intervening force is due to a third person’s act or his failure to act;

    (e) the fact that the intervening force is due to an act of a third person which is wrongful toward the other and as such subjects the third person to liability to him;

    (f) the degree of culpability of a wrongful act of a third person which sets the intervening force in motion.

Restatement (Second) of Torts § 442 (1965); Pittway, 973 A.2d at 789.

B.

We conclude that the district court did not err in finding that Suburban and PTSM were joint tortfeasors.

1.

The majority of the Restatement Section 442 factors weigh in favor of a conclusion that Suburban and PTSM were joint tortfeasors.

a.

As to factor (a), above, Mrs. LaClair attempts to show that the injuries sustained on November 6 were “separate and distinct” from those sustained on November 1, and thus, “different in kind.” See Br. of Appellant 3-9. We first note that we would be hard-pressed to find a case regarding subsequent negligent medical care in which there was not a “separate and distinct” injury after the injury caused by the initial actor’s negligence. This, alone, does not lead us to the conclusion that the negligent medical care is a superseding cause of harm. See Underwood-Gary v. Mathews, 785 A.2d 708, 713 (Md. 2001) (“[W]hen a physician negligently treats the plaintiff’s injuries, the physician becomes liable to the plaintiff to the extent of the harm caused by the physician’s negligence. Thus, the physician’s negligent treatment is a subsequent tort for which both the doctor and the original tortfeasor are jointly liable.” (internal citations omitted)).

In any event, the harm brought about by the November 6 incident was not so different from the type of harm that is likely to result from an 86-year-old man’s fall from a piece of exercise equipment, even assuming, as Mrs. LaClair would have us do, that a severe spinal cord injury resulted from Mr. LaClair’s repositioning in his bed. For these reasons, factor (a) weighs in favor of Suburban.

b.

In addressing factor (b), the Restatement directs us to look to Restatement (Second) of Torts § 435(2), Comments (c) and (d). Comment (c) provides, in part, “Where it appears to the court in retrospect that it is highly extraordinary that an intervening cause has come into operation, the court may declare such a force to be a superseding cause.” Restatement (Second) of Torts § 435(2) cmt. c (1965). Comment (d) provides, in part, “The court’s judgment as to whether the harm is a highly extraordinary result is made after the event with the full knowledge of all that has happened. This includes those surroundings of which at the time the actor knew nothing but which the course of events discloses to the court.” Id. cmt. d.
Comment (d) continues: [The court] also follows the effects of the actor’s negligence as it passes from phase to phase until it results in harm to the plaintiff. In advance, the actor may not have any reason to expect that any outside force would subsequently operate and change the whole course of events from that which it would have taken but for its intervention. None the less, the court, knowing that such a force has intervened, may see nothing extraordinary either in its intervention or in the effect which it has upon the further development of the injurious results of the defendant’s conduct. This is particularly important where the intervening force is supplied by the act of a human being . . . , which is itself a reaction to the stimulus of a situation for which the actor is responsible. Id. Mrs. LaClair presents testimony from three neurosurgeons that the “application of [the patient care technicians’] force to the body of an elderly, post-operative cervical spine patient . . . had never before been witnessed or known to them in all their years of practice as Neurosurgeons[.]” Br. of Appellant 27 (citing J.A. 190, 222, 229). However, as explained by Comment (d) above, PTSM may have had no reason to expect that Mr. LaClair would be injured by being repositioned in his hospital bed, but the proper way to view the situation is after-the-fact: “knowing that such a 15 force has intervened.” Restatement (Second) Torts § 435 cmt. d (emphasis added). For example, in Henley v. Prince George’s Cnty., the Court of Appeals of Maryland explained the difference between foreseeability when considering the existence of a duty and, as here, causation: “Foreseeability as a factor in the determination of the existence of a duty involves a prospective consideration of the facts existing at the time of the negligent conduct. Foreseeability as an element of proximate cause permits a retrospective consideration of the total facts of the occurrence[.]” 503 A.2d 1333, 1341 (Md. 
1986) (emphases added). Viewing the facts of this case retrospectively, there is “an appropriate nexus” between the November 1 incident and injuries and the November 6 incident and injuries such that it is “at least a permissible conclusion” that Mr. LaClair’s already-injured spine would be further injured by being positioned into a hospital bed. Id. at 1342. Again, we agree with the district court that Suburban’s actions were not “so extraordinary as to bring about a conclusion of separate intervening cause.” J.A. 766. Thus, factor (b) also weighs in favor of Suburban.

c. Considering the cross-referencing set forth in Restatement (Second) Section 442, factors (c), (e), and (f) boil down to the same core inquiries: whether Suburban’s actions were “a normal consequence of a situation created by the actor’s negligent conduct,” and whether the manner in which the intervening act was done was “extraordinarily negligent.” Restatement (Second) Torts §§ 443, 447(c) (1965). [Footnote 4: As to factor (d), the district court dismissed this factor as irrelevant to the inquiry, but it only appeared to analyze the “failure to act” portion of § 442(d). See J.A. 767-68. While this may have been legal error, even assuming factor (d) weighs in favor of Mrs. LaClair, the balance of the factors nonetheless weighs in favor of Suburban. Footnote 5: The comments to factor (c) explain that the “situation created by the actor’s negligence” means any situation that the original tortfeasor’s actions were a substantial factor in bringing about. See Restatement (Second) of Torts §§ 447(c), 442(c) cmt. d.] First, clearly, Mr. LaClair would not have sustained the injuries on November 6 if PTSM’s negligence had not put him in the hospital in the first place. [Footnote 6: Indeed, the LaClairs themselves believed the November 6 incident to be a foreseeable consequence of the November 1 incident. They recognized as much in their initial complaint against PTSM, which sought to hold PTSM liable for “two surgical procedures” at Suburban. J.A. 48 (emphasis added). In addition, on July 12, 2009, the LaClairs answered interrogatories and listed the following as caused by PTSM’s negligence: admission to Suburban from November 1 to November 13, 2007; admission to the rehabilitation center from November 13 to November 30; admission to Georgetown University for surgery from February 5 to February 25, 2008; and home nursing care from April 2008 to July 2009. See id. at 64-78.] And the district court found, “the act, . . . the putting back in bed is not itself extraordinary.” J.A. 767. Mrs. LaClair’s attorney agreed. See id. at 709 (The Court: “[T]he objective anyway was to put this man back in bed. That’s not unforeseeable; correct?” Mr. Regan: “Yes.”). The district court did not err in finding that it is a “normal consequence” (i.e., foreseeable) that a cervical spine patient might sustain additional spinal injuries at the hands of medical professionals. As to the manner in which the negligent act was done, we should consider the injuries and the degree of culpability of the patient care technicians. Even if the patient care technicians were “very rough,” J.A. 362, that does not quite get us to the level of “extraordinarily negligent.” Restatement (Second) of Torts § 447(c). Indeed, Maryland courts have held that original tortfeasors are liable for more significant harm inflicted by intervening negligent medical professionals. See Underwood-Gary, 785 A.2d at 713 (“[An] original tortfeasor is liable for additional harm caused by a treating physician’s improper diagnosis and unnecessary surgery[.] This rule is based on the premise that the negligent actor, by his or her conduct, has placed the plaintiff in a position of danger and should answer for the risks inherent in treatment and rendering aid.” (citing Restatement (Second) of Torts § 457 cmt. c, illus.
1)); Richards v. Freeman, 179 F. Supp. 2d 556, 560-61 (D. Md. 2002) (where physicians negligently performed surgeries that left car accident victim with a right arterial tear in her heart, finding physicians and original defendant driver to be “joint” yet “subsequent tortfeasors” under Maryland’s Uniform Contribution Among Tort-Feasors Act (UCATA)); see also Morgan, 523 A.2d at 1008 (stating that under the UCATA, an original tortfeasor and a negligent health care provider could be considered concurrent tortfeasors concurring in producing the additional harm). Kyte v. McMillion, 259 A.2d 532 (Md. 1969), cited by Mrs. LaClair, does not change this result. There, a young woman was involved in a car wreck due to a negligent driver, and she was taken to the hospital and treated for broken bones. Upon admission to the hospital, a physician ordered a blood transfusion, but the nurse used the wrong type of blood. See id. at 533. As a result of this mistake, the plaintiff suffered “bleak prospects of future pregnancies” and was projected to have “difficult gestation from both an emotional and physical point of view.” Id. The plaintiff filed suit against the hospital first, ultimately reaching an agreement and signing a release as to damages stemming only from the blood transfusion. See id. at 533-34. Later, when the plaintiff filed suit against the allegedly negligent driver, McMillion, the court held that McMillion was not included in the release and thus, the damages awarded to the plaintiff from the hospital should not be credited to McMillion. Id. at 543. Notably, the Maryland Court of Special Appeals has limited this case to its facts as “the Court [in Kyte] was careful to point out that the injuries [broken bones and inability to have children] were peculiarly separate and divisible[.]” Sullivan v. Miller, 337 A.2d 185, 191 (Md. Ct. Spec. App. 1975). Even the Kyte court itself declared, “It should be understood . . .
that the decision announced herein goes no further than the unusual facts and circumstances of this case.” See Kyte, 259 A.2d at 543. [Footnote 7: In this appeal, Suburban also contends that the settlement with PTSM already took into account the damages arising from the November 6 incident, and points to the LaClairs’ answers to interrogatories on July 12, 2009, in the PTSM lawsuit. See supra, note 7. However, while this argument may have some merit, we do not rely on it because it appears that the LaClairs shifted gears in the middle of their litigation with PTSM (and after the interrogatory answers were filed) due to the testimony of Dr. Powers. Moreover, reliance on this basis is unnecessary given the weight of other factors in favor of Suburban.] Therefore, we cannot say that the negligence of the patient care technicians, either in manner or consequence, was abnormal or extraordinary. Thus, factors (c), (e), and (f) weigh in favor of Suburban. 2. Examining the Restatement Section 442 factors does not end our inquiry. The Court of Appeals of Maryland further explains that Section 447 of the Restatement (Second) of Torts illuminates these factors: “The fact that an intervening act of a third person is negligent in itself or is done in a negligent manner does not make it a superseding cause of harm to another which the actor’s negligent conduct is a substantial factor in bringing about, if (a) the actor at the time of his negligent conduct should have realized that a third person might so act, or (b) a reasonable man knowing the situation existing when the act of the third person was done would not regard it as highly extraordinary that the third person had so acted, or (c) the intervening act is a normal consequence of a situation created by the actor’s conduct and the manner in which it is done is not extraordinarily negligent.” Pittway, 973 A.2d at 789 (quoting Restatement (Second) of Torts § 447).
Thus, “a superseding cause arises primarily when unusual and extraordinary independent intervening negligent acts occur that could not have been anticipated by the original tortfeasor.” Id. (internal quotation marks omitted). Therefore, courts should look to both the foreseeability of the harm suffered by the plaintiff, as well as the foreseeability of the intervening act itself. See id. at 792. Any doubt that the Restatement Section 442 factors weigh in favor of Suburban is resolved by an analysis of Section 447: PTSM should have realized that an elderly man injured by a fall from its own exercise equipment would have to go to the hospital, would receive medical care, and may possibly experience negligent medical care there. Mr. LaClair’s ultimate injuries and the manner in which they occurred were not extraordinary, nor were these unfortunate consequences unforeseeable. IV. For the foregoing reasons, the judgment of the district court is AFFIRMED.
Micronutrient Deficiencies Throughout the World
April 24, 2017
Julia Bird

The discovery of vitamins a little over one century ago was incredibly important for the field of nutrition (1). At last, we had found the key to preventing vitamin deficiencies! Knowing about the vitamins meant that medical questions that had puzzled humans for centuries – why does fresh citrus fruit cure scurvy, but a syrup made from the juice does not? – could be reliably answered (2). Despite this grand leap in the understanding of nutrition, however, vitamin and mineral deficiencies still plague us around the globe. While we know in general which micronutrients and how much most people need to stay healthy, making sure that everyone has access to micronutrients is more problematic. Each region in the world has its own nutrition concerns. The problem of “hidden hunger,” when people may get enough calories but the micronutrient content of their diet is lacking, is improving but there is still a long way to go (3). Which micronutrient deficiencies are found throughout the world?

South Asia, East Asia and the Pacific
South Asia, East Asia and the Pacific, comprising countries such as China, Indonesia, Vietnam, India, Bangladesh and Malaysia, have mostly shown a large improvement in micronutrient status in their populations over the past decades (3). General programs to support economic growth have raised the standard of living for many people living in developing Asian countries, and staple food fortification has been able to reduce specific micronutrient deficiencies such as iodine and iron.
Despite these gains, deficiencies in iron and vitamin A are still prevalent in some risk groups: 27 million school age children, 7.5 million pregnant women and 96 million non-pregnant women in the region are affected by anemia, while 13 percent of pre-school children and 21 percent of pregnant women are affected by vitamin A deficiency (4).

Eastern Europe and Central Asia
Low- and middle-income countries in Europe and Central Asia have shown a modest improvement in reducing micronutrient deficiencies (3). In this region, however, vitamins A and D, iodine, iron, zinc, folate and thiamine are marked as micronutrients of special concern (5). The rates of deficiencies vary depending on the country, as local laws, the economic situation, cultural trends and the environment can affect the supply of vitamins and minerals. In particular, iodine deficiency in central Europe is common, and is very much impacted by national policies regarding iodine fortification (6). Seasonal variations in the availability of different foods can affect dietary intakes and nutrient status in Europe. For example, more fruits and vegetables are eaten in the summer and autumn months, leading to a better folate status in the general population in Slovakia (7). Certain vulnerable populations are at greater risk of micronutrient deficiency. These groups include pregnant women and young children, the elderly, people with a low socioeconomic status, and those affected by chronic disease (8-10).

Latin America and the Caribbean
Micronutrient nutrition in Latin America and the Caribbean has improved in the past few years, and rates of deficiency tend to be the lowest of the low- and middle-income countries (3). In fact, all countries in this area of the world reduced their prevalence of hidden hunger in the period 1995-2011 (3). Despite these relative improvements, micronutrient deficiencies have an impact on health for a significant proportion of people in this area of the world.
Iron deficiency anemia and zinc deficiency remain a problem for women of childbearing age and children aged under 6 years (11, 12). While vitamin B12 deficiency is not monitored as well as other micronutrients, an incidence greater than 10 percent is reported for vulnerable groups in some countries, such as women aged 13 to 49 in Colombia, and children aged 6 months to 5 years in Guatemala. Rates of vitamin and mineral deficiencies can vary greatly between countries. For example, vitamin A deficiency in young children has been virtually eradicated in Guatemala and Nicaragua, yet is a severe public health problem in Colombia, Mexico, and Haiti (13). One micronutrient success story has been the use of folic acid fortification to improve folate status and reduce the occurrence of neural tube defects in Latin America and the Caribbean. The introduction of mandatory folic acid fortification in almost all countries has led to a dramatic reduction in the percentage of the population with folate deficiency (14). In turn, surveillance of neural tube defects shows a decrease of one- to two-thirds compared to the pre-fortification period (15). Carefully designed interventions, such as staple food fortification, that focus on vulnerable groups are needed to further improve micronutrient nutrition in Latin America and the Caribbean (12).

Middle East and North Africa
The nutrition situation in the Middle East and North Africa has improved substantially in the last decades. Many countries are undergoing an advanced nutrition transition, whereby there is a modest reduction in micronutrient malnutrition while rates of overweight and obesity are rapidly increasing (16). Unfortunately, the complex security situation in several countries (Afghanistan, Libya, Somalia, Sudan, and Syria) has further increased food insecurity, leading to widespread acute and chronic undernutrition, especially in young children and pregnant women (16).
The vitamins and minerals most often found to be deficient in nutritional surveys in the region include calcium, iodine, iron, vitamin A, vitamin D, and folate (16). Food fortification programs in the area are patchy, and while many countries have dietary guidelines for individuals that promote a healthful diet, their uptake has been limited (16). Anemia is the most prevalent micronutrient deficiency in the Middle East, and can affect more than half of the population in some countries. Vitamin D deficiency has been reported for many countries despite plentiful sunshine; this relates to the scarcity of dietary sources and to the wearing of traditional clothing that blocks sunlight from reaching the skin. Several countries including Jordan, Egypt, the United Arab Emirates, Oman and Kuwait have mandatory wheat flour fortification policies in place. All these countries fortify with folic acid and iron, and some include zinc and other vitamins as well. However, rice and maize are also staple foods in these countries and are not fortified, hence micronutrient deficiencies remain widespread despite the existence of fortification.

West, Central and Sub-Saharan Africa
The majority of countries showing an increase in hidden hunger over the past years were located in West, Central and Southern Africa. These results do not bode well for the social and economic development of countries affected by a high prevalence of undernutrition (3). The causes of micronutrient deficiencies in Africa are multi-factorial and relate to poor economic development, unstable governments that neglect critical investments in education, health and infrastructure, and food insecurity related to harsh agricultural environments (17). The high prevalence of vitamin and mineral deficits, such as iron deficiency anemia, zinc deficiency and vitamin A deficiency, will only be reduced when the underlying causes of poverty are alleviated.
In some countries in southern Africa, commitment to improving the nutritional status of the population has shown positive results. For example, a mandatory fortification program for maize and wheat flour in South Africa has been effective in improving vitamin and mineral intakes (18, 19). There is still room for improvement in South Africa, however; it is one of 48 countries worldwide prioritized as having an “unfinished fortification” program (20).

High-Income Countries
While developing countries bear the greatest burden of micronutrient deficiencies around the world, they still exist in high-income countries. The considerable resources of high-income countries mean that the micronutrient status of their populations is studied in greater detail than in the rest of the world, giving a better estimate of the true rates. Comprehensive, representative analyses of U.S. populations find that 5 percent or more are affected by deficiencies in vitamins B6, C and D, and almost 10 percent of women of child-bearing age are affected by low body iron (21). In Europe, international comparisons find that at least half of certain population groups do not meet recommendations. Intakes of thiamine in Italian women, B6 in women from many countries, and vitamin C in Scandinavian men and male smokers are clearly too low (22, 23). Also, intakes of both vitamin D and E are low for most people living in Northern, Western and Southern Europe (22, 23). A lack of education about nutrient-dense diets and poor food choices are major contributors to micronutrient deficiencies in high-income countries.

References
Food and Agriculture Organization of the United Nations. Addressing social and economic burden of malnutrition through nutrition-sensitive agricultural and food policies in the region of Europe and Central Asia. 2015.
Papathakis PC, Pearson KE. Food fortification improves the intake of all fortified nutrients, but fails to meet the estimated dietary requirements for vitamins A and B6, riboflavin and zinc, in lactating South African women. Public Health Nutr 2012;15(10):1810-7. doi: 10.1017/S1368980012003072
List of spouses of Prime Ministers of Japan

The spouse of the Prime Minister of Japan is the wife or husband of the Prime Minister of Japan.

Role and duties
The role of the Prime Ministerial Consort is not an official office and as such carries no salary or official duties.

Spouse of the Prime Ministers of the Empire of Japan (1885–1947)
Spouse of the Prime Ministers during the Meiji period (1885–1912): under the Meiji Emperor
Spouse of the Prime Ministers during the Taishō period (1912–1926): under the Taishō Emperor
Spouse of the Prime Ministers during the Shōwa period (1926–1947): under the Shōwa Emperor

Spouse of the Prime Ministers of the State of Japan (1947–present)
Spouse of the Prime Ministers during the Shōwa period (1947–1989): under the Shōwa Emperor
Spouse of the Prime Ministers during the Heisei period (1989–present): under Emperor Akihito

References
Preprint hep-ph/0006089

Improved Conformal Mapping of the Borel Plane

U. D. Jentschura and G. Soff
Institut für Theoretische Physik, TU Dresden, 01062 Dresden, Germany
Email: jentschura@physik.tu-dresden.de, soff@physik.tu-dresden.de

The conformal mapping of the Borel plane can be utilized for the analytic continuation of the Borel transform to the entire positive real semi-axis and is thus helpful in the resummation of divergent perturbation series in quantum field theory. We observe that the convergence can be accelerated by the application of Padé approximants to the Borel transform expressed as a function of the conformal variable, i.e. by a combination of the analytic continuation via conformal mapping and a subsequent numerical approximation by rational approximants. The method is primarily useful in those cases where the leading (but not sub-leading) large-order asymptotics of the perturbative coefficients are known.

PACS: 11.15.Bt, 11.10.Jj (General properties of perturbation theory; Asymptotic problems and properties)

The problem of the resummation of quantum field theoretic series is of obvious importance in view of the divergent, asymptotic character of the perturbative expansions [@LGZJ1990; @ZJ1996; @Fi1997]. The convergence can be accelerated when additional information is available about large-order asymptotics of the perturbative coefficients [@JeWeSo2000]. In the example cases discussed in [@JeWeSo2000], the location of several poles in the Borel plane, known from the leading and next-to-leading large-order asymptotics of the perturbative coefficients, is utilized in order to construct specialized resummation prescriptions. Here, we consider a particular perturbation series, investigated in [@BrKr1999], where only the [*leading*]{} large-order asymptotics of the perturbative coefficients are known to sufficient accuracy, and the subleading asymptotics have – not yet – been determined.
Therefore, the location of only a single pole – the one closest to the origin – in the Borel plane is available. In this case, as discussed in [@CaFi1999; @CaFi2000], the (asymptotically optimal) conformal mapping of the Borel plane is an attractive method for the analytic continuation of the Borel transform beyond its circle of convergence and, to a certain extent, for accelerating the convergence of the Borel transforms. Here, we argue that the convergence of the transformation can be accelerated further when the Borel transforms, expressed as a function of the conformal variable which mediates the analytic continuation, are additionally accelerated by the application of Padé approximants. First, we discuss, in general terms, the construction of the improved conformal mapping of the Borel plane which is used for the resummation of the perturbation series defined in Eqs. (\[gammaPhi4\]) and (\[gammaYukawa\]) below. The method uses as input data the numerical values of a finite number of perturbative coefficients and the leading large-order asymptotics of the perturbative coefficients, which can, under appropriate circumstances, be derived from an empirical investigation of a finite number of coefficients, as was done in [@BrKr1999]. We start from an asymptotic, divergent perturbative expansion of a physical observable $f(g)$ in powers of a coupling parameter $g$, $$\label{power} f(g) \sim \sum_{n=0}^{\infty} c_n\,g^n\,,$$ and we consider the generalized Borel transform of the $(1,\lambda)$-type (see Eq.
(4) in [@JeWeSo2000]), $$\label{BorelTrans} f^{(\lambda)}_{\rm B}(u) \; \equiv \; f^{(1,\lambda)}_{\rm B}(u) \; = \; \sum_{n=0}^{\infty} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The full physical solution can be reconstructed from the divergent series (\[power\]) by evaluating the Laplace-Borel integral, which is defined as $$\label{BorelIntegral} f(g) = \frac{1}{g^\lambda} \, \int_0^\infty {\rm d}u \,u^{\lambda - 1} \, \exp\bigl(-u/g\bigr)\, f^{(\lambda)}_{\rm B}(u)\,.$$ The integration variable $u$ is referred to as the Borel variable. The integration is carried out either along the real axis or infinitesimally above or below it (if Padé approximants are used for the analytic continuation, modified integration contours have been proposed [@Je2000]). The most prominent issue in the theory of the Borel resummation is the construction of an analytic continuation for the Borel transform (\[BorelTrans\]) from a finite-order partial sum of the perturbation series (\[power\]), which we denote by $$\label{PartialSum} f^{(\lambda),m}_{\rm B}(u) = \sum_{n=0}^{m} \frac{c_n}{\Gamma(n+\lambda)}\,u^n\,.$$ The analytic continuation can be accomplished using the direct application of Padé approximants to the partial sums of the Borel transform $f^{(\lambda),m}_{\rm B}(u)$ [@BrKr1999; @Je2000; @Raczka1991; @Pi1999] or by a conformal mapping [@SeZJ1979; @LGZJ1983; @GuKoSu1995; @CaFi1999; @CaFi2000]. We now assume that the [*leading*]{} large-order asymptotics of the perturbative coefficients $c_n$ defined in Eq. (\[power\]) is factorial, and that the coefficients display an alternating sign pattern. This indicates the existence of a singularity (branch point) along the negative real axis corresponding to the leading large-order growth of the perturbative coefficients, which we assume to be at $u=-1$. 
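As an aside (our illustration, not part of the original letter), the interplay of Eqs. (\[power\])–(\[PartialSum\]) can be made concrete with a small numerical sketch. For a model series with factorially growing, sign-alternating coefficients $c_n = (-1)^n\,\Gamma(n+2)$, the $(1,2)$-type Borel transform collapses to the geometric series $\sum_n (-u)^n = 1/(1+u)$, with the expected branch point at $u=-1$:

```python
from math import gamma

# Model coefficients with leading factorial growth and alternating signs,
# c_n = (-1)^n * Gamma(n + 2), mimicking the behaviour assumed in the text.
lam = 2  # the parameter lambda of the (1, lambda)-type Borel transform
c = [(-1) ** n * gamma(n + 2) for n in range(25)]

# Coefficients of the generalized Borel transform, c_n / Gamma(n + lambda).
# For this model they reduce to (-1)^n, a convergent geometric series.
b = [cn / gamma(n + lam) for n, cn in enumerate(c)]

# Partial sum of the Borel transform at u = 0.5, compared with the closed
# form 1/(1 + u); the original series in g, by contrast, diverges for any g.
u = 0.5
partial = sum(bn * u**n for n, bn in enumerate(b))
print(partial, 1.0 / (1.0 + u))  # both close to 2/3
```

The rescaling $u \to |u_0|\,u$ discussed below serves precisely to bring a general series into this normalized form with the leading singularity at $u=-1$.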
For Borel transforms which have only a single cut in the complex plane which extends from $u=-1$ to $u=-\infty$, the following conformal mapping has been recommended as optimal [@CaFi1999], $$\label{DefZ} z = z(u) = \frac{\sqrt{1+u}-1}{\sqrt{1+u}+1}\,.$$ Here, $z$ is referred to as the conformal variable. The cut Borel plane is mapped onto the unit circle by the conformal mapping (\[DefZ\]). We briefly mention that a large variety of similar conformal mappings have been discussed in the literature. It is worth noting that conformal mappings which are adopted for doubly-cut Borel planes have been discussed in [@CaFi1999; @CaFi2000]. We do not claim here that it would be impossible to construct conformal mappings which reflect the position of more than two renormalon poles or branch points in the complex plane. However, we stress that such a conformal mapping is likely to have a more complicated mathematical structure than, for example, the mapping defined in Eq. (27) in [@CaFi1999]. Using the alternative methods described in [@JeWeSo2000], poles (branch points) in the Borel plane corresponding to the subleading asymptotics can be incorporated easily provided their position in the Borel plane is known. In a concrete example (see Table 1 in [@JeWeSo2000]), 14 poles in the Borel plane have been fixed in the denominator of the Padé approximant constructed according to Eqs. (53)–(55) in [@JeWeSo2000], and accelerated convergence of the transforms is observed. In contrast to the investigation [@JeWeSo2000], we assume here that only the [*leading*]{} large-order factorial asymptotics of the perturbative coefficients are known. We continue with the discussion of the conformal mapping (\[DefZ\]). It should be noted that for series whose leading singularity in the Borel plane is at $u = -u_0$ with $u_0 > 0$, an appropriate rescaling of the Borel variable $u \to |u_0|\, u$ is necessary on the right-hand side of Eq. (\[BorelIntegral\]).
Then, $f^{(\lambda)}_{\rm B}(|u_0|\,u)$ as a function of $u$ has its leading singularity at $u = -1$ (see also Eq. (41.57) in [@ZJ1996]). The Borel integration variable $u$ can be expressed as a function of $z$ as follows, $$\label{UasFuncOfZ} u(z) = \frac{4 \, z}{(z-1)^2}\,.$$ The $m$th partial sum of the Borel transform (\[PartialSum\]) can be rewritten, upon expansion of $u$ in powers of $z$, as $$\label{PartialSumConformal} f^{(\lambda),m}_{\rm B}(u) = f^{(\lambda),m}_{\rm B}\bigl(u(z)\bigr) = \sum_{n=0}^{m} C_n\,z^n + {\cal O}(z^{m+1})\,,$$ where the coefficients $C_n$ as a function of the $c_n$ are uniquely determined (see, e.g., Eqs. (36) and (37) of [@CaFi1999]). We define the partial sum of the Borel transform, expressed as a function of the conformal variable $z$, as $$f'^{(\lambda),m}_{\rm B}(z) = \sum_{n=0}^{m} C_n\,z^n\,.$$ In a previous investigation [@CaFi1999], Caprini and Fischer evaluate the following transforms, $$\label{CaFiTrans} {\cal T}'_m f(g) = \frac{1}{g^\lambda}\, \int_0^\infty {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\, f'^{(\lambda),m}_{\rm B}(z(u))\,.$$ Caprini and Fischer [@CaFi1999] observe the apparent numerical convergence with increasing $m$. The limit as $m\to\infty$, provided it exists, is then assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}'_m f(g)\,.$$ We do not consider the question of the existence of this limit here (for an outline of questions related to these issues we refer to [@CaFi2000]). In the absence of further information on the analyticity domain of the Borel transform (\[BorelTrans\]), we cannot necessarily conclude that $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ as a function of $z$ is analytic inside the unit circle of the complex $z$-plane, or that, for example, the conditions of Theorem 5.2.1 of [@BaGr1996] are fulfilled. Therefore, we propose a modification of the transforms (\[CaFiTrans\]).
In particular, we advocate the evaluation of (lower-diagonal) Padé approximants [@BaGr1996; @BeOr1978] to the function $f'^{(\lambda),m}_{\rm B}(z)$, expressed as a function of $z$, $$\label{ConformalPade} f''^{(\lambda),m}_{\rm B}(z) = \bigg[ [\mkern - 2.5 mu [m/2] \mkern - 2.5 mu ] \bigg/ [\mkern - 2.5 mu [(m+1)/2] \mkern - 2.5 mu ] \bigg]_{f'^{(\lambda),m}_{\rm B}}\!\!\!\left(z\right)\,.$$ We define the following transforms, $$\label{AccelTrans} {\cal T}''_m f(g) = \frac{1}{g^\lambda}\, \int_{C_j} {\rm d}u \,u^{\lambda - 1} \,\exp\bigl(-u/g\bigr)\, f''^{(\lambda),m}_{\rm B}\bigl(z(u)\bigr)$$ where the integration contours $C_j$ ($j=-1,0,1$) have been defined in [@Je2000]. These integration contours have been shown to provide the physically correct analytic continuation of resummed perturbation series for those cases where the evaluation of the standard Laplace-Borel integral (\[BorelIntegral\]) is impossible due to an insufficient analyticity domain of the integrand (possibly due to multiple branch cuts) or due to spurious singularities in view of the finite order of the Padé approximations defined in (\[ConformalPade\]). We should mention potential complications due to multi-instanton contributions, as discussed for example in Ch. 43 of [@ZJ1996] (these are not encountered in the current investigation). In this letter, we use exclusively the contour $C_0$, which is defined as the half sum of the contours $C_{-1}$ and $C_{+1}$ displayed in Fig. 1 in [@Je2000].
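To make Eqs. (\[PartialSumConformal\]) and (\[ConformalPade\]) concrete, the following sketch (again our illustration, not code from the letter) re-expands a truncated Borel transform in powers of the conformal variable $z$ via Eq. (\[UasFuncOfZ\]) and then forms a Padé approximant in $z$. The model transform is $1/(1+u)$, for which the re-expansion is known in closed form, $C_0 = 1$ and $C_n = (-1)^n\,4n$, because $1/(1+u(z)) = (1-z)^2/(1+z)^2$; the $[2/2]$ Padé approximant in $z$ then recovers this rational function exactly from only five coefficients:

```python
import numpy as np

def conformal_coeffs(b, order):
    """Re-expand sum_n b[n] u^n in powers of z, with u(z) = 4 z / (1 - z)^2."""
    u_series = np.zeros(order + 1)
    u_series[1:] = 4.0 * np.arange(1, order + 1)  # 4z/(1-z)^2 = 4*sum_k k z^k
    C = np.zeros(order + 1)
    power = np.zeros(order + 1)
    power[0] = 1.0                                # running series of u(z)^n
    for bn in b:
        C += bn * power
        power = np.convolve(power, u_series)[: order + 1]
    return C

def pade(c, L, M):
    """[L/M] Pade approximant to sum_n c[n] z^n; returns (numerator, denominator)."""
    A = np.array([[c[L + 1 + i - j] if L + 1 + i - j >= 0 else 0.0
                   for j in range(1, M + 1)] for i in range(M)])
    rhs = -np.array(c[L + 1: L + M + 1], dtype=float)
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))   # denominator, q[0] = 1
    p = np.convolve(np.asarray(c[: L + M + 1], dtype=float), q)[: L + 1]
    return p, q

# Borel transform 1/(1+u) of the model series, re-expanded in z.
b = [(-1.0) ** n for n in range(12)]
C = conformal_coeffs(b, 6)
print(C)        # [1, -4, 8, -12, 16, -20, 24]

p, q = pade(C, 2, 2)
print(p, q)     # (1 - z)^2 and (1 + z)^2: the [2/2] Pade is exact here
```

For a generic (non-rational) Borel transform the Padé step does not terminate exactly, but the same construction applies term by term.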
At increasing $m$, the limit as $m\to\infty$, provided it exists, is then again assumed to represent the complete, physically relevant solution, $$f(g) = \lim_{m\to\infty} {\cal T}''_m f(g)\,.$$ Because we take advantage of the special integration contours $C_j$, analyticity of the Borel transform $f^{(\lambda)}_{\rm B}{\mathbf (}u(z){\mathbf )}$ inside the unit circle of the complex $z$-plane is not required, and additional acceleration of the convergence is mediated by employing Padé approximants in the conformal variable $z$.

Table \[table1\]. Transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$, evaluated according to Eq. (\[AccelTrans\]), for $m = 28, 29, 30$:

  $m$    $g = 5.0$         $g = 5.5$         $g = 6.0$         $g = 10.0$
  28     $-0.501~565~232$  $-0.538~352~234$  $-0.573~969~740$  $-0.827~506~173$
  29     $-0.501~565~232$  $-0.538~352~233$  $-0.573~969~738$  $-0.827~506~143$
  30     $-0.501~565~231$  $-0.538~352~233$  $-0.573~969~738$  $-0.827~506~136$

Table \[table2\]. Transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$, evaluated according to Eq. (\[AccelTrans\]), for $m = 28, 29, 30$:

  $m$    $g = 5.0$         $g = 5.5$         $g = 6.0$         $g = 5.69932\dots$
  28     $-1.669~071~213$  $-1.800~550~588$  $-1.928~740~624$  $-1.852~027~809$
  29     $-1.669~071~214$  $-1.800~550~589$  $-1.928~740~626$  $-1.852~027~810$
  30     $-1.669~071~214$  $-1.800~550~589$  $-1.928~740~625$  $-1.852~027~810$

We consider the resummation of two particular perturbation series discussed in [@BrKr1999] for the anomalous dimension $\gamma$ function of the $\phi^3$ theory in 6 dimensions and the Yukawa coupling in
4 dimensions. The perturbation series for the $\phi^3$ theory is given in Eq. (16) in [@BrKr1999], $$\label{gammaPhi4} \gamma_{\rm hopf}(g) \sim \sum_{n=1}^{\infty} (-1)^n \, \frac{G_n}{6^{2 n - 1}} \, g^n\,,$$ where the coefficients $G_n$ are given in Table 1 in [@BrKr1999] for $n=1,\dots,30$ (the $G_n$ are real and positive). We denote the coupling parameter $a$ used in [@BrKr1999] as $g$; this is done in order to ensure compatibility with the general power series given in Eq. (\[power\]). Empirically, Broadhurst and Kreimer derive the large-order asymptotics $$G_n \sim {\rm const.} \; \times \; 12^{n-1} \, \Gamma(n+2)\,, \qquad n\to\infty\,,$$ by investigating the explicit numerical values of the coefficients $G_1,\dots,G_{30}$. The leading asymptotics of the perturbative coefficients $c_n$ are therefore (up to a constant prefactor) $$\label{LeadingPhi4} c_n \sim (-1)^n \frac{\Gamma(n+2)}{3^n}\,, \qquad n\to\infty\,.$$ This implies that the $\lambda$-parameter in the Borel transform (\[BorelTrans\]) should be set to $\lambda=2$ (see also the notion of an asymptotically optimized Borel transform discussed in [@JeWeSo2000]). In view of Eq. (\[LeadingPhi4\]), the pole closest to the origin of the Borel transform (\[BorelTrans\]) is expected at $$u = u^{\rm hopf}_0 = -3\,,$$ and a rescaling of the Borel variable $u \to 3\,u$ in Eq. (\[BorelIntegral\]) then leads to an expression to which the method defined in Eqs. (\[power\])–(\[AccelTrans\]) can be applied directly. For the Yukawa coupling, the $\gamma$-function reads $$\label{gammaYukawa} {\tilde \gamma}_{\rm hopf}(g) \sim \sum_{n=1}^{\infty} (-1)^n \, \frac{{\tilde G}_n}{2^{2 n - 1}} \, g^n\,,$$ where the ${\tilde G}_n$ are given in Table 2 in [@BrKr1999] for $n=1,\dots,30$. Empirically, i.e. 
from an investigation of the numerical values of ${\tilde G}_1,\dots,{\tilde G}_{30}$, the following factorial growth in large order is derived [@BrKr1999], $${\tilde G}_n \sim {\rm const.'} \; \times \; 2^{n-1} \, \Gamma(n+1/2)\,, \qquad n\to\infty\,.$$ This leads to the following asymptotics for the perturbative coefficients (up to a constant prefactor), $$c_n \sim (-1)^n \frac{\Gamma(n+1/2)}{2^n} \,, \qquad n\to\infty\,.$$ This implies that an asymptotically optimal choice [@JeWeSo2000] for the $\lambda$-parameter in (\[BorelTrans\]) is $\lambda=1/2$. The first pole of the Borel transform (\[BorelTrans\]) is therefore expected at $$u = {\tilde u}^{\rm hopf}_0 = -2\,.$$ A rescaling of the Borel variable according to $u \to 2\,u$ in (\[BorelIntegral\]) enables the application of the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]). In Table \[table1\], numerical values for the transforms ${\cal T}''_m \gamma_{\rm hopf}(g)$ are given, which have been evaluated according to Eq. (\[AccelTrans\]). The transformation order is in the range $m=28~,29,~30$, and we consider coupling parameters $g=5.0,~5.5,~6.0$ and $g=10.0$. The numerical values of the transforms display apparent convergence to about 9 significant figures for $g \leq 6.0$ and to about 7 figures for $g=10.0$. In Table \[table2\], numerical values for the transforms ${\cal T}''_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[AccelTrans\]) are shown in the range $m=28,~29,~30$ for (large) coupling strengths $g=5.0,~5.5,~6.0$. Additionally, the value $g = 30^2/(4\,\pi)^2 = 5.69932\dots$ is considered as a special case (as it has been done in [@BrKr1999]). Again, the numerical values of the transforms display apparent convergence to about 9 significant figures. At large coupling $g = 12.0$, the apparent convergence of the transforms suggests the following values: $\gamma_{\rm hopf}(12.0) = -0.939\,114\,3(2)$ and ${\tilde \gamma}_{\rm hopf}(12.0) = -3.287\,176\,9(2)$. 
The numerical results for the Yukawa case, i.e. for the function ${\tilde \gamma}_{\rm hopf}$, have recently been confirmed by an improved analytic, nonperturbative investigation [@BrKr2000prep] which extends the perturbative calculation [@BrKr1999]. We note that the transforms ${\cal T}'_m \gamma_{\rm hopf}(g)$ and ${\cal T}'_m {\tilde \gamma}_{\rm hopf}(g)$ calculated according to Eq. (\[CaFiTrans\]), i.e. by the unmodified conformal mapping, typically exhibit apparent convergence to 5–6 significant figures in the transformation order $m=28,~29,~30$ and at large coupling $g \geq 5$. Specifically, the numerical values for $g=5.0$ are $$\begin{aligned} {\cal T}'_{28} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~567~294\,, \nonumber\\[2ex] {\cal T}'_{29} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~564~509\,, \nonumber\\[2ex] {\cal T}'_{30} \gamma_{\rm hopf}(g = 5.0) \; &=& \; -0.501~563~626\,. \nonumber\end{aligned}$$ These results, when compared to the data in Table \[table1\], exemplify the acceleration of the convergence by the additional Padé approximation of the Borel transform [*expressed as a function of the conformal variable*]{} \[see Eq. (\[ConformalPade\])\]. It is not claimed here that the resummation method defined in Eqs. (\[power\])–(\[AccelTrans\]) necessarily provides the fastest possible rate of convergence for the perturbation series defined in Eq. (\[gammaPhi4\]) and (\[gammaYukawa\]). Further improvements should be feasible, especially if particular properties of the input series are known and exploited (see in part the methods described in [@JeWeSo2000]). We also note possible improvements based on a large-coupling expansion [@We1996d], in particular for excessively large values of the coupling parameter $g$, or methods based on order-dependent mappings (see [@SeZJ1979; @LGZJ1983] or the discussion following Eq. (41.67) in [@ZJ1996]). 
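Padé approximants applied directly to the partial sums of the Borel transform constitute the simplest variant of such resummation prescriptions. The following self-contained sketch, which is purely illustrative and not part of the original calculation, applies this Padé–Borel prescription to a toy series whose coefficients $c_n = (-1)^n\,\Gamma(n+1/2)$ reproduce the quoted Yukawa-type large-order growth up to the geometric factor $2^{-n}$; the $[4/5]$ Padé order, the choice $\lambda=1$ (rather than the asymptotically optimal $\lambda=1/2$), and the quadrature are ad hoc, and the conformal-mapping improvement central to the method described above is not reproduced here.

```python
from fractions import Fraction
import math

# Toy model: c_n = (-1)^n Gamma(n + 1/2).  With the simplest Borel
# transform b_n = c_n / n!  (lambda = 1) one has, exactly,
#   B(t) = sum_n b_n t^n = sqrt(pi) * (1 + t)^(-1/2),
# with a branch point at t = -1.
N = 10  # number of "known" perturbative coefficients c_0 .. c_9

# Rational parts: Gamma(n + 1/2) = sqrt(pi) * r_n with r_0 = 1 and
# r_n = r_{n-1} * (2n - 1)/2.  Fractions keep the Pade step exact.
r = [Fraction(1)]
for n in range(1, N):
    r.append(r[-1] * Fraction(2 * n - 1, 2))

h = [(-1) ** n * r[n] / math.factorial(n) for n in range(N)]  # B(t)/sqrt(pi)

def pade(c, L, M):
    """Exact [L/M] Pade approximant of a truncated Taylor series c."""
    # Denominator: q_0 = 1 and sum_{j=0}^{M} q_j c_{L+k-j} = 0 for k = 1..M.
    A = [[c[L + k - j] if 0 <= L + k - j < len(c) else Fraction(0)
          for j in range(1, M + 1)] for k in range(1, M + 1)]
    rhs = [-c[L + k] for k in range(1, M + 1)]
    for col in range(M):                      # Gaussian elimination, exact
        piv = next(i for i in range(col, M) if A[i][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for i in range(col + 1, M):
            f = A[i][col] / A[col][col]
            rhs[i] -= f * rhs[col]
            for j in range(col, M):
                A[i][j] -= f * A[col][j]
    q = [Fraction(0)] * M
    for i in reversed(range(M)):              # back substitution
        s = rhs[i] - sum(A[i][j] * q[j] for j in range(i + 1, M))
        q[i] = s / A[i][i]
    q = [Fraction(1)] + q
    p = [sum(q[j] * c[i - j] for j in range(min(i, M) + 1))
         for i in range(L + 1)]
    return p, q

p, q = pade(h, 4, 5)

def borel_pade(t):
    num = sum(float(a) * t ** i for i, a in enumerate(p))
    den = sum(float(a) * t ** i for i, a in enumerate(q))
    return math.sqrt(math.pi) * num / den

def borel_exact(t):
    return math.sqrt(math.pi) * (1.0 + t) ** -0.5

def laplace(f, g, n=20000):
    """int_0^inf e^{-t} f(g t) dt via s = 1 - e^{-t} and a midpoint rule."""
    total = 0.0
    for k in range(n):
        t = -math.log(1.0 - (k + 0.5) / n)
        total += f(g * t)
    return total / n

g = 1.0  # the input series itself is hopeless here: its terms grow factorially
resummed = laplace(borel_pade, g)
reference = laplace(borel_exact, g)
```

For this toy model the Padé–Borel value agrees with the exact Borel integral to several digits already at $g=1$, while the partial sums of the input series diverge immediately; this is the qualitative behaviour exploited, in refined form, in Tables \[table1\] and \[table2\].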
The conformal mapping [@CaFi1999; @CaFi2000] is capable of accomplishing the analytic continuation of the Borel transform (\[BorelTrans\]) beyond the circle of convergence. Padé approximants, applied directly to the partial sums of the Borel transform (\[PartialSum\]), provide an alternative to this method [@Raczka1991; @Pi1999; @BrKr1999; @Je2000; @JeWeSo2000]. Improved rates of convergence can be achieved when the convergence of the transforms obtained by conformal mapping in Eq. (\[PartialSumConformal\]) is accelerated by evaluating Padé approximants as in Eq. (\[ConformalPade\]), and conditions on analyticity domains can be relaxed in a favorable way when these methods are combined with the integration contours from Ref. [@Je2000]. Numerical results for the resummed values of the perturbation series (\[gammaPhi4\]) and (\[gammaYukawa\]) are provided in the Tables \[table1\] and \[table2\]. By the improved conformal mapping and other optimized resummation techniques (see, e.g., the methods introduced in Ref. [@JeWeSo2000]) the applicability of perturbative (small-coupling) expansions can be generalized to the regime of large coupling and still lead to results of relatively high accuracy.\ U.J. acknowledges helpful conversations with E. J. Weniger, I. Nándori, S. Roether and P. J. Mohr. G.S. acknowledges continued support from BMBF, DFG and GSI. [10]{} J. C. LeGuillou and J. Zinn-Justin, [*Large-Order Behaviour of Perturbation Theory*]{} (North-Holland, Amsterdam, 1990). J. Zinn-Justin, [*Quantum Field Theory and Critical Phenomena*]{}, 3rd ed. (Clarendon Press, Oxford, 1996). J. Fischer, Int. J. Mod. Phys. A [**12**]{}, 3625 (1997). U. D. Jentschura, E. Weniger, and G. Soff, Asymptotic Improvement of Resummation and Perturbative Predictions, Los Alamos preprint hep-ph/0005198, submitted. D. Broadhurst and D. Kreimer, Phys. Lett. B [**475**]{}, 63 (2000). I. Caprini and J. Fischer, Phys. Rev. D [**60**]{}, 054014 (1999). I. Caprini and J. 
Fischer, Convergence of the expansion of the Laplace-Borel integral in perturbative QCD improved by conformal mapping, Los Alamos preprint hep-ph/0002016. U. D. Jentschura, Resummation of Nonalternating Divergent Perturbative Expansions, Los Alamos preprint hep-ph/0001135, Phys. Rev. D (in press). P. A. Raczka, Phys. Rev. D [**43**]{}, R9 (1991). M. Pindor, Padé Approximants and Borel Summation for QCD Perturbation Series, Los Alamos preprint hep-th/9903151. R. Seznec and J. Zinn-Justin, J. Math. Phys. [**20**]{}, 1398 (1979). J. C. LeGuillou and J. Zinn-Justin, Ann. Phys. (N. Y.) [**147**]{}, 57 (1983). R. Guida, K. Konishi, and H. Suzuki, Ann. Phys. (N. Y.) [**241**]{}, 152 (1995). D. J. Broadhurst, P. A. Baikov, V. A. Ilyin, J. Fleischer, O. V. Tarasov, and V. A. Smirnov, Phys. Lett. B [**329**]{}, 103 (1994). G. Altarelli, P. Nason, and G. Ridolfi, Z. Phys. C [**68**]{}, 257 (1995). D. E. Soper and L. R. Surguladze, Phys. Rev. D [**54**]{}, 4566 (1996). K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Phys. Lett. B [**371**]{}, 93 (1996). K. G. Chetyrkin, J. H. Kühn, and M. Steinhauser, Nucl. Phys. B [**482**]{}, 213 (1996). K. G. Chetyrkin, R. Harlander, and M. Steinhauser, Phys. Rev. D [**58**]{}, 014012 (1998). G. A. Baker and P. Graves-Morris, [*Padé approximants*]{}, 2nd ed. (Cambridge University Press, Cambridge, 1996). C. M. Bender and S. A. Orszag, [*Advanced Mathematical Methods for Scientists and Engineers*]{} (McGraw-Hill, New York, NY, 1978). D. Broadhurst and D. Kreimer, in preparation (2000). E. J. Weniger, Phys. Rev. Lett. [**77**]{}, 2859 (1996).
GT300 The GT300 may refer to: A Super GT car category The GT300 family of graphics processors from Nvidia
Q: Is false considered to be nil in rspec? There is field name active in customer table. It validates as below in customer.rb: validates :active, :presence => true Here is the rspec code to test a field short_name: it "should be OK with duplicate short_name in different active status" do customer = Factory(:customer, :active => false, :short_name => "test user") customer1 = Factory.build(:customer, :active => true, :short_name => "Test user") customer1.should be_valid end Validation for short_name is: validates :short_name, :presence => true, :uniqueness => { :scope => :active } The above code causes the error: 1) Customer data integrity should be OK with duplicate short_name in different active status Failure/Error: customer = Factory(:customer, :active => false, :short_name => "test user") ActiveRecord::RecordInvalid: Validation failed: Active can't be blank # ./spec/models/customer_spec.rb:62:in `block (3 levels) in <top (required)>' It seems that the false value assigned to field active was considered to be blank or nil by rspec and failed the data validation check. Tried to use 0 for false and it causes the same error. The rspec case passes if removing the validation for field active. A: This is not a rspec issue, it's related to Rails' validation. I suppose your active field is a boolean and, to quote the validates_presence_of documentation: If you want to validate the presence of a boolean field (where the real values are true and false), you will want to use validates_inclusion_of :field_name, :in => [true, false] This is due to the way Object#blank? handles boolean values. false.blank? # => true So simply change your validator to something like the following (assuming you want the "sexy" syntax) and it should work: validates :active, :inclusion => [true, false]
# coding=utf-8
'''
Created on 2015-11-4

@author: zhangtiande
'''
from django.shortcuts import HttpResponse
from teamvision.project.models import Project, Tag
from django.contrib.auth.models import User
from business.ucenter.account_service import AccountService


class VM_AdminUser(object):
    '''View model wrapping a Django ``User`` for the admin user pages.'''

    def __init__(self, user, is_create=False):
        self.user = user
        self.is_create = is_create
        self.admin = ""
        self.manager = ""
        self.default_group = ""
        self.set_user_group()

    def user_active(self):
        # CSS classes for the "active" checkbox state.
        result = "finished-check fa-check-square"
        if not self.user.is_active:
            result = "fa-square-o unfinished-check"
        return result

    def user_name(self):
        return self.user.email

    def user_full_name(self):
        result = self.user.username
        if self.user.last_name and self.user.first_name:
            result = self.user.last_name + self.user.first_name
        return result

    def user_avatar(self):
        result = "/static/global/images/fruit-avatar/Fruit-1.png"
        if self.user.extend_info:
            result = AccountService.get_avatar_url(self.user)
        return result

    def user_groups(self):
        return self.user.groups.all()

    def form_id(self):
        result = "user_edit_form"
        if self.is_create:
            result = "user_create_form"
        return result

    def set_user_group(self):
        # Group ids 27 (admin) and 28 (manager) are project-specific constants.
        if self.user:
            if self.user.groups.all().filter(id=27):
                self.admin = "checked"
            elif self.user.groups.all().filter(id=28):
                self.manager = "checked"
            else:
                self.default_group = "checked"
Sigma Beauty Angled Brow Brush

The E75 Angled Brow Brush features a short, slightly stiff angled brush head. Use this brush with brow powder to fill in the brows using a sketching motion for a natural effect. Pairs well with Brow Powder Duo - choose from Light, Medium, or Dark.
Maizey Coming In Maizey hasn’t ever been inside this house. Dropped off here at the farm long ago, she has always been a farm dog, making her rounds, keeping a keen nose to the slightest change in the air and barking at whatever she felt needed fending off. In winter, she and her son Joe, kept each other company, curling up in the hay in the barn. But Joe died in the fall and Maizey, a good 15 years old, shivers now alone. It has taken gentle pushing to get her to cross over the doorstep and come inside. Temperature tonight is predicted to be 3 degrees F, with a windchill of -9. It is imperative that Maizey come in. Her first evening indoors, night before last, there was much pacing, tentative sniffing and more gentle pushing to get her to step across the threshold to go out and come back in again. She now goes in and out without hesitation— well, when she sniffed the snow out the backdoor early this morning, she turned around and came back in. Later, she stepped out onto the porch— and came back after one trot round around the yard. She likes sleeping on the rug in the bedroom, the mat in front of the wood-burning stove. I like her inside wherever she wants to be. The warmth is critical for her, as is the warmth of her company for me.
The quantum computing apocalypse is imminent Shlomi Dolev is the Chair Professor and founder of the Computer Science department of Ben-Gurion University of the Negev. He is the author of Self-Stabilization. Shlomi also is a cybersecurity entrepreneur and the co-founder and chief scientist of Secret Double Octopus. In the ancient world, they used cubits as an important data unit, but the new data unit of the future is the qubit — the quantum bits that will change the face of computing. Quantum bits are the basic units of information in quantum computing, a new type of computing in which particles like electrons or photons can be utilized to process information, with both “sides” (polarizations) acting as a positive or negative (i.e. the zeros and ones of traditional computer processing) alternately or at the same time. According to experts, quantum computers will be able to create breakthroughs in many of the most complicated data processing problems, leading to the development of new medicines, building molecular structures and doing analysis going far beyond the capabilities of today’s binary computers. The elements of quantum computing have been around for decades, but it’s only in the past few years that a commercial computer that could be called “quantum” has been built by a company called D-Wave. Announced in January, the D-Wave 2000Q can “solve larger problems than was previously possible, with faster performance, providing a big step toward production applications in optimization, cybersecurity, machine learning and sampling.” IBM recently announced that it had gone even further — and that it expected that by the end of 2017 it would be able to commercialize quantum computing with a 50-qubit processor prototype, as well as provide online access to 20-qubit processors.
IBM’s announcement followed the September Microsoft announcement of a new quantum computing programming language and stable topological qubit technology that can be used to scale up the number of qubits. Taking advantage of the physical “spin” of quantum elements, a quantum computer will be able to process simultaneously the same data in different ways, enabling it to make projections and analyses much more quickly and efficiently than is now possible. There are significant physical issues that must be worked out, such as the fact that quantum computers can only operate at cryogenic temperatures (at 250 times colder than deep space) — but Intel, working with Netherlands firm QuTech, is convinced that it is just a matter of time before the full power of quantum computing is unleashed. “Our quantum research has progressed to the point where our partner QuTech is simulating quantum algorithm workloads, and Intel is fabricating new qubit test chips on a regular basis in our leading-edge manufacturing facilities,” said Dr. Michael Mayberry, corporate vice president and managing director of Intel Labs. “Intel’s expertise in fabrication, control electronics and architecture sets us apart and will serve us well as we venture into new computing paradigms, from neuromorphic to quantum computing.” The difficulty in achieving a cold enough environment for a quantum computer to operate is the main reason they are still experimental, and can only process a few qubits at a time — but the system is so powerful that even these early quantum computers are shaking up the world of data processing. On the one hand, quantum computers are going to be a boon for cybersecurity, capable of processing algorithms at a speed unapproachable by any other system. By looking at problems from all directions — simultaneously — a quantum computer could discover anomalies that no other system would notice, and project to thousands of scenarios where an anomaly could turn into a security risk. 
Like with a top-performing supercomputer programmed to play chess, a quantum-based cybersecurity system could see the “moves” an anomaly could make later on — and quash it on the spot. The National Security Agency, too, has sounded the alarm on the risks to cybersecurity in the quantum computing age. “Quantum computing will definitely be applied anywhere where we’re using machine learning, cloud computing, data analysis. In security that [means] intrusion detection, looking for patterns in the data, and more sophisticated forms of parallel computing,” according to Kevin Curran, a cybersecurity researcher at Ulster University and IEEE senior member. But the computing power that gives cyber-defenders super-tools to detect attacks can be misused, as well. Last year, scientists at MIT and the University of Innsbruck were able to build a quantum computer with just five qubits, conceptually demonstrating the ability of future quantum computers to break the RSA encryption scheme. That ability to process the zeros and ones at the same time means that no formula based on a mathematical scheme is safe. The MIT/Innsbruck team is not the only one to have developed cybersecurity-breaking schemes, even on these early machines; the problem is significant enough that representatives of NIST, Toshiba, Amazon, Cisco, Microsoft, Intel and some of the top academics in the cybersecurity and mathematics worlds met in Toronto for the yearly Workshop on Quantum-Safe Cryptography last year. The National Security Agency, too, has sounded the alarm on the risks to cybersecurity in the quantum computing age. The NSA’s “Commercial National Security Algorithm Suite and Quantum Computing FAQ” says that “many experts predict a quantum computer capable of effectively breaking public key cryptography” within “a few decades,” and that the time to come up with solutions is now. 
According to many experts, the NSA is far too conservative in its prediction; many experts believe that the timeline is more like a decade to a decade and a half, while others believe that it could happen even sooner. And given the leaps in progress that are being made on almost a daily process, a commercially viable quantum computer offering cloud services could happen even more quickly; the D-Wave 2000Q is called that because it can process 2,000 qubits. That kind of power in the hands of hackers makes possible all sorts of scams that don’t even exist yet. For example, forward-looking hackers could begin storing encrypted information now, awaiting the day that fast, cryptography-breaking quantum computing-based algorithms are developed. While there’s a possibility that the data in those encrypted files might be outdated, there is likely to be more than enough data for hackers to use in various identity theft schemes, among other things. It’s certain that the threats to privacy and information security will only multiply in the coming decades. In fact, why wait? Hackers are very well-funded today, and it certainly wouldn’t be beyond their financial abilities to buy a quantum computer and begin selling encryption-busting services right now. It’s likely that not all the cryptography-breaking algorithms will work on all data, at least for now — this is a threat-in-formation — but chances are that at least some of them will, meaning that even now, cyber-criminals could utilize the cryptography-breaking capabilities of quantum computers, and perhaps sell those services to hackers via the Dark Web. That NSA document that predicted “decades” before quantum computers become a reality was written at the beginning of 2016, which shows how much progress has been made in barely a year and a half. 
The solution lies in the development of quantum-safe cryptography, consisting of information theoretically secure schemes, hash-based cryptography, code-based cryptography and exotic-sounding technologies like lattice-based cryptography, multivariate cryptography (like the “Unbalanced Oil and Vinegar scheme”), and even supersingular elliptic curve isogeny cryptography. These, and other post-quantum cryptography schemes, will have to involve “algorithms that are resistant to cryptographic attacks from both classical and quantum computers,” according to the NSA. Whatever the case, it’s certain that the threats to privacy and information security will only multiply in the coming decades, and that data encryption will proceed in lockstep with new technological advances.
Tissue reparative effects of macrolide antibiotics in chronic inflammatory sinopulmonary diseases. It is well established that macrolide antibiotics are efficacious in treating sinopulmonary infections in humans. However, a growing body of experimental and clinical evidence indicates that they also express distinct salutary effects that promote and sustain the reparative process in the chronically inflamed upper and lower respiratory tract. Unlike the anti-infective properties, these distinct effects are manifested at lower doses, usually after a relatively prolonged period (weeks) of treatment, and in the absence of an identifiable, viable pathogen. Long-term, low-dose administration of macrolide antibiotics has been used most commonly for sinusitis, diffuse panbronchiolitis, asthma, bronchiectasis, and cystic fibrosis. It is associated with down-regulation of nonspecific host inflammatory response to injury and promotion of tissue repair. Although large-scale trials are lacking, the prolonged use of these drugs has not been associated with emergence of clinically significant bacterial resistance or immunosuppression. Long-term, low-dose administration of 14- and 15-membered ring macrolide antibiotics may represent an important adjunct in the treatment of chronic inflammatory sinopulmonary diseases in humans.
Q: What is the size of codomain of a function $G(x) = F^A(x) \oplus F^B(x)$, where $F(x) = \text{Keccak-}f[1600](x)$? Assuming that $m$ is a multiset of bitstrings where all bitstrings have the same length, let $D(m)$ denote the number of distinct elements in $m$. That is, $D(m)$ is equal to the dimension of $m$. For example, if $$m = \{00, 10, 11, 10, 11\},$$ then $D(m)=3$. Let $F(x) = \text{Keccak-}f[1600](x)$, the block permutation function of SHA-3 (for $64$-bit words). We can define the following notation: $$\begin{array}{l} {F^0(x)} = x,\\ {F^1(x)} = F(x),\\ {F^2(x)} = F(F(x)),\\ {F^3(x)} = F(F(F(x))),\\ \ldots \end{array}$$ Assuming that $A$ and $B$ are two different natural numbers greater than or equal to $0$, let $G_{A, B}(x)$ denote a function defined as $$G_{A, B}(x) = F^A(x) \oplus F^B(x),$$ where $x$ denotes a $1600$-bit input and $\oplus$ denotes an XOR operation. Assuming that $L = 2^{1600}$, let $S_i$ denote an $i$-th bitstring from a set of all possible $1600$-bit inputs: $$\begin{array}{l} S_1 = 0^{1600},\\ S_2 = 0^{1599}1,\\ \ldots,\\ S_{L-1} = 1^{1599}0,\\ S_L = 1^{1600}.\\ \end{array}$$ Let $A$ and $B$ denote two arbitrarily large, but different natural numbers (one of them is allowed to be equal to $0$). For example, $$A = 0, B = 1$$ or $$A = 2^{3456789}, B = 9^{876543210}$$ are valid pairs. Then $$\begin{array}{l} S_{A, B}[i] = G_{A, B}(S_i),\\ C_{A, B} = \{S_{A, B}[1], S_{A, B}[2], \ldots, S_{A, B}[L-1], S_{A, B}[L]\}.\\ \end{array}$$ The question: can we assume that $D(C_{A, B})$ is expected to be approximately equal to $$(1-1/e) \times 2^{1600} = 10^{481} \times 2,810560755\ldots$$ for all (or almost all) pairs of $A$ and $B$? A: Let $\pi$ and $\sigma$ be two independent uniform random permutations, and $f$ a uniform random function. The best advantage of any $q$-query algorithm to distinguish $\pi + \sigma$ from $f$ is bounded by $(q/2^n)^{1.5}$[1]. 
In this case, the expected fraction of distinct outputs of $\pi + \sigma$ can't be too far from the expected fraction of distinct outputs from $f$, which is $1 - e^{-1} \approx 63\%$. What about $\sigma = \pi^2$, or $\sigma = \pi^k$ for $k > 2$? Then $\pi$ and $\sigma$ are not independent. Nevertheless, it would be rather surprising if this situation were substantially different. What about $\pi^{2^{3456789}} + \pi^{2^{987654321}}$ instead of $\pi + \pi^2$? This is the same as $\pi + \pi^{2^{987654321 - 3456789}}$. It's not clear why you would be worried about uncomputably large exponents like this unless you were flailing around without principle trying to make a design that looks complicated.
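A quick empirical sanity check (a toy experiment of my own, not taken from the cited bound): for independent uniform random permutations $\pi$ and $\sigma$ on a small $n$-bit domain, the image of $x \mapsto \pi(x) \oplus \sigma(x)$ covers roughly a $1 - 1/e$ fraction of the domain, as it would for a uniform random function. A $2^{14}$-element domain stands in for the $2^{1600}$-element one, and the permutations here are independent rather than powers of a single permutation, so this only illustrates the heuristic, not the dependent case.

```python
import math
import random

def distinct_fraction(n_bits, seed=1):
    # Build two independent uniform random permutations pi, sigma on
    # {0, 1}^n_bits and measure how much of the domain the XOR-sum map
    # x -> pi(x) XOR sigma(x) actually hits.
    rng = random.Random(seed)
    size = 1 << n_bits
    pi = list(range(size))
    sigma = list(range(size))
    rng.shuffle(pi)
    rng.shuffle(sigma)
    image = {pi[x] ^ sigma[x] for x in range(size)}
    return len(image) / size

frac = distinct_fraction(14)       # 2^14 = 16384 inputs
expected = 1.0 - math.exp(-1.0)    # ~0.6321, the random-function prediction
```

Running this gives a fraction close to $0.632$, in line with the $(1-1/e)$ coverage heuristic in the question.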
Whatever You Love, You Are Whatever You Love, You Are is the fifth studio album by Australian trio, Dirty Three, which was released in March 2000. Cover art is by their guitarist, Mick Turner. Australian musicologist, Ian McFarlane, felt that it showed "deep, rich, emotional musical vistas, and furthered the band’s connection to the music and approach of jazz great John Coltrane". Reception Track listing "Some Summers They Drop Like Flies" – 6:20 "I Really Should've Gone Out Last Night" – 6:55 "I Offered It Up to the Stars & the Night Sky" – 13:41 "Some Things I Just Don't Want to Know" – 6:07 "Stellar" – 7:29 "Lullabye for Christie" – 7:45 References General Note: Archived [on-line] copy has limited functionality. Specific Category:2000 albums Category:ARIA Award-winning albums Category:Dirty Three albums Category:Touch and Go Records albums
/** @file Intel Processor Power Management ACPI Code. Copyright (c) 2018 - 2019, Intel Corporation. All rights reserved.<BR> SPDX-License-Identifier: BSD-2-Clause-Patent **/ #include "CpuPowerMgmt.h" DefinitionBlock ( "CPU0PSD.aml", "SSDT", 0x02, "PmRef", "Cpu0Psd", 0x3000 ) { External(\PC00, IntObj) External(\TCNT, FieldUnitObj) External(\_SB.CFGD, FieldUnitObj) External(\_SB.PR00, DeviceObj) Scope(\_SB.PR00) { Name(HPSD,Package() // HW_ALL { Package() {5, // NumEntries. Current Value is 5. 0, // Revision. Current Value is 0. 0, // Domain. 0xFE, // Coordination type 0xFE = HW_ALL 0x80 // Number of processors. } }) Name(SPSD,Package() // SW_ALL { Package() {5, // NumEntries. Current Value is 5. 0, // Revision. Current Value is 0. 0, // Domain. 0xFC, // Coordination type 0xFC = SW_ALL 0x80 // Number of processors. } }) // // The _PSD object provides information to the OSPM related // to P-State coordination between processors in a multi-processor // configurations. // Method(_PSD,0) { If (And(\_SB.CFGD, PPM_TURBO_BOOST_MAX)) // Intel Turbo Boost Max 3.0 { Store (0, Index(DerefOf(Index(HPSD, 0)),2)) // Domain Store (1, Index(DerefOf(Index(HPSD, 0)),4)) // Number of processors belonging to the domain. } Else { Store (TCNT, Index(DerefOf(Index(HPSD, 0)),4)) Store (TCNT, Index(DerefOf(Index(SPSD, 0)),4)) } If(And(PC00,0x0800)) // If Hardware co-ordination of P states { Return(HPSD) } Return(SPSD) } } // End of Scope(\_SB.PR00) } // End of Definition Block
Toni, FYI. Vince ---------------------- Forwarded by Vince J Kaminski/HOU/ECT on 10/09/2000 11:12 AM --------------------------- "Martin Jermakyan" <martin@electrapartners.com> on 10/09/2000 10:55:34 AM Please respond to "Martin Jermakyan" <martin@electrapartners.com> To: Vince J Kaminski/HOU/ECT@ECT cc: Subject: Updated resume Dear Vince, Attached please find my upgraded resume. Look forward to hearing from you. Regards, Martin - martin .doc
Tuesday, 22 December 2009 Indian surgeons, Oz aid give Iraqi boy new life Undertaking a rare and complicated brain surgery, doctors at the Indraprastha Apollo Hospital in Delhi have saved the life of a 14-year-old Iraqi student. The expensive surgery was funded by Iraqi Christians in Australia, doctors said here Tuesday. "It was a very complicated operation. The planning and execution of the surgery was very, very complex and we had not handled such a case till this boy came in," Pranav Kumar, a leading neurosurgeon at the hospital, told. Ahmed Hashmi from Iraq was brought to Apollo hospital with a large aneurysm in one of four main arteries in the brain. He had already suffered a stroke, resulting in slurred speech and weakness of right side of his body. "To stop any further risk, the diseased artery had to be blocked immediately. However, the preliminary tests revealed that the brain could not have tolerated closure of this abnormal artery," Pranav Kumar added. Shahin Nooreyezdan, a member of the team of doctors who treated Ahmed, said: "Though brain is a very small part of the body, it needs at least 20 percent of the total blood supply. But this problem was obstructing blood flow to his brain. The aneurysm was like a ticking time-bomb, capable of bursting anytime and causing massive brain haemorrhage." "Had it been left untreated, Ahmed's life span would have been very short," added Hash Rastogi, another senior doctor at Apollo. The surgery was carried out in two steps - on Dec 4 and Dec 11. The entire expenses of the surgery -Rs.1.4 million - were met by Iraqi Christians based in Australia. "In stage one of the treatment, a delicate bypass surgery was carried out on his brain successfully. In this operation, a small artery from his face was connected to one of the fine arteries in the brain. This resumed blood supply to brain," Nooreyezdan said. "A week later, we successfully blocked the abnormal artery. 
"The treatment has defused the time bomb, which Ahmed was carrying in his brain," Rastogi added. Wearing a green hospital dress, holding a gift pack and a chocolate, Ahmed was happy. "I have suffered a lot and thank God I am out of it now. I thank all my doctors for giving me a new"He is out of danger. But, he needs to take care against any head injury in future," he said. Ahmed, whose father is dead, belongs to a poor family. After he was diagnosed with the disease and Iraqi doctors were not in a condition to cure him, his sister contacted a foundation in Australia on the internet and requested help. "This is how Ahmed got the funding and now you can see him smiling," said Walid M. Albakili, another doctor and research fellow in charge of Gulf and Arab region. life." "I have already missed my school for six months and am eager to get back soon," said the 14-year-old who wishes to see Taj Mahal before leaving Delhi for his homeland. Pranav Kumar said that the boy is now almost fit and in two weeks he will be ready to leave for Iraq.
Dopaminergic inhibition of gonadotropic release in hibernating frogs, Rana temporaria. The influence of a dopaminergic antagonist, metoclopramide (MET), and an agonist, bromocriptine (BROMO), on reproductive status was examined in female frogs, Rana temporaria. MET induced advanced ovulation during hibernation, suggesting dopaminergic inhibition of gonadotropin (LH) release during this period. BROMO did not decrease plasma LH in intact females in comparison with vehicle (VEH)-treated controls (VEH: 11 +/- 6 vs BROMO: 5 +/- 4 ng/ml) or in sham-lesioned (SL) females (SL; 12 +/- 5 vs SL + BROMO: 9 +/- 8 ng/ml). However, BROMO significantly depressed the rise in plasma LH following lesioning (L) which disconnected the hypothalamus from the medium eminence-pituitary complex (L + BROMO: 29 +/- 10 vs L: 74 +/- 30 ng/ml; P < 0.002). Taken together with previous results of lesion studies, these data point to an important role of dopaminergic inhibition in the regulation of seasonal reproduction in this frog.
Introduction {#Sec1} ============ Alcohol use disorders (AUD) are common and associated with significant morbidity and mortality \[[@CR1]--[@CR3]\], but are substantially undertreated. In 2013, 16.6 million U.S. adults met diagnostic criteria for an AUD, but research suggests only 7.8% received any formal treatment \[[@CR4]\]. One of the major gaps in treatment for AUD is the significant under-utilization of medications that are effective for treating AUD \[[@CR1], [@CR5], [@CR6]\]. Three medications---*disulfiram*, *acamprosate*, and *naltrexone* (both oral and injectable)---have FDA approval specifically for the treatment of AUD, and *topiramate* has strong meta-analytic support \[[@CR7]\]. Efforts to increase treatment of AUD with medications is motivated in part because the modality may address many reported barriers to receiving any formal AUD treatment \[[@CR4], [@CR8]\]. For instance, psychosocial treatments are often offered in group settings, heightening stigma-related issues for some patients, whereas medications can be provided on an individual basis \[[@CR9]\]. In addition, patients may not be ready to abstain \[[@CR8], [@CR10]\]. Further, though this may be shifting over time \[[@CR11], [@CR12]\], many treatment programs view abstinence as the ultimate goal \[[@CR8]\], whereas abstinence is not required with all medications and reduced drinking can be a goal of medication treatment \[[@CR9]\]. Finally, AUD medications can be offered across healthcare settings, including primary care, which has been highlighted as an optimal setting for expansion of care for AUD \[[@CR8], [@CR13], [@CR14]\]. Despite the promise of medication treatment for addressing several known barriers to AUD treatment and national recommendations encouraging medications be made available to all patients with AUD \[[@CR15], [@CR16]\], rates of pharmacotherapy for AUD remain extremely low. 
Among patients with AUD, 4-12% are treated pharmacologically \[[@CR1], [@CR6], [@CR17]--[@CR21]\]. Among subsets of patients with AUD and co-occurring schizophrenic, bipolar, posttraumatic stress or major depressive disorder, receipt of medications for AUD ranged from 7 to 11%, whereas receipt of medications for the comorbid disorder ranged from 69 to 82% \[[@CR19]\]. This gap in the quality of AUD treatment is well known, and the substantial barriers to provision of AUD medications in diverse contexts have been described \[[@CR22]--[@CR27]\]. However, the optimal strategies for addressing these barriers and increasing use of medications for AUD treatment remain elusive. In recent years, two related lines of research have contributed to knowledge regarding strategies to increase use of medications to treat AUD: evaluations of care delivery interventions and evaluations of implementation interventions. Care delivery interventions typically focus on improving patient-level clinical outcomes (e.g., reduction in heavy drinking days or abstinence from alcohol use), but often secondarily assess patient- or clinician-level process outcomes focused on treatment receipt (e.g., engagement in pharmacotherapy for AUD). Implementation interventions are typically designed to improve patient- or clinician-level process outcomes, but sometimes secondarily include patient-level clinical outcomes when the evidence for the effects of the underlying practice is weak (so called Hybrid I studies) \[[@CR28]\]. Other key differences exist between these types of research that may influence both clinical and process outcomes. Most importantly, care delivery interventions typically involve recruitment of patients who are willing to be randomized to the treatment arms contained within the new care delivery model. Thus, these trials may be restricted to patients who are at least open to, if not actively interested in, treatment for AUD. 
On the other hand, evaluations of implementation interventions typically recruit and intervene on clinical entities (e.g., providers, clinics, hospitals) who serve large groups of patients who likely have more variable interest in treatment. Further, evaluations of care delivery interventions are typically designed to establish the effectiveness (or lack thereof) of particular care delivery models. Thus, these studies generally put significant effort and resources into ensuring fidelity to the care delivery model. On the other hand, implementation evaluations are often trying to establish the effectiveness of bundles of strategies (interventions) to increase uptake of practices that do not depend on external research resources. Thus, evaluations of implementation interventions may measure fidelity as a process outcome but typically exert less direct control \[[@CR29]\]. Even though care delivery and implementation interventions differ in terms of methodology, patient inclusion criteria, and primary outcomes, they may evaluate the effectiveness of the same underlying implementation strategies, such as reorganizing, supplementing, or intervening on existing models of care \[[@CR29]\]. The fact that the same component implementation strategies (e.g., audit and feedback) have been evaluated by these different research designs with very different patient populations affords an opportunity to take stock of the effectiveness of these interventions, and to distill insights into which designs, contexts, and component strategies appear to drive outcomes. Therefore, our goal was to conduct a structured review of published evaluations of care delivery and implementation interventions that have either primarily or secondarily aimed to increase use of pharmacotherapy for patients with AUD, with the goal of identifying component strategies that may be effective in increasing pharmacologic treatment of AUD. 
Our review was guided by an existing taxonomy of implementation strategies and terms identified via a three-round modified-Delphi process \[[@CR30]\]. The purpose of our review was to learn which components have been tried most commonly and which strategies might be associated with larger effects. Also, because evaluations of care delivery interventions exert greater efforts to ensure fidelity and include patients willing to be randomized, we hypothesized that higher adoption of medications for AUD would be observed in those contexts compared to implementation interventions, which typically aim to intervene on clinician and patient populations with greater variability in treatment motivation, knowledge, and preferences.

Methods {#Sec2}
=======

For this structured literature review, we sought to identify published evaluations of care delivery and implementation interventions reporting effects on receipt of medication treatments for patients with AUD. We reviewed literature through May 2018. Studies were identified via searching PubMed, Google Scholar, and PsycINFO with relevant search terms (e.g., pharmacotherapy, alcohol use disorder medications, AUD medications, naltrexone, acamprosate, disulfiram, medication-assisted treatment). We also reviewed reference lists from identified studies to identify additional studies that may have been missed by our search. Finally, because we have personally conducted and/or served as co-investigators on related studies, additional studies were also identified via networking. Once identified, each individual article was coded for implementation strategies used, as guided by Powell et al.'s refined compilation of implementation strategies resulting from the Expert Recommendations for Implementing Change (ERIC) project \[[@CR30]\]. All articles were independently reviewed and coded by two investigators (EW and TM).
When multiple articles and/or published protocols or commentaries were identified that described a single intervention and/or implementation effort, these articles were aggregated to the level of the intervention (e.g., three studies had companion published protocol papers, which were coded under the umbrella of a single study). Once coded, all authors met to review coding discrepancies, discuss interpretation of codes, arrive at consensus, and revise individual codes based on consensus. After reaching internal consensus on coding, we reached out to the lead or senior author of each study to ask whether our codes aligned with their understanding/interpretation of their study and associated report. We shared Powell et al.'s description of strategies and asked them to review our coding to see if they thought we had missed or miscoded anything. Finally, process (e.g., rates of prescribed AUD pharmacotherapy) and alcohol use outcome data were extracted from each study and described. All authors reviewed the coding of implementation strategies against study outcomes data to qualitatively identify sets of implementation strategies that might have been most effective for increasing provision of AUD medications and report whether interventions that increased AUD pharmacotherapy also improved alcohol use outcomes.

Results {#Sec3}
=======

Our literature review identified nine studies that evaluated interventions to primarily or secondarily increase utilization of pharmacotherapy for AUD. Four were randomized clinical trials of care delivery interventions designed to improve alcohol-related outcomes \[[@CR31]--[@CR38]\]. Four were quasi-experimental evaluations of large-scale implementation interventions designed to increase medication receipt \[[@CR39]--[@CR43]\], and one was a quasi-experimental evaluation of a targeted implementation intervention at a single site \[[@CR44]\]. Two additional studies were identified but not included.
The first reported on a large-scale implementation intervention designed to increase screening and brief intervention for unhealthy alcohol use and secondarily assessed whether the implementation was associated with increased receipt of AUD medications among those who screened positive \[[@CR45]\]. However, it was not clear how many of the patients who screened positive met diagnostic criteria for AUD and thus would have been eligible for medication treatment, and, though findings regarding medication use were summarized, detailed data were not reported. The second report was a description of a demonstration project to implement extended release naltrexone in Los Angeles County, but no evaluation of the program's effect on receipt of medication treatment among patients with AUD was reported \[[@CR46]\]. Table [1](#Tab1){ref-type="table"} presents implementation strategies identified by our internal coding process across each identified study (labelled with X). All lead or senior authors of studies responded to our request for review of the codes and added additional codes (labelled with an O). Implementation strategies used were variable across the studies, and no strategy was used across all studies (Table [1](#Tab1){ref-type="table"}). The most frequently used strategies were assessing readiness and identifying barriers and facilitators, distributing educational materials, facilitating relay of clinical data to providers (audit and feedback), and providing ongoing consultation. 
Strategies less frequently used involved payment and/or incentives or changes in laws and/or credentialing and licensing.Table 1Implementation strategies identified in published evaluations of care delivery and implementation interventions that have aimed to increase medication treatment for patients with alcohol use disorderStrategySaitz AHEAD CCM \[[@CR32]\]Oslin\ Alcohol Care Management \[[@CR31]\]Watkins\ SUMMIT \[[@CR35]--[@CR38]\]Bradley CHOICE \[[@CR33], [@CR34]\]Robinson Group Manage \[[@CR44]\]Harris\ VA Academic Detailing Program \[[@CR40]\]Hagedorn\ ADaPT--PC \[[@CR39], [@CR42]\]Ford\ Medication Research Partnership \[[@CR43]\]Ornstein\ PPRNet-TRIP \[[@CR41]\]Row totalAssess readiness and identify barriers/facilitatorsOXXXXXX7Distribute educational materialsXXXXXXX7Facilitate relay of clinical data to providersXXXXXXX7Provide ongoing consultationOXXXXXO7Intervene with patients/consumers to enhance uptake and adherenceOXOXXX6Conduct ongoing trainingOXXXXX6Create new clinical teamsXXXXXO6Identify and prepare championsOXXXXX6Provide local technical assistanceOOXXXX6Conduct educational meetingsXXXXXX6Develop and implement tools for quality monitoringXXXXXX6Develop/organize quality monitoring systemsXXXXX5Conduct educational outreach visitsOXXXX5Audit and provide feedbackOOXXX5Develop educational materialsXXXXO5Organize clinician implementation team meetingsOXXXX5FacilitationOXXXX5Obtain formal commitmentsXXXXX5Remind cliniciansOXOXX5Revise professional rolesOXXXO5Provide clinical supervisionXXXX4Develop academic partnershipsXXOO4Promote adaptabilityOXXX4Centralize technical assistanceOXXO4Conduct cyclical small tests of changeXOXO4Create a learning collaborativeXXXO4Make training dynamicXXOO4Purposely reexamine the implementationOOXX4Tailor strategiesXXXX4Use an implementation advisorXOOO4Use data warehousing techniquesOXXX4Conduct local needs assessmentXXX3Change record systemsOOX3Promote network weavingXXO3Build a coalitionOXO3Conduct local consensus 
discussionsOXO3Develop a formal implementation blueprintXXX3Recruit, designate, and train for leadershipXOX3Access new fundingOX2Change service sitesXX2Increase demandXX2Involve executive boardsOO2Involve patients/consumers and family membersXX2Prepare patients to be active participantsOX2Use advisory boards and workgroupsOO2Use data expertsX?2Capture and share local knowledgeXX2Identify early adoptersOO2Make billing easierXO2Visit other sitesXX2Alter incentive/allowance structuresX1Alter patient/consumer feesX1Change physical structure and equipmentO1Inform local opinion leadersX1Mandate changeX1Model and simulate changeX1Use train-the-trainer strategiesO1Fund and contract for the clinical innovation0Work with educational institutions0Develop resource sharing agreements0Change accreditation or membership requirements0Change liability laws0Create/change credentialing and/or licensure standards0Develop an implementation glossary0Develop disincentives0Obtain/use patient/consumer/family feedback0Place innovation on fee for service lists/formularies0Shadow other experts0Stage implementation scale up0Start a dissemination organization0Use capitated payments0Use mass media0Use other payment schemes0 The effects of the interventions on receipt of AUD pharmacotherapy were also variable across studies (Table [2](#Tab2){ref-type="table"}). In three of the four randomized evaluations of care delivery models \[[@CR31]--[@CR33]\], the interventions were associated with varying magnitude of increased receipt of AUD medications. At follow-up, treatment group rates of medication receipt ranged from 13 \[[@CR36]\] to almost 70% \[[@CR31]\]. The latter study, Oslin's Alcohol Care Management model \[[@CR31]\], was the only approach to significantly increase receipt of AUD medications and improve patient-level alcohol use outcomes (Table [2](#Tab2){ref-type="table"}). 
Two of the four implementation interventions \[[@CR40], [@CR41]\] were associated with increased AUD medication receipt. While Ornstein's Practice Partner Research Network-Translating Research Into Practice (PPRNet-TRIP) intervention appeared to have small early effects, proportions of patients receiving medications were so low that continued evaluation over time was not possible \[[@CR41]\]. The Veterans Health Administration's (VA) Academic Detailing Program appeared to increase rates of AUD medication receipt from 4.6 to 8.3% among patients with AUD \[[@CR40]\]. Receipt of AUD medications also appeared to increase in a single VA facility after implementation of a group medication management program attended by patients taking and considering medication treatment \[[@CR44]\].Table 2Study designs and intervention effects on AUD medication receiptStudy\ (Author, abbreviated name, and reference)Sample size\ (Patients/sites)% Receiving AUD medicationsMeasure of AUD medication receiptIntervention and intervention effectSAITZ, AHEAD CCM \[[@CR32]\]563/1*BASELINE*:\ Intervention 4%\ Control 8%Receipt of addiction medication (buprenorphine, methadone, naltrexone, acamprosate, disulfiram)*Program Name and Brief Description*: The Addiction Health Evaluation and Disease (AHEAD) Management Chronic Care Management (CCM) model "included longitudinal care coordinated with a primary care clinician; motivational enhancement therapy; relapse prevention counseling; and on-site medical, addiction, and psychiatric treatment, social work assistance, and referrals (including mutual help). The control group received a primary care appointment and a list of treatment resources including a telephone number to arrange counseling."
AHEAD CCM was delivered by a multidisciplinary team (nurse care manager, social worker, internists, psychiatrist with addiction expertise)\ *Setting*: Hospital-based primary care practice (patients recruited from residential detoxification unit and referrals from urban teaching hospital) in Boston, MA\ *Goal*: Harm reduction\ *Key Components*: Use of registry to track and proactively reach out to patients, longitudinal care coordinated with primary care clinician and facilitated by shared electronic health record (EHR), motivational enhancement therapy, relapse prevention counseling, on-site medical, addiction and psychiatric treatment, social work assistance, and referrals (including to mutual help)\ *Effect on Medication Receipt*: OR = 1.88 (95% CI 1.28--2.75) *p* = 0.001\ *Effect on Alcohol Use Outcomes*: Not significant*FOLLOW-UP*:\ Intervention 21%\ Control 15%OSLIN Alcohol Care Management \[[@CR31]\]163/3*BASELINE*:\ Not reportedReceipt of naltrexone*Program Name and Brief Description*: Alcohol Care Management "focused on the use of pharmacotherapy and psychosocial support. Alcohol Care Management was delivered in-person or by telephone within the primary care clinic. The control group received standard treatment in a specialty outpatient addiction treatment program" Delivered by a behavioral health provider in-person or over-the-phone with primary care provider recommendation and support. 
Behavioral health providers were trained in motivational interviewing\ *Setting*: Veterans Affairs (VA) primary care in New York and Philadelphia\ *Goal*: Abstinence\ *Key Components*: Weekly 30 min visits, individualized patient education, pharmacotherapy and psychosocial support, repeated assessment of alcohol use, encouraged treatment adherence, monitoring of problems and management of potential side effects, use of shared EHR for communication with primary care provider\ *Effect on Medication Receipt*: Naltrexone prescribed in 65.9% of the Alcohol Care Management group relative to 11.5% in control; Chi^2^ 50.10, *p* \< 0.001\ *Effect on Alcohol Use Outcomes*: The Alcohol Care Management group was more likely to refrain from heavy drinking than the control (OR = 2.16, 95% CI 1.27--3.66) but no effect on any alcohol use (OR = 1.40, 95% CI 0.75--2.59)*FOLLOW-UP*:\ Intervention 65.9%\ Control 11.5%WATKINS SUMMIT \[[@CR35]--[@CR38]\]377/2*BASELINE*:\ Not reportedReceipt of any "medication assisted treatment" with either long-acting injectable naltrexone or buprenorphine/naloxone.*Program Name and Brief Description*: Collaborative care "was a system-level intervention, designed to increase the delivery of either a 6-session brief psychotherapy treatment and/or medication-assisted treatment with either sublingual buprenorphine/naloxone for opioid use disorders (OUDs) or long-acting injectable naltrexone for alcohol use disorders (AUDs). The control group was told that the clinic provided opioid and/or alcohol use disorder treatment and given a number for appointment scheduling and list of community referrals."
Delivered by care coordinators and therapists with a social work degree\ *Setting*: Primary care at a Federally Qualified Health Center in L.A., CA\ *Goal*: Increase screening and brief intervention for unhealthy alcohol use\ *Key components*: Intended 6 sessions of brief psychotherapy and/or med-assisted treatment (buprenorphine/naloxone for OUD and naltrexone for AUDs), repeated assessments of substance use, use of registry to track and proactively reach out to patients, motivation and encouragement of engagement in therapy\ *Effect on Medication Receipt*: OR comparing intervention to control at 6-month follow-up for patients with AUD and/or OUD = 1.23 (95% CI 0.60--2.40) *p* = 0.53. Published commentary from SUMMIT investigators \[[@CR37]\] suggests similar non-significant findings among patients with AUD only\ *Effect on Alcohol Use Outcomes*: Among patients with AUD only (54% of the sample) the SUMMIT intervention was significantly associated with abstinence from any alcohol use and all opioids at follow-up and was borderline significant for no heavy drinking in the past 30 days.*FOLLOW-UP*:\ Intervention 13.4%\ Control 12.6%BRADLEY CHOICE \[[@CR33], [@CR34]\]304/3*BASELINE*:\ Intervention 1% versus Control 2%Receipt of naltrexone, acamprosate, or disulfiram*Program Name and Brief Description*: Choosing Healthier Drinking Options in Primary Care (CHOICE) was a care management intervention in which "nurse care managers offered outreach and engagement, repeated brief counseling using motivational interviewing and shared decision making about treatment options, and nurse practitioner--prescribed AUD medications (if desired), supported by an interdisciplinary team (CHOICE intervention).
The control group received usual primary care."\ *Setting*: VA primary care in Washington State\ *Goal*: Harm reduction\ *Key components*: Proactive outreach and engagement, repeated brief counseling using MI and shared decision-making about treatment options (AUD medication, biomarker assessment if abnormal baseline, behavioral goal-setting and skills development for reducing drinking, encouragement of mutual help and/or specialty addictions treatment, self-monitoring)\ *Effect on Medication Receipt*: OR = 6.3 (95% CI 3.4--11.8) *p* \< 0.0001\ *Effect on Alcohol Use Outcomes*: Not significant*FOLLOW-UP*:\ Intervention 32% versus Control 8%ROBINSON, Group Management \[[@CR44]\]1600/1*BASELINE*:\ Increasing 0.08%/month in pre-implementation periodReceipt of naltrexone or acamprosate, or extended-release naltrexone*Program Name and Brief Description*: Group Management of pharmacotherapy, initially implemented to provide continued access during a staffing shortage, sought to provide psychosocial education on medication management for alcohol dependence. Delivered by an addiction psychiatrist in collaboration with either an Addiction Therapist or a Certified Nurse Specialist\ *Setting*: VA San Diego Health Care System\ *Goal*: Increase adoption of pharmacotherapy for AUD\ *Key components*: Group participants capped at 8, review of naltrexone and acamprosate, discussion of side effects or benefits, discussion of barriers to sobriety in group format.
Sessions lasted 1 h\ *Effect on Medication Receipt*: The rate of increase in the percent of patients treated pharmacologically for alcohol dependence increased from 0.08% per month in the pre-implementation period to 0.21% per month after group visits were implemented\ *Effect on Alcohol Use Outcomes*: Not measured*FOLLOW-UP*:\ Increasing 0.21%/month in post-implementation periodHARRIS, VA Academic Detailing Program \[[@CR40]\]NA/37*BASELINE*:\ Intervention 4.56%\ Control 6.01%Monthly rates of receipt of naltrexone (oral or injectable), acamprosate, disulfiram, or topiramate*Program Name and Brief Description*: VA Academic Detailing Program in which "The academic detailers strove to educate, motivate, and enable key health care providers to identify and address the spectrum of hazardous alcohol use, especially to facilitate more active consideration of pharmacological treatment options for AUD." Academic detailers were clinical pharmacy specialists\ *Setting*: VA medical centers and outpatient clinics in California, Nevada and the Pacific Islands\ *Goal*: Increase adoption of pharmacotherapy for AUD\ *Key components*: Local champions and leadership buy-in, dashboard for identifying patient candidates for AUD medication, repeated in-person visits to educate and build rapport with priority providers, problem-solve barriers and address knowledge gaps/misunderstanding about AUD meds, additional educational resources (e.g., patient education tools and pocket cards), integrated audit and feedback tools into EHR for identifying AUD patients, commitment from providers to increase prescribing patterns for AUD medication, monitoring and follow-up\ *Effect on Medication Receipt*: Slope of intervention sites increased more steeply than slope at control sites (*p* \< 0.001).
From immediately pre-intervention to the end of the observation period (Month 16--Month 36), the percent of patients with AUD who received medication increased 3.36% in absolute terms and 67.77% in relative terms\ *Effect on Alcohol Use Outcomes*: Not measured*FOLLOW-UP*:\ Intervention 8.32%\ Control 6.90%HAGEDORN, ADaPT--PC \[[@CR39], [@CR42]\]NA/3*BASELINE*:\ Intervention 3.8%\ at end of pre-implementation period\ Control 3.7%Monthly rates of filled prescription for AUD medication (oral/injectable naltrexone, acamprosate, disulfiram, topiramate) within 30 days after PC visit*Program Name and Brief Description*: Alcohol Use Disorder Pharmacotherapy and Treatment in Primary Care settings (ADaPT--PC) "targets stakeholder groups with tailored strategies based on implementation theory and prior research identifying barriers to implementation of AUD pharmacotherapy. Local SUD providers and primary care mental health integration (PCMHI) providers are trained to serve as local implementation/clinical champions and receive external facilitation. Primary care providers receive access to consultation from local and national clinical champions, educational materials, and a dashboard of patients with AUD on their caseloads for case identification. Veterans with AUD diagnoses receive educational information in the mail just prior to a scheduled PC visit." Delivered by site champions and external facilitators\ *Setting*: VA primary care\ *Goal*: Increase adoption of pharmacotherapy for AUD\ *Key components*: Training local champions, website with educational materials for primary care providers, a case-finding dashboard, technical assistance from local and national experts\ *Effect on Medication Receipt*: Rate of change (slope) increased significantly in the implementation period (*p* = 0.0023). Immediate post-implementation change not significant (*p* = 0.3401). Change over 12-month post-implementation relative to pre-implementation change significant (*p* = 0.0033).
No difference between intervention and control sites in immediate post-implementation change (*p* = 0.8508). No difference between intervention and control sites in post-implementation slope (*p* = 0.4793)\ *Effect on Alcohol Use Outcomes*: Not measured*FOLLOW-UP*:\ Intervention 5.2%\ at end of implementation period\ Control 5.8%FORD\ Medication Research Partnership \[[@CR43]\]3887/9*BASELINE*:\ Intervention 9.0%\ Control 11.4%Receipt of AUD medication during an episode of care*Program Name and Brief Description*: Medication Research Partnership, "a collaboration between a national commercial health plan and nine addiction treatment centers, implemented organizational and system changes to promote use of federally approved medications for treatment of alcohol and opioid use disorders." Delivered by commercial health plan, "nationally recognized experts in the substance abuse field," and "change leaders."\ *Setting*: Specialty addiction treatment centers located on Northeastern seaboard of the U.S.\ *Goal*: Promote use of federally approved medications for AUD/OUD\ *Key components*: "Change leaders" and "change teams," external coaches, rapid change cycles to test strategies to promote medication use, provider training and technical assistance\ *Effect on Medication Receipt*: Difference in differences at Year 3:\ Unadjusted: 5.8%; Adjusted: 5.2% (95% CI − 4.1 to 14.5) *p* = 0.27\ *Effect on Alcohol Use Outcomes*: Not measured*3-YEAR FOLLOW-UP*:\ Intervention 26.5%\ Control 23.1%ORNSTEIN PPRNet-TRIP \[[@CR41]\]15053/19*EARLY INTERVENTION CLINICS:*\ Phase 1: 2.6%\ Phase 2: 5.5%Prescription for disulfiram, naltrexone (oral or injectable), acamprosate, or topiramate*Program Name and Brief Description*: Practice Partner Research Network-Translating Research Into Practice (PPRNet-TRIP) involved "practice site visits for academic detailing and participatory planning and network meetings for 'best practice' dissemination"\ *Setting*: Primary care practices from 15 U.S.
States\ *Goal*: Increased prescription for disulfiram, naltrexone (oral or injectable), acamprosate, or topiramate for those with an AUD\ *Key components*: EHR template, performance reports, provider education, and development of an implementation plan\ *Effect on Medication Receipt*: Due to small proportions of subjects receiving medications, pre-post (phase 1 versus phase 2) comparison of medication receipt was only estimable in the early intervention clinics. The adjusted OR for phase 1 versus phase 2 in the early intervention clinics was 2.24 (95% CI 1.03--4.88) *p* \< 0.05\ *Effect on Alcohol Use Outcomes*: Not measured*DELAYED INTERVENTION CLINICS*:\ Phase 1: 0%\ Phase 2: 2.4% Patterns of implementation strategies did not clearly distinguish studies that successfully increased use of pharmacotherapy versus those that did not.

Discussion {#Sec4}
==========

Nine studies have evaluated the effects of care delivery or implementation interventions designed to increase active consideration and use of pharmacologic treatment options for patients with AUD. The interventions varied widely in context, intensity, target populations, and the underlying strategies, though many strategies were shared across studies, regardless of design (care delivery or implementation intervention). As hypothesized, care delivery interventions, targeted on patients willing to be randomized, were associated with much larger and more consistent improvements in rates of medication receipt compared to implementation interventions targeted at the overall population of patients with AUD. Among the care delivery interventions evaluated, three out of four increased use of medications. However, of these three, only Oslin's Alcohol Care Management intervention achieved initiation of medications for AUD in more than two-thirds of enrolled patients (69%) and improved patient-level alcohol use.
This trial may have been distinct from the others in its recruitment approaches---patients were recruited with the knowledge that the intervention aimed to provide pharmacologic treatment \[[@CR31]\]. Among the implementation interventions evaluated, only the VA Academic Detailing Program \[[@CR40]\] showed significant promise in increasing rates of medication receipt. It may be noteworthy that, compared to the other implementation interventions, the VA Academic Detailing Program was very labor intensive and targeted to diverse clinical settings with a high density of patients with AUD, not just primary care. The study of group medication management visits, intended as a means to increase prescribing capacity and educate patients who were considering medication treatment \[[@CR44]\], showed signals of effectiveness in one VA facility with a highly motivated champion. Interestingly, group settings have previously been identified as a barrier to receiving treatment for AUD, but appeared to facilitate increased treatment receipt among persons already seeking treatment. This intervention should be more rigorously evaluated in contexts where the primary barrier is low capacity to provide medication management. A major goal of this review was to identify the underlying implementation strategies that were positively associated with larger effects. We categorized strategies based on published reports, but then solicited feedback from the intervention designers. There was substantial heterogeneity of strategies and some heterogeneity of effects, but no clear mapping of strategies to effectiveness was apparent. This process nonetheless proved informative by highlighting potential limitations of using Powell et al.'s taxonomy to classify implementation strategies \[[@CR30]\]. Specifically, strategies listed in the taxonomy appeared not to be hermeneutically distinct, causing frequent difficulty classifying strategies as one or another.
Relatedly, strategy definitions are somewhat imprecise and hard to map confidently onto what was done in the interventions, resulting in different decisions being made by our two independent coders and between our coders and the lead or senior authors of publications. This discordance was greater when our team was not involved with the study and therefore had to rely on the published report to garner information. In all but one case, intervention developers added strategies to those identified by our two-coder consensus process. In some cases, the additional strategies were not fully described in the published reports. These findings suggest that an improved compilation of implementation strategies may be needed to enable accurate and reliable identification of distinct strategies. Efforts to refine such a compilation should consider designating umbrella strategies and sub-categories within them or providing a list of strategies that are similar but variable with regard to naming or minor procedural variants. Findings from our study also make clear the importance of comprehensive reporting of strategies used. While providing full descriptions of multi-faceted implementation strategies can be difficult in a single outcomes paper, authors should be encouraged to publish more detailed study protocols (as several of the reviewed studies did \[[@CR34], [@CR35], [@CR38], [@CR39]\]), and reviewers may, nonetheless, need to query intervention developers as a final validity check. Perhaps more importantly, no method has been developed to characterize the intensity of strategies or cross-classify strategies with targets. Oslin's Alcohol Care Management used many of the same strategies as other care delivery models but was targeted on patients willing to participate in an intervention focused on pharmacologic treatment.
VA's Academic Detailing Program did not differ from other implementation interventions in terms of component strategies so much as intensity and diversity of targets. Developing methods to more fully characterize interventions beyond component strategies may lead to insights that have greater utility for creating generalizable knowledge. In addition, because effectiveness of implementation interventions and strategies often depends on context, methods to cross-classify strategies with context and/or setting should be developed. Beyond the aforementioned limitations of the existing implementation science tools used in this study, other limitations are worth noting. First, although we searched multiple data sources and used reference lists from identified studies and networking to ensure comprehensive capture of existing studies, it is possible we missed intervention studies that aimed to increase pharmacologic treatment of AUD. Second, our review identified only a small number of studies that reported receipt of AUD medication as a primary or secondary outcome. The small number of studies to date may limit the ability to identify generalizable information about the effectiveness of specific strategies. Moreover, of the nine studies that met inclusion criteria for this review, four were care delivery interventions tested in trials that were powered on main (clinical) outcomes. These studies may have been underpowered to detect differences in secondary outcomes, such as medication receipt. Despite these limitations, this is, to our knowledge, the first review conducted with the goal of understanding strategies that may be effective for implementing medication treatment for AUD---a substantially underutilized treatment. Unfortunately, our review did not reveal which strategies are most effective for implementing AUD medications. However, we cataloged the use of specific strategies, perhaps suggesting candidates for future study.
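As part of data extraction, reported effect estimates can be spot-checked against the underlying rates in Table 2. For example, the unadjusted difference-in-differences for Ford's Medication Research Partnership follows directly from the reported baseline and 3-year medication-receipt rates; a minimal sketch (all figures taken from Table 2):

```python
# Spot-check of the unadjusted difference-in-differences (DiD) reported for
# the Medication Research Partnership (Ford, Table 2). Values are percentages
# of patients receiving AUD medication, as published.
baseline = {"intervention": 9.0, "control": 11.4}
year3 = {"intervention": 26.5, "control": 23.1}

# DiD = (change in intervention group) - (change in control group)
change_intervention = year3["intervention"] - baseline["intervention"]  # 17.5
change_control = year3["control"] - baseline["control"]                 # 11.7
did = change_intervention - change_control

print(round(did, 1))  # 5.8, matching the reported unadjusted estimate of 5.8%
```

Note that this reproduces only the unadjusted estimate; the adjusted figure (5.2%) reflects covariate adjustment that cannot be recomputed from the table alone.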
Further work is needed to understand why rates of medication treatment of AUD continue to be so low, even after patients are enrolled in care management interventions and/or are receiving care in a healthcare setting that has been targeted by a multifaceted intervention. It is entirely possible that previous examinations of barriers, and the interventions designed to overcome them, have missed the mark. To assess this further, research will be needed to better understand patient-level perspectives, preferences, and barriers to receipt of medications.

VA : U.S. Veterans Health Administration
AUD : alcohol use disorder
ERIC : Expert Recommendations for Implementing Change project
PPRNet-TRIP : Practice Partner Research Network-Translating Research Into Practice
AHEAD : Addiction Health Evaluation and Disease
CCM : Chronic Care Model
EHR : electronic health record
SUMMIT : Substance Use Motivation and Medication Integrated Treatment
CHOICE : Choosing Healthier Drinking Options in Primary Care
ADaPT--PC : Alcohol Use Disorder Pharmacotherapy and Treatment in Primary Care Settings
OUD : opioid use disorder
PCMHI : primary care mental health integration

AHSH and ECW collaborated on the conception of the manuscript. All authors reviewed the literature, coded implementation strategies, and participated in drafting the manuscript. All authors read and approved the final manuscript.

Acknowledgements {#FPar1}
================

The authors would like to acknowledge the lead and/or senior investigators of publications included in this review for coding additional implementation strategies that may not have been apparent in the published article.

Competing interests {#FPar2}
===================

The authors declare that they have no competing interests.

Availability of data and materials {#FPar3}
==================================

Data sharing is not applicable to this article as no datasets were generated or analysed during the current study. 
Consent for publication {#FPar4}
=======================

Not applicable.

Ethics approval {#FPar5}
===============

Not applicable.

Funding {#FPar6}
=======

This study was supported in part by a VA HSR&D Research Career Scientist award (RCS 14-132) to Dr. Harris and a VA HSR&D Career Development award (CDA 12-276) to Dr. Williams. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs (VA) or the United States government.

Publisher's Note {#FPar7}
================

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{ "pile_set_name": "PubMed Central" }
Q: Sum of probability differential is zero

I keep seeing: $$ \sum_i [\ln(p_i) + 1]\,dp_i = \sum_i \ln(p_i)\,dp_i $$ where $p_i$ is the probability of the $i^{th}$ state and where $\sum_i p_i = 1$, cropping up in my statistical mechanics course. I'm a bit unsure of how one arrives at this?

EDIT: A similar result which confuses me is: $$ \sum_i \ln(Z)\, dp_i = 0 $$ where $Z$ is the partition function.

A: Presumably because we always have $\sum_i p_i = 1$: if we differentiate this condition, $$ \sum_i dp_i = 0 $$ so that the total probability remains $1$. The second result follows the same way: $\ln(Z)$ does not depend on $i$, so it factors out of the sum, leaving $\ln(Z) \sum_i dp_i = 0$. It's like $\dot{x} \cdot x = 0 $ when $x$ is forced to lie on a sphere $x \cdot x = R^2$.
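A quick numeric check of the identity in the question (my own illustration; the distribution and perturbation below are arbitrary, subject only to the normalization constraint):

```python
import math

# A normalized distribution and a perturbation that preserves
# normalization, i.e. sum_i dp_i = 0.
p = [0.1, 0.2, 0.3, 0.4]
dp = [0.01, -0.02, 0.005, 0.005]

lhs = sum((math.log(pi) + 1) * dpi for pi, dpi in zip(p, dp))
rhs = sum(math.log(pi) * dpi for pi, dpi in zip(p, dp))

# The "+1" term contributes sum_i dp_i, which vanishes, so lhs == rhs.
assert abs(lhs - rhs) < 1e-9
```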
{ "pile_set_name": "StackExchange" }
Subtraction method for the high-performance liquid chromatographic measurement of plasma adenosine. The measurement of plasma adenosine with traditional high-performance liquid chromatographic techniques is difficult because of its nanomolar concentration, its short half-life in blood, and because of the difficulty in isolating adenosine from interfering peaks in the chromatogram. To prevent loss of adenosine in the blood sample, a "stop solution" is used to prevent enzymatic degradation and cellular uptake. Peak-shifting techniques on fractionated samples to measure adenosine derivatives have been used in the past to avoid interfering peaks in the chromatogram. A new method has been developed by which nanomolar levels of plasma adenosine can be accurately measured despite co-eluting peaks in the chromatogram. In this method, plasma samples are collected with a stop solution, processed, and divided. Adenosine deaminase is added to part of the sample to form a blank. A computer program subtracts the blank chromatogram from the paired unknown, and the result is compared to adenosine standards prepared from the blank and subtracted in a similar fashion. With this subtraction method, the overall recovery of physiological concentrations of adenosine was 89% from dog blood, and the average coefficient of variation was 12%. In summary, the subtraction method of plasma adenosine measurement offers good recovery, reproducibility, and the ability to quantify low levels of adenosine despite interfering peaks in the chromatogram.
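The paired-blank subtraction lends itself to a tiny numerical sketch. All signal values below are invented for illustration, not data from the paper:

```python
# Paired chromatograms sampled at the same retention times: the blank is
# the same plasma treated with adenosine deaminase, so its adenosine peak
# is destroyed while the co-eluting interference remains.
unknown = [0.2, 0.5, 3.1, 0.9, 0.3]   # adenosine plus interference
blank   = [0.2, 0.5, 0.7, 0.9, 0.3]   # interference only

# Point-by-point subtraction isolates the adenosine contribution,
# even though it co-elutes with an interfering compound.
difference = [u - b for u, b in zip(unknown, blank)]

# The residual peak is then compared to adenosine standards that were
# prepared from the blank and subtracted the same way.
peak = max(difference)
```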
{ "pile_set_name": "PubMed Abstracts" }
Dear Sirs,

Pls. find below corrected daily pos. report with correct distances. Total distance from Lake Charles to Qalhat = 9214 n.miles. Sorry for the mistake, and will check better in the future.

POSREP HOEGH GALLEON 19TH. APRIL 1200LT (1600UTC)
A) Position at noon : N 26 41 W 089 29
B) Dist. from last noon : 271,5 n. miles (14 HRS.)(1 HRS. ADV)
C) Dist. to go to next port : 8942.5 n.miles (Qalhat)
D) Eta next port : May 12th. 0001 local time
E) 1: Average sea/weather : Wind: SE 4 2, Sea: SE 3
   2: Average speed : 19,39 knots / average RPM 91,90
   3: B.O.G. consumed : nil
   4: F.O. consumed : 75 mt.
   5: G.O. consumed : nil
   6: Average vapor press. Tk. 1: bar. Average liq. temp: Tk. 2: : Tk. 3: : Tk. 4: : Tk. 5: :

POSREP HOEGH GALLEON 20TH. APRIL 1200LT (1600UTC)
A) Position at noon : N 24 00 W 081 57
B) Dist. from last noon : 440 n. miles (24 HRS.)
C) Dist. to go to next port : 8502.5 n.miles (Qalhat)
D) Eta next port : May 12th. 0001 local time
E) 1: Average sea/weather : Wind: E 6, Sea: E 5
   2: Average speed : 18,33 knots / average RPM 91,55
   3: B.O.G. consumed : nil
   4: F.O. consumed : 128 mt.
   5: G.O. consumed : nil
   6: Average vapor press. Tk. 1: bar. Average liq. temp: Tk. 2: : Tk. 3: : Tk. 4: : Tk. 5: :

POSREP HOEGH GALLEON 21ST. APRIL 1200LT (1600UTC)
A) Position at noon : N 24 00 W 081 57
B) Dist. from last noon : 440 n. miles (24 HRS.)
C) Dist. to go to next port : 8037.5 n.miles (Qalhat)
D) Eta next port : May 12th. 0001 local time
E) 1: Average sea/weather : Wind: E 6, Sea: E 5
   2: Average speed : 18,33 knots / average RPM 91,55
   3: B.O.G. consumed : nil
   4: F.O. consumed : 128 mt.
   5: G.O. consumed : nil
   6: Average vapor press. Tk. 1: bar. Average liq. temp: Tk. 2: : Tk. 3: : Tk. 4: : Tk. 5: :

POSREP HOEGH GALLEON 22ND. APRIL 1200LT (1500UTC)
A) Position at noon : N 29 05 W 068 07
B) Dist. from last noon : 436 n. miles (24 HRS.)
C) Dist. to go to next port : 7601.5 n.miles (Qalhat)
D) Eta next port : May 12th. 0001 local time
E) 1: Average sea/weather : Wind: ENE 6, Sea: ENE 5
   2: Average speed : 18,17 knots / average RPM 93,76
   3: B.O.G. consumed : nil
   4: F.O. consumed : 161 mt.
   5: G.O. consumed : nil
   6: Average vapor press. Tk. 1: bar. Average liq. temp: Tk. 2: : Tk. 3: : Tk. 4: : Tk. 5: :

POSREP HOEGH GALLEON 23RD. APRIL 1200LT (1400UTC)
A) Position at noon : N 29 42 W 062 33
B) Dist. from last noon : 294 n. miles (17H 45M.)
C) Dist. to go to next port : 7307.5 n.miles (Qalhat)
D) Eta next port : May 12th. 0001 local time
E) 1: Average sea/weather : Wind: ENE 6, Sea: ENE 5
   2: Average speed : 17.04 knots / average RPM 91,56
   3: B.O.G. consumed : nil
   4: F.O. consumed : 101 mt.
   5: G.O. consumed : 1.0 mt
   6: Average vapor press. Tk. 1: bar. Average liq. temp: Tk. 2: : Tk. 3: : Tk. 4: : Tk. 5: :

Brgds
Oe. Hansen
Master
E-mail: master.gall@hoegh.no
Teleph: Inmarsat tel.no. +874 330853910
Fax : Inmarsat fax no. +874 330853913
Telex : Inmarsat B telex no. +584 330853915
Telex : Inmarsat C telex no. +584 430853910 (24hrs.watch)
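As a reader's cross-check (not part of the original message), the reported average speeds follow directly from distance over hours underway:

```python
def avg_speed(dist_nm, hours):
    """Average speed in knots: nautical miles divided by hours underway."""
    return dist_nm / hours

# 19 April: 271.5 n.miles over 14 hours -> the reported 19,39 knots.
assert round(avg_speed(271.5, 14), 2) == 19.39
# 20 April: 440 n.miles over a full 24-hour day -> the reported 18,33 knots.
assert round(avg_speed(440, 24), 2) == 18.33
```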
{ "pile_set_name": "Enron Emails" }
T.C. Memo. 2019-54 UNITED STATES TAX COURT MARY BUI, Petitioner v. COMMISSIONER OF INTERNAL REVENUE, Respondent Docket No. 20453-16. Filed May 21, 2019. Ronda N. Edgar, for petitioner. Adam B. Landy, Nancy M. Gilmore, and Thomas R. Mackinson, for respondent. MEMORANDUM FINDINGS OF FACT AND OPINION GOEKE, Judge: Respondent issued a notice of deficiency to petitioner determining an income tax deficiency for 2011 of $173,058 and an addition to tax -2- [*2] under section 6651(a)(1) of $66,668.1 After concessions, the sole issue remaining for consideration is whether petitioner must include in gross income cancellation of indebtedness of $355,488. We hold that she may properly exclude $48,151 but must include the remaining $307,337. FINDINGS OF FACT This case was tried on September 10, 2018, in San Francisco, California. The parties have submitted a stipulation of facts and accompanying exhibits, which are incorporated herein by this reference. When the petition was timely filed, petitioner resided in California.2 Petitioner is also known as Nga Thuy Lan Bui. For 2011 petitioner excluded $355,488 of discharged indebtedness from her gross income and indicated the excluded indebtedness was qualified principal residence 1 Unless otherwise indicated, all section references are to the Internal Revenue Code (Code) as amended and in effect at all relevant times, and all Rule references are to the Tax Court Rules of Practice and Procedure. 2 The petition was received with an illegible postmark on September 19, 2016, five days after the time to file a petition with this Court had expired. Sec. 301.7502-1(c)(1)(iii)(A), Proced. & Admin. Regs., places on the taxpayer the burden to prove the date an illegible postmark was made. On March 12, 2019, we issued an order directing petitioner to sustain her burden of establishing that the postmark was timely made. On March 24, 2019, petitioner responded to our order and supplemented the record with proof of mailing on September 12, 2016. 
Accordingly, we are satisfied of our jurisdiction to hear this case. -3- [*3] indebtedness. On June 16, 2016, respondent issued a notice of deficiency to petitioner for 2011 and proposed an adjustment disallowing her entire exclusion of discharged indebtedness income. Respondent now concedes that petitioner was insolvent by $42,852 in 2011. I. Residences A. Red River Property On June 1, 1981, petitioner, her former spouse, and three other persons purchased a single-family residence on Red River Way in San Jose, California (Red River property), for $156,500. Petitioner and her former spouse together owned a 25% interest in the Red River property. By grant deed dated October 15, 1985, and recorded January 28, 1986, petitioner and her former spouse purchased the remaining 75% interest in the Red River property for $97,500. By quitclaim deed dated November 14, 2002, and recorded December 12, 2002, petitioner acquired sole ownership in the Red River property. Petitioner legally separated from her former spouse in 2005 or 2006. Petitioner lived at the Red River property from its acquisition in 1981 through March 14, 2011, and treated it as her primary residence. On March 14, 2011, petitioner relinquished ownership of the Red River property by short sale for -4- [*4] $485,000. At that time, the balance of the mortgage on the Red River property was $416,000. B. Cedar Grove Property On or around June 1, 1988, petitioner and her former spouse purchased a single-family rental home on Cedar Grove Circle in San Jose, California (Cedar Grove property). By quitclaim deed dated November 14, 2002, and recorded December 12, 2002, petitioner acquired sole ownership in the Cedar Grove property. After petitioner sold the Red River property in March 2011, she moved into the Cedar Grove property and established it as her new primary residence. II. Wells Fargo Lines of Credit Before 2011 petitioner obtained three home equity lines of credit with Wells Fargo Bank, N.A. (Wells Fargo). 
Petitioner executed a deed of trust dated February 14, 2007, and recorded March 12, 2007, securing a $250,000 line of credit for an account ending in 9471 between herself and Wells Fargo with the Red River property listed as collateral (9471 loan). Petitioner executed a deed of trust dated March 1, 2007, and recorded March 26, 2007, securing a $40,000 line of credit for an account ending in 7231 between herself and Wells Fargo with the Cedar Grove property as collateral (7231 loan). Petitioner also executed a deed of trust dated March 20, 2007, and recorded April 30, 2007, securing a $101,942 line -5- [*5] of credit for an account ending in 5371 between herself and Wells Fargo with the Cedar Grove property as collateral (5371 loan). In 2011 Wells Fargo issued three Forms 1099-C, Cancellation of Debt, to petitioner indicating that the remaining debt associated with the 9471 loan, the 7231 loan, and the 5371 loan had been canceled. On the Forms 1099-C Wells Fargo described the debts as “HEQ Secured Installment Loan” and checked the box indicating petitioner was personally liable for repayment of the debts. Petitioner’s canceled Wells Fargo debt for 2011 was as follows:

Date of Form 1099-C    Amount of canceled debt    Account No.
Mar. 18, 2011          $243,299                   9471
Oct. 28, 2011          11,999                     7231
Oct. 28, 2011          100,190                    5371

Petitioner executed at least four additional deeds of trust with Wells Fargo before 2011. In addition, petitioner, with and without her former spouse, executed at least seven deeds of trust between 1986 and 2004 from banking institutions other than Wells Fargo. The indebtedness indicated by these additional deeds of trust was not canceled in 2011. -6- [*6] III. Home Improvements Petitioner testified to carrying out a number of home improvement projects before 2011 for the Red River property, but she provided no documentation relating to when or how expenses of these projects were paid. 
She did not testify to any home improvement project expenses related to the Cedar Grove property. Petitioner paid approximately $10,000 for custom drapes to be installed at the Red River property in 2007. In addition, she spent approximately $12,000 for driveway repair and expansion work at the Red River property in 2008. The remaining home improvement expenditures petitioner testified to were made before 2007, the year she obtained the Wells Fargo lines of credit. The associated debts were discharged in 2011. OPINION Generally, the Commissioner’s determinations in a notice of deficiency are presumed correct, and the taxpayer bears the burden of proving the determinations are erroneous. Rule 142(a); Welch v. Helvering, 290 U.S. 111, 115 (1933). However, for the presumption of correctness to attach in an unreported income case such as this, the Commissioner must base his deficiency determination on some substantive evidence that the taxpayer received unreported income. Hardy v. Commissioner, 181 F.3d 1002, 1004 (9th Cir. 1999), aff’g T.C. Memo. 1997-97. -7- [*7] There is no dispute in this case that petitioner had debt that was forgiven. Section 7491(a) shifts the burden of proof to the Commissioner where the taxpayer has presented credible evidence with respect to any factual issue relevant to ascertaining the correct tax liability of the taxpayer. Section 7491(a) also requires that the taxpayer have substantiated all appropriate items, maintained records as required under the Code, and cooperated with all reasonable requests by the Commissioner for witnesses, information, documents, meetings, and interviews. Sec. 7491(a)(2)(A) and (B). Petitioner has not attempted to argue, and the record does not demonstrate, her compliance with the requirements of section 7491(a); accordingly, the burden remains with petitioner to show respondent’s determinations were incorrect. 
This is a dispute over whether petitioner had reportable cancellation of indebtedness income that she failed to report on her 2011 tax return. The Code defines income liberally as “all income from whatever source derived”. Sec. 61(a). Specifically, income includes any income from the discharge of indebtedness. Sec. 61(a)(12); sec. 1.61-12(a), Income Tax Regs. The underlying rationale for the inclusion of canceled debt as income is that the release from a debt obligation the taxpayer would otherwise have to pay frees up assets -8- [*8] previously offset by the obligation and acts as an accession to wealth--i.e., income. United States v. Kirby Lumber Co., 284 U.S. 1, 2 (1931). Generally, when canceled debt creates income, the amount includible in income is equal to the face value of the discharged obligation minus any amount paid in satisfaction of the debt. Rios v. Commissioner, T.C. Memo. 2012-128, 2012 WL 1537910, at *4, aff’d, 586 F. App’x 268 (9th Cir. 2014); see Merkel v. Commissioner, 192 F.3d 844, 849 (9th Cir. 1999), aff’g 109 T.C. 463 (1997). The income is recognized for the year in which the debt is canceled. Montgomery v. Commissioner, 65 T.C. 511, 520 (1975). Petitioner argues that although the cancellation of debt generally creates reportable income her canceled debt is excludable. Some “accessions to wealth that would ordinarily constitute income may be excluded by statute or other operation of law.” Commissioner v. Dunkin, 500 F.3d 1065, 1069 (9th Cir. 2007), rev’g 124 T.C. 180 (2005). Even so, “given the clear Congressional intent to ‘exert * * * the full measure of its taxing power,’ * * * exclusions from gross income are construed narrowly in favor of taxation.” Id. (quoting Commissioner v. Glenshaw Glass Co., 348 U.S. 426, 429 (1955)) (citing Merkel v. Commissioner 192 F.3d at 848). 
Petitioner argues two exclusions apply to her cancellation of indebtedness income: section 108(a)(1)(E), which offers an exclusion when the -9- [*9] canceled debt is “qualified principal residence indebtedness”; and section 108(a)(1)(B), which provides an exclusion where the taxpayer is insolvent. We will examine both exclusions as applied to petitioner. I. Qualified Principal Residence Indebtedness Section 108(a)(1)(E) provides that gross income does not include amounts which would be includible as cancellation of indebtedness income if “the indebtedness discharged is qualified principal residence indebtedness”. Qualified principal residence indebtedness is defined as (1) acquisition indebtedness (2) with respect to the taxpayer’s principal residence. Sec. 108(h)(2), (5). Acquisition indebtedness is “incurred in acquiring, constructing, or substantially improving any qualified residence of the taxpayer” and must be secured by that residence. Sec. 163(h)(3)(B)(i). If only a portion of a discharged loan obligation meets the definition of qualified principal residence indebtedness, only the amount discharged which exceeds the nonqualified principal residence indebtedness is excludable. Sec. 108(h)(4). Petitioner’s primary residence was the Red River property until she sold it in March 2011 and established the Cedar Grove property as her new primary residence. Three of her Wells Fargo lines of credit--the 9471 loan, the 7231 loan, and the 5371 loan--were canceled in 2011. Petitioner does not argue that any of - 10 - [*10] these loans, which were obtained in 2007, were used to acquire or construct either the Red River property or the Cedar Grove property, both of which were solely acquired by petitioner in 2002. Petitioner instead argues that funds from these loans were used to substantially improve her primary residence. Petitioner provided no evidence regarding substantial improvements made to the Cedar Grove property. 
For the qualified principal residence indebtedness exclusion to apply, the debt must be used to acquire, construct, or substantially improve the taxpayer’s primary residence, and that residence must secure the loan. See secs. 108(h)(2), 163(h)(3)(B)(i). Both the 7231 loan and the 5371 loan were secured by the Cedar Grove property. Therefore, because these loans were not used to acquire, construct, or substantially improve the Cedar Grove property, they are not excludable from gross income as qualified principal residence indebtedness. Petitioner offered testimony on a number of improvements made to the Red River property before she obtained the 9471 loan. These improvements could not have been financed by a loan that had not materialized at the time they were made. Thus, they will be disregarded for purposes of determining whether any portion of the 9471 loan was qualified principal residence indebtedness. Petitioner spent $12,000 on driveway expansion and repair work at the Red River property in - 11 - [*11] 2008. We are satisfied from her testimony that this amount was paid with the 9471 loan. Accordingly, the portion of the 9471 loan that was used to finance the driveway project is qualified principal residence indebtedness. Petitioner also testified that she had custom drapes installed at the Red River property in 2007 for $10,000. We do not find that this expense constitutes a substantial improvement to the Red River property, and therefore it is not qualified principal residence indebtedness. We have determined that $12,000 of the 9471 loan was qualified principal residence indebtedness; however, the amount that petitioner may properly exclude is limited by section 108(h)(4). 
Section 108(h)(4) provides that where only a portion of a discharged loan is qualified principal residence indebtedness, the amount that may be excluded is only “so much of the amount discharged as exceeds the amount of the loan (as determined immediately before such discharge) which is not qualified principal residence indebtedness.” To apply this limitation we must determine how much of the loan was not qualified principal residence indebtedness. The original line of credit was for $250,000. We have determined that $12,000 was qualified principal residence indebtedness; thus $238,000 was not qualified principal residence indebtedness. Therefore, petitioner may exclude only $5,299 of the canceled 9471 loan from her income under the qualified - 12 - [*12] principal residence indebtedness exclusion ($243,299 canceled debt minus the $238,000 of the debt that was not qualified principal residence indebtedness). II. Insolvency Exclusion Petitioner argues that even if her cancellation of indebtedness income is not excludable as qualified principal residence indebtedness, it should be excludable because she was insolvent in 2011. Section 108(a)(1)(B) provides an exclusion from gross income of cancellation of indebtedness amounts where the taxpayer is insolvent at the time the discharge occurs. A taxpayer is insolvent by the amount her liabilities exceed the fair market value of her assets, determined immediately before the discharge of indebtedness. Sec. 108(d)(3). Respondent concedes that petitioner was insolvent by $42,852 and, therefore, admits that amount of cancellation of indebtedness income is excludable. In the case of a taxpayer who qualifies for the insolvency exclusion, the excluded amount cannot exceed the amount by which the taxpayer is insolvent. Sec. 108(a)(3). Petitioner suggests that respondent did not accurately account for her assets and liabilities when calculating her insolvency. 
However, petitioner stipulated respondent’s insolvency calculations and has offered no coherent argument as to why the calculations are in error. Accordingly, petitioner is entitled to an insolvency exclusion for her cancellation of indebtedness income of - 13 - [*13] $42,852. Petitioner may exclude a total of $48,151--representing $42,852 under the insolvency exclusion and $5,299 under the qualified principal residence indebtedness exclusion--of her cancellation of indebtedness income from her gross income.3 In reaching our holding, we have considered all arguments made, and, to the extent not mentioned above, we conclude they are moot, irrelevant, or without merit. Decision will be entered under Rule 155. 3 Sec. 108(a)(2)(C) provides that the insolvency exclusion does not apply to any discharge to which the qualified principal residence indebtedness exclusion applies unless the taxpayer elects the insolvency exclusion to apply in lieu of the qualified principal residence indebtedness exclusion. Petitioner made no such election; however, because three debts were discharged we may apply the insolvency exclusion to the loans not eligible for the qualified principal residence exclusion.
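For readers following the arithmetic, the section 108(h)(4) limitation and the combined exclusion can be sketched as follows (an illustration of the figures in the opinion, not part of the opinion itself):

```python
def qpri_exclusion(amount_discharged, original_loan, qualified_portion):
    """Sec. 108(h)(4): only the discharged amount exceeding the
    non-qualified portion of the loan is excludable."""
    non_qualified = original_loan - qualified_portion
    return max(0, amount_discharged - non_qualified)

# 9471 loan: $243,299 discharged; $12,000 of the $250,000 line was
# qualified principal residence indebtedness (the driveway project).
qpri = qpri_exclusion(243_299, 250_000, 12_000)
assert qpri == 5_299

insolvency = 42_852                     # amount conceded by respondent
total_excludable = qpri + insolvency    # $48,151, as the court holds
included = 355_488 - total_excludable   # remaining COD income: $307,337
```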
{ "pile_set_name": "FreeLaw" }
There are over 223,000 railroad grade crossings in the United States alone. Most of these crossings, especially those in rural areas, have only warning signs to alert motorists to the danger posed by an approaching train. Typical of railroad grade crossing warning signs is the familiar X-shaped "RAILROAD CROSSING" sign or "crossbuck." Warning signs, however, only alert motorists to the presence of a railroad crossing and do not alert them to the presence of an oncoming train. Often, a motorist may fail to see an approaching train because he is distracted or because his view of the train is obstructed by environmental conditions or darkness. Consequently, collisions between trains and automobiles at railroad crossings account for thousands of accidents each year, many of which result in extensive property damage and serious injury or death to motorists.

Known to the art are active railroad crossing warning systems utilizing the railroad tracks themselves to detect an approaching train and activate warning signal apparatus such as flashing lights and bells. These systems warn motorists when a train is detected at a predetermined distance from the crossing. However, present active warning systems do not take into account the speed of the train and thus make no allowance for the time it will take the train to reach the crossing. For example, a fast moving train may reach the crossing only a few seconds after it is detected, while a slow moving train may fail to reach the crossing until several minutes have passed. Motorists may become impatient waiting for slow moving trains to reach the crossing. Consequently, some motorists may begin to ignore the warnings and attempt to cross the tracks, possibly causing an accident should a fast moving train be encountered. Further, installation of current active warning systems may require the insulation and resetting of great lengths of track. 
Additionally, these systems may require the installation of expensive high voltage transformers, relays, and batteries for backup systems. Unfortunately, many rural crossings are not conducive to the installation of active warning systems that require AC electrical power and extensive grade preparation. Consequently, these crossings usually remain inadequately protected. High speed rail corridors being proposed across the United States will only exacerbate this problem. These corridors will require improved crossing warning systems to properly ensure the safety of both passengers and motorists.
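The timing problem described in this background is simple kinematics: the warning lead time a fixed detection point provides is the distance to the crossing divided by the train's speed. The numbers below are illustrative, not from the disclosure:

```python
def seconds_to_crossing(distance_m, speed_m_s):
    """Time for a train to travel from the detection point to the crossing."""
    return distance_m / speed_m_s

detection_distance = 1_000  # metres; hypothetical sensor placement

fast = seconds_to_crossing(detection_distance, 40)  # ~144 km/h express
slow = seconds_to_crossing(detection_distance, 5)   # ~18 km/h freight

# A fixed trigger distance gives a fast train only 25 s of warning, yet
# makes motorists wait over three minutes for a slow one -- the
# impatience problem the background describes.
assert round(fast) == 25 and round(slow) == 200
```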
{ "pile_set_name": "USPTO Backgrounds" }
The present invention relates to the field of information technology, including, more particularly, to systems and techniques for simplifying access to different applications. Organizations look to their information technology (IT) department to plan, coordinate, and manage the computer-related activities of the organization. An IT department is responsible for upkeep, maintenance, and security of networks. This may include analyzing the computer and information needs of their organizations from an operational and strategic perspective and determining immediate and long-range personnel and resource requirements. Monitoring the computer-related activities of the organization is an increasingly difficult task because the modern workplace is a complex blend of multiple users and multiple applications which combine into a complex and dynamically evolving environment. For example, at any given time multiple applications may be executing on multiple machines or “in the cloud.” It can be hard to follow what is going on in the cloud, for an application, for a given user. Many organizations do not have systems for tracking how resources are used by applications and users. Thus, there is a need to provide systems and techniques to manage computing resources.
{ "pile_set_name": "USPTO Backgrounds" }
Q: Move specific items to the end of a list

I have an ArrayList in Java:

    {"deleteItem", "createitem", "exportitem", "deleteItems", "createItems"}

I want to move all strings which contain "delete" to the end of the list, so I would get:

    {"createitem", "exportitem", "createItems", "deleteItem", "deleteItems"}

I can create two sublists - one for the words which contain 'delete' and one for the others - and then merge them, but I am searching for a more efficient way.

A: Use a custom Comparator:

    List<String> strings = Arrays.asList(
            "deleteItem", "createitem", "exportitem", "deleteItems", "createItems");
    Comparator<String> comparator = new Comparator<String>() {
        @Override
        public int compare(final String o1, final String o2) {
            if (o1.contains("delete") && !o2.contains("delete")) {
                return 1;
            } else if (!o1.contains("delete") && o2.contains("delete")) {
                return -1;
            }
            return 0;
        }
    };
    Collections.sort(strings, comparator);
    System.out.println(strings);

Collections.sort is stable, so the items within each group keep their original relative order.

A: If you want something efficient and need to remove elements at the beginning or middle of a List, I would suggest using a LinkedList instead of an ArrayList. That would avoid rewriting the underlying array for each remove operation. Then you simply iterate over the list, calling remove and addLast for any string that contains "delete". Of course, this is only OK if there is nothing preventing you from replacing your ArrayList with a LinkedList.
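For comparison (not from the original thread), the same stable-partition idea is a one-liner in Python: sorting on a boolean key moves the "delete" entries to the end, and sort stability preserves the relative order within each group.

```python
items = ["deleteItem", "createitem", "exportitem", "deleteItems", "createItems"]

# False sorts before True, and Python's sort is stable, so the
# non-"delete" entries keep their original relative order.
reordered = sorted(items, key=lambda s: "delete" in s)
assert reordered == ["createitem", "exportitem", "createItems",
                     "deleteItem", "deleteItems"]
```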
{ "pile_set_name": "StackExchange" }
‘Rock’n’Roll Bangkok’ to be witnessed at The Overstay in Pinklao. Featuring five bands of original and authentic R’n’R music in a most extravagant and priceless venue. New bands to be seen and loud tunes to be heard. ROCK’N’ROLL BANGKOK Sexellency – Diva Punk Dreaming Hot Rod – Hot Rod’n’Roll Planet Zorch – Psychobilly Prevolution BTS […]
{ "pile_set_name": "Pile-CC" }
Penzcraft

Hello! Welcome to the PenzCraft server! We are a very small community of people who enjoy playing Minecraft and like the aspect of vanilla SMP. We are looking for mature players to help build this small community.

Rules:
-No griefing or x-ray mods (there are protection and residence plugins to protect your stuff)
-Only PVP in the PVP area (the PVP area is anywhere north of Spawn). You can't kill in the south; if you do, please return their items. You also cannot kill in your house in the south.
-Add jpen somewhere in your app to show that you read the rules
-Be respectful. We have /ignore if someone is being rude. This is not a ban-able offense! We're all mature enough to take the high road.
-Do not harass people!
-Please try not to build so close to spawn; the world is very big, and you don't need to be right up against it.
-Must be at least 16+. No exceptions!!
-You must submit an application to build!

Application:
IGN Name
Age
Why I should add you to the server

Classes:
Guest (yellow): Can't do anything until approved
Member (Blue): After approved, can do most things with limitations
Supporter (Red): Once donated, receive perks
Mods (Blue): Help admin with things
Op (Green): JRPenza620
Admin (Purple): AdminJRP620

Anyone can come on the server, but you cannot break blocks until added to the whitelist!
{ "pile_set_name": "Pile-CC" }
DNA aptamers as analyte-responsive cation transporters in fluorogenic vesicles: signal amplification by supramolecular polymerization. We report that single-stranded (ss) DNA aptamers can be activated by counterions such as dodecylguanidinium (DG) to act as transporters in fluorogenic vesicles. However, their activity is independent of the presence or absence of the analyte. Dimerization into ds-DNA helices increases activity in an overadditive manner. Duplex disassembly in response to analyte binding is thus detectable as inactivation. Shortened and mismatched antiaptamers destabilize the active duplex, reduce activity, and increase the sensitivity for the analyte. Supramolecular polymerization of aptamer/antiaptamer duplexes with "sticky ends" is shown to further increase activity without losses in sensitivity for the analyte. The results demonstrate that the principles of DNA nanotechnology are directly applicable to membrane based sensing systems with high precision and fidelity.
{ "pile_set_name": "PubMed Abstracts" }
2020 Australian S5000 Championship

The 2020 Australian S5000 Championship is planned to be the inaugural season of the Australian S5000 Championship, run after a series of exhibition races the previous year. The series will be sanctioned by Motorsport Australia and promoted by the Australian Racing Group as part of the 2020 Shannons Nationals Motor Racing Series. The season is currently scheduled for 6 rounds, beginning in March at the Albert Park Circuit and ending on 13 September at Sandown Raceway.

Teams and drivers

The following teams and drivers are under contract to compete in the 2020 championship:

Race calendar

The proposed 2020 calendar was released on 29 October 2019, with six confirmed rounds, plus one non-championship round. All rounds will be held in Australia. Final scheduling of race dates is yet to be determined. The date for the inaugural "Bathurst International" event was revealed on 15 January 2020.

References
{ "pile_set_name": "Wikipedia (en)" }
KTLO-FM KTLO-FM 97.9 FM is a radio station licensed to Mountain Home, Arkansas. The station broadcasts an Adult Standards format and is owned by Mountain Lakes Broadcasting Corp. History On January 7, 1969, Mountain Home Broadcasting Corporation, the owner of KTLO (1240 AM), filed with the Federal Communications Commission to build a new FM radio station in Mountain Home. The construction permit was granted on July 1, 1970, and KTLO-FM began broadcasting at 98.3 MHz on January 11, 1971. $30,000 in new equipment was installed at the KTLO studios on Highway 5 to prepare for the launch of the stereo outlet. KTLO-FM broadcast from a hilltop tower located west of the studios and AM transmitter site. Early FM programming was in a block format, with contemporary and country music interspersed with news features. KTLO-AM-FM was sold in 1975 to four new investors for $400,000. By the mid-1980s, KTLO had settled into a middle-of-the-road music format known as "Stardust 98". The 1990s saw ownership and technical changes for KTLO-FM. The former began with a $775,000 sale of KTLO-AM-FM to Charles and Scottie Earls in late 1994. The Earls oversaw a major technical overhaul for the FM outlet: in 1996, it increased its power to 50,000 watts and relocated to 97.9 MHz from a transmitter on Crystal Mountain, with the programming remaining the same. The Earls divested their remaining shares in KTLO-AM-FM and KCTT-FM 101.7 to the Ward and Knight families in 2010 in a transaction that gave the Earls full control of KOMC-FM and KRZK in Branson, Missouri; the two families had previously been minority owners in Mountain Lakes. Among KTLO-FM's regular programs is Talk of the Town, an interview show. Talk of the Town had previously been hosted by Brenda Nelson, who retired after 34 years on air in 2009 after airing some 8,000 interviews. 
References

External links
KTLO-FM website
{ "pile_set_name": "Wikipedia (en)" }
Prognostic differences of the Mini Nutritional Assessment short form and long form in relation to 1-year functional decline and mortality in community-dwelling older adults receiving home care. To compare the prognostic value of the revised Mini Nutritional Assessment short form (MNA-SF) classification with that of the long form (MNA-LF) in relation to mortality and functional change in community-dwelling older adults receiving home care in Germany. Multicenter, 1-year prospective observational study. Community. Older adults (≥ 65) receiving home care (n = 309). Nutritional status (well nourished, at risk of malnutrition, malnourished) was classified using the MNA-SF and MNA-LF at baseline. Functional status was determined according to the Barthel Index of activities of daily living (ADLs) at baseline and after 1 year. Hazard ratios (HRs) and 95% confidence intervals (CIs) of mortality were calculated for MNA-SF and MNA-LF categories using stepwise Cox regression analyses. Repeated-measurements analysis of covariance was used to examine changes in ADL scores over time for MNA-SF and MNA-LF categories. MNA-SF classified 15% of the sample as malnourished and 41% as being at risk of malnutrition, whereas the MNA-LF classified 14% and 58%, respectively. During the follow-up year, 15% of participants died. The estimated hazard ratios (HR) for 1-year mortality were lower for MNA-SF than for MNA-LF categories (at risk of malnutrition: HR = 2.21, 95% confidence interval (CI) = 1.02-4.75 vs HR = 5.05, 95% CI = 1.53-16.58; malnourished: HR = 3.27, 95% CI = 1.34-8.02 vs HR = 8.75, 95% CI = 2.45-31.18). For MNA-SF categories, no differences in functional change were found. According to the MNA-LF, ADL decline tended to be greater in those at risk of malnutrition (7.1 ± 10.1 points) than in those who were well nourished (3.7 ± 10.1 points) and malnourished (4.9 ± 10.1 points). 
In this sample of older adults receiving home care, the MNA-LF was superior to the MNA-SF in predicting mortality and differentiating functional decline during 1 year of follow-up.
{ "pile_set_name": "PubMed Abstracts" }
Q: Redefine obeyspaces to newline

I want to typeset code snippets from different programming languages. I couldn't get listings to do what I want (one complete height of an empty line takes up too much space for my liking) and neither did I manage to define everything I want myself. I'd like to define a new environment where return calls \newline, and where an empty line calls \par (this one is already present in normal text mode) so that I can differentiate between them. In addition, every space inserted should be printed, but that is taken care of by \obeyspaces.

MWE:

\documentclass{article}

\newenvironment{code}{
  \ttfamily
  \parindent=0pt\parskip=5pt
  \obeyspaces\obeylines
}{}

\begin{document}
\begin{code}
text
 1space
  2spaces
new line

empty line before this line
\end{code}
\end{document}

I found \def\obeypar{\catcode`\^^M\active \let ^^M\par } and tried to define \obeylines (LaTeX tells me it's undefined), but since these are TeX primitives (?) they give an error. Can I tell LaTeX that this part should be treated as TeX? What am I missing, and where can I read about these things?

A: If I understand the question, you need to distinguish the empty and non-empty lines in the code environment. You can try the following:

\def\emptyline{\hbox to\hsize{\dotfill empty line\dotfill}}
%\def\emptyline{\vskip.7\baselineskip} % ... another alternative ...
\def\printemptyline#1{\def\par{\ifvmode\emptyline\fi\endgraf}\obeylines}

\begin{code}\printemptyline
text
 1space
  2spaces
new line

empty line before this line
\end{code}

This gives the result:
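Pulling the question's environment and the answer's macro into one file gives a compilable sketch. Assumptions not stated in the thread: the article class and a standard pdfLaTeX run; the leading spaces in the sample lines are illustrative, and the alternative \emptyline definition is left commented out as in the answer:

```latex
\documentclass{article}

% Environment from the question: typewriter font, obeyed spaces and line ends.
\newenvironment{code}{%
  \ttfamily
  \parindent=0pt\parskip=5pt
  \obeyspaces\obeylines
}{}

% From the answer: render an empty line as a dotted rule instead of a plain \par.
\def\emptyline{\hbox to\hsize{\dotfill empty line\dotfill}}
%\def\emptyline{\vskip.7\baselineskip} % ... another alternative ...
\def\printemptyline#1{\def\par{\ifvmode\emptyline\fi\endgraf}\obeylines}

\begin{document}
\begin{code}\printemptyline
text
 1space
  2spaces
new line

empty line before this line
\end{code}
\end{document}
```

The #1 of \printemptyline deliberately absorbs the first (active) end-of-line token after it, so the environment's first printed line is "text" rather than a blank line.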
{ "pile_set_name": "StackExchange" }
Hope. Joy.. Feelings cloaked as words. Tag challenge. Deep down, where you truly are, is the most rudimentary form of you. It is a will, a compassion, a purpose, a meaning; a purpose to bring meaning to this world that slowly loses its meaning. On the surface we... Continue Reading →

We live in a society where opportunities are more open to everyone, information is more accessible to everyone, ways of thinking are more widely accepted by everyone, and choices are more freely made by people. Or are they?... Continue Reading →

3:30 a.m. Woken up by the alarm and some discipline; the wee hours felt groggy, but I had to get ready for the run 2 hours away. Dragged myself out of bed, washed up with a half-awoken mind, saturated myself with... Continue Reading →

Wow... This is beyond my imagination, and the fact that I have made it this far without giving up on anything yet is indeed remarkable. Thanks! A big shout-out to the people who read my posts although I had hiatuses... Continue Reading →

What is life? A simple rhetorical question that we ask ourselves every time we need to. Most of us wander around in this realm of life without any definite purpose; most of us live simply just to find the... Continue Reading →

Falling down in life is inevitable, and the chances of people getting up are never on the bright side. Life is never a bed of roses, and we should never underestimate the repercussions of losing momentum completely. If we were... Continue Reading →

The sun hung overhead, shining mercilessly on me, scorching the air that I breathed. I was heaving heavily as I reached my 3rd km mark; my body was shouting at me to quit, to take a rest, and... Continue Reading →

do the right thing, at the right time, at the right place, for the right reason. - Mr. Leong

Youth is never waiting, and neither is time. I am not any younger than I was yesterday; I am letting time slip... Continue Reading →

Scratching my head after I had awakened from a sudden blackout while pondering my choices about giving consciousness to an Artificial Intelligence (A.I.) - my computer, Alexa. I did all my equations over stacks of papers, scribbled messily. The idea... Continue Reading →
{ "pile_set_name": "Pile-CC" }
Q: On CentOS Linux 7.4, cannot install the R package "httpuv" I am currently using CentOS Linux 7.4.1708 (Core). I have tried to install the package httpuv in R through various methods to no avail. It always ends with the error: CC src/unix/libuv_la-procfs-exepath.lo CC src/unix/libuv_la-proctitle.lo CC src/unix/libuv_la-sysinfo-loadavg.lo CC src/unix/libuv_la-sysinfo-memory.lo CCLD libuv.la libtool: error: require no space between '-L' and '-L/n/helmod/apps/centos7/Core/pcre/8.37-fasrc02/lib' make[1]: *** [libuv.la] Error 1 make[1]: Leaving directory `/tmp/Rtmp5Dj7hL/R.INSTALL5c046d96dc92/httpuv/src/libuv' make: *** [libuv/.libs/libuv.a] Error 2 ERROR: compilation failed for package ‘httpuv’ Does anyone have any thought as to what is going on here? Thanks. A: The previous answer is partially correct in that it identifies libuv as the missing dependency. In CentOS 7 you can add this with yum install libuv-devel, then attempt to install the package again with install.packages("httpuv") and provided that was your only issue, it should compile correctly.
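Condensing the accepted fix into a script. This is a sketch under a few assumptions not stated in the answer: a stock CentOS 7 host with root/sudo access, R's `Rscript` on the PATH, and `libuv-devel` available from your configured repositories (on many CentOS 7 setups it comes from EPEL, hence the first line):

```shell
# Install the system libuv headers/libraries that httpuv's bundled
# build was tripping over (run as root or via sudo).
yum install -y epel-release
yum install -y libuv-devel

# Then retry the package installation from R:
Rscript -e 'install.packages("httpuv", repos = "https://cloud.r-project.org")'
```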
{ "pile_set_name": "StackExchange" }
INTRODUCTION {#s1}
============

Hepatocellular carcinoma (HCC) is one of the most common malignant tumors and ranks as the third leading cause of cancer-related deaths worldwide \[[@R1], [@R2]\]. The resection rate of HCC has increased over recent decades due to improvements in early diagnostic methods and surgical techniques. However, the postoperative recurrence rate and overall survival (OS) remain unsatisfactory due to the limited response to various adjunctive therapies and the aggressive behavior of advanced-stage HCC \[[@R3]\]. Thus, an accurate understanding of the biological behavior of the tumor is critical for predicting the prognosis of HCC patients. Traditional prognostic factors related to the clinicopathological characteristics of the neoplasm after hepatic resection, such as tumor size, vascular invasion, tumor-node-metastasis (TNM) stage, functional liver reserve and Child-Pugh class, have limited clinical value for outcome prediction \[[@R4]\]. Therefore, a variety of other potential molecular predictive markers need to be identified.

A sequential process, including escape from the primary tumor site, local invasion, systemic transport through vascular or lymphatic vessels, adhesion to distant organs, re-colonization, and expansion, is believed to be involved in hepatic carcinogenesis. Epithelial to mesenchymal transition (EMT) is known to play a pivotal role in the dissemination of cancer cells and the growth of tumors: epithelial cells lose their polarity and cell-cell contacts due to repressed expression of E-cadherin and up-regulated expression of mesenchymal markers such as N-cadherin, vimentin and α--smooth muscle actin (α--SMA) \[[@R5]\]. EMT enhances not only the capacity for invasion and migration but also resistance to apoptosis and chemotherapy in cancer. EMT may alter the gene expression of epithelial cells through the activation of EMT-inducing transcription factors (EMT-TFs).
In this meta-analysis, we focused on the most prominent inducers of EMT such as the zinc finger E-box binding homeobox (ZEB1 and ZEB2), the zinc-finger transcriptional repressor (Snail and Slug), and the basic helix-loop-helix transcription factor (Twist1) through searching the published literature \[[@R6], [@R7]\], knowing that EMT-TFs are directly or indirectly involved in metastasis of malignant cells through a series of signaling cascades, including the wingless-related integration site(Wnt), serine/threonine-specific protein kinase (Akt), mitogen-activated protein kinase (MAPK) and signal transducer and activator of transcription 3 (STAT3) pathways \[[@R8], [@R9]\]. During the past decade, much research has begun noticing the correlation between the expression of EMT-TFs and the prognosis of HCC. However, the results are often unconvincing due to the limited sample sizes. Here, we sought to perform a meta-analysis to evaluate clinicopathological and prognostic significance of EMT-TFs overexpression in HCC patients, especially those with high incidences of recurrence after curative resection. RESULTS {#s2} ======= Study selection and patient characteristics {#s2_1} ------------------------------------------- The initial search identified 418 potentially relevant studies. After screening, 10 published studies including 1,334 patients were selected for this pooled analysis \[[@R10]--[@R19]\]. A flowchart depicting the selection of the eligible literature is shown in Figure [1](#F1){ref-type="fig"}. ![Flow chart of literature selection for the meta-analysis](oncotarget-08-59500-g001){#F1} All the included studies were retrospectively analyzed, with the sample size ranging from 40 to 323 (median 133). The overexpression of EMT-TFs was reported in 662 (49.6%) of the 1,334 included patients. The highest positive expression rate was Twist1, accounting for 60.3%, followed by Snail (51.9%), ZEB2 (50.3%), ZEB1 (43.6%) and Slug (29.4%). 
These studies were published between 2007 and 2015. Asia was the only source region of the 10 included studies: 9 studies were from China \[[@R11]--[@R19]\] and one from Japan \[[@R10]\]. The Newcastle-Ottawa Quality Assessment Scale was applied to assess these studies; the quality scores ranged from 5 to 8 (median 6.5), indicating a relatively high study quality. Characteristics of the included studies are listed in Table [1](#T1){ref-type="table"}. All the studies focused on OS, with a median follow-up period of at least 48 (48--80) months. The definition of EMT-TF positive expression was based on immunohistochemistry (IHC) or western blot analysis (WB) evaluation in all the eligible articles, expressed as the percentage of positive cells and/or staining intensity. Hazard ratios (HRs) and 95% confidence intervals (CIs) were directly recorded in 8 studies \[[@R10]--[@R12], [@R15]--[@R19]\] and could be inferred from two other studies using Tierney's methods, of which one \[[@R14]\] was calculated from the variance and *P* value, and the other \[[@R13]\] was estimated only from Kaplan-Meier survival curves.
###### Characteristics of the included studies

| EMT-TF | Author | Year | Country | Cases | Positive, n (%) | Treatment | Antibody (type; dilution; source) | Method | Outcome | MFu (months) | NOS score |
|--------|--------|------|---------|-------|-----------------|-----------|-----------------------------------|--------|---------|--------------|-----------|
| ZEB1 | Motoyuki | 2013 | Japan | 108 | 23 (21.3) | Surgery | goat polyclonal; 1:100; SantaCruz, CA, USA | IHC | OS | 60 | 8 |
| ZEB1 | Zhou | 2011 | China | 110 | 72 (65.5) | Surgery | NA; NA; SantaCruz, CA, USA | WB | OS | 60 | 7 |
| ZEB2 | Cai | 2012 | China | 248 | 150 (60.5) | Surgery | rabbit polyclonal; 1:100; Sigma, St.Louis, USA | IHC | OS | 80 | 8 |
| ZEB2 | Yang | 2015 | China | 92 | 21 (22.8) | Surgery | rabbit polyclonal; 1:100; Abcam, Cambridge, UK | IHC | OS | 60 | 5 |
| Snail1 | Zhou | 2014 | China | 323 | 161 (49.8) | Surgery | NA; NA; Novus, USA | IHC | OS | 60 | 6 |
| Snail1 | Zhao | 2012 | China | 97 | 57 (58.8) | Surgery | NA; 1:250; SantaCruz, CA, USA | IHC | OS | 60 | 7 |
| Slug | Zhang | 2013 | China | 119 | 35 (29.4) | Surgery | NA; NA; Danvers, MA, USA | IHC | OS | 60 | 8 |
| Twist1 | Zhang | 2010 | China | 100 | 70 (70.0) | Surgery | rabbit polyclonal; 1:50; SantaCruz, CA, USA | IHC | OS | 76 | 7 |
| Twist1 | Zhao | 2011 | China | 97 | 51 (52.6) | Surgery | NA; 1:100; SantaCruz, CA, USA | IHC | OS | 60 | 7 |
| Twist1 | Niu | 2007 | China | 40 | 22 (55.0) | Surgery | rabbit monoclonal; 1:50; SantaCruz, CA, USA | IHC | OS | 48 | 6 |

Abbreviations: EMT-TFs = epithelial to mesenchymal transition-inducing transcription factors; NA = not available; IHC = immunohistochemistry; WB = western blot; MFu = median follow-up; OS = overall survival; NOS = Newcastle-Ottawa Scale.

Evidence synthesis {#s2_2}
------------------

EMT-TFs and OS in HCC {#s2_3}
---------------------

The pooled HR for OS indicated that EMT-TF positive expression was associated with poor OS \[HR = 1.71; 95% CI: 1.40--2.08; *p* \< 0.00001\] in HCC, with a statistically significant 71% increase in the risk for mortality (Figure [2](#F2){ref-type="fig"}).
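The pooled estimate and the heterogeneity assessment discussed in the next paragraph (Cochran's Q, the I^2^ index, and the fixed- vs. random-effects choice) follow standard inverse-variance formulas. A minimal sketch of those calculations; the per-study log-HRs and standard errors below are hypothetical illustrations, not the actual study-level data from Table 1:

```python
import math

def pool_fixed(log_hrs, ses):
    """Inverse-variance fixed-effect pooling of log hazard ratios."""
    w = [1 / s**2 for s in ses]
    est = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    se = math.sqrt(1 / sum(w))
    return est, se

def heterogeneity(log_hrs, ses):
    """Cochran's Q and the I^2 statistic (as a percentage)."""
    est, _ = pool_fixed(log_hrs, ses)
    w = [1 / s**2 for s in ses]
    q = sum(wi * (y - est)**2 for wi, y in zip(w, log_hrs))
    df = len(log_hrs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

def pool_random(log_hrs, ses):
    """DerSimonian-Laird random-effects pooling of log hazard ratios."""
    q, _ = heterogeneity(log_hrs, ses)
    w = [1 / s**2 for s in ses]
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1 / (s**2 + tau2) for s in ses]
    est = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return est, se

# Hypothetical study-level inputs: log-HRs and their standard errors.
log_hrs = [math.log(h) for h in (1.4, 2.1, 1.6, 1.3, 2.5)]
ses = [0.30, 0.25, 0.35, 0.40, 0.45]

est, se = pool_random(log_hrs, ses)
lo, hi = math.exp(est - 1.96 * se), math.exp(est + 1.96 * se)
print(f"pooled HR = {math.exp(est):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

When Q does not exceed its degrees of freedom, tau² collapses to zero and the random-effects result reduces to the fixed-effect one, which is why the model choice only matters for heterogeneous subgroups like those flagged in Table 2.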
Given that the heterogeneity test showed a *P* value of 0.08 and an I^2^ index of 41%, we considered that there may be relatively substantial heterogeneity between these studies, and therefore we used the random-effects model. ![Forest plot of comparison between EMT-TF overexpression and EMT-TFs low/negative expression on OS in HCC patients](oncotarget-08-59500-g002){#F2} Figure [3](#F3){ref-type="fig"} shows the impact of individual EMT-TFs on the survival of HCC patients. The highest HR for OS was for Slug \[HR = 2.12; 95% CI: 1.16--3.86; *p* = 0.01\]; as this transcription factor was reported in only one study, the result should be interpreted with caution. Apart from ZEB2 \[HR = 1.23; 95% CI: 0.59--2.57; *p* = 0.58\], the HRs for Twist1 \[HR = 2.04; 95% CI: 1.50--2.78; *p* \< 0.00001\], Snail1 \[HR = 1.87; 95% CI: 1.41--2.48; *p* \< 0.0001\], and ZEB1 \[HR = 1.61; 95% CI: 1.12--2.31; *p* = 0.01\] indicated that positive expression correlated with poor OS. ![Forest plot describing subgroup analysis of the association between individual EMT-TF overexpression and OS in HCC patients](oncotarget-08-59500-g003){#F3}

EMT-TFs and clinicopathological features in HCC {#s2_4}
-----------------------------------------------

In the meta-analysis, the pooled data revealed that the associations between EMT-TFs and the following clinicopathological features were significant: TNM stage \[III+IV vs. I+II; OR = 2.18; 95% CI: 1.08--4.38; *p* = 0.03\], histological differentiation \[poor vs. well+moderate; OR = 1.96; 95% CI: 1.22--3.17; *p* = 0.006\], intrahepatic metastasis \[pos vs. neg; OR = 2.94; 95% CI: 1.56--5.54; *p* = 0.0009\] and vascular invasion \[pos vs. neg; OR = 3.09; 95% CI: 1.67--5.73; *p* = 0.0003\]. Therefore, the findings from the subgroup analysis were consistent with the conclusion that EMT-TFs are a poor prognostic factor. However, no significant association between EMT-TF overexpression and age (\> 55 vs.
≤ 55), gender (male vs. female), tumor size (\> 5 cm vs. ≤ 5 cm), cirrhosis (yes vs. no), hepatitis B surface antigen (pos vs. neg) or AFP (\> 20 ng/ml vs. ≤ 20 ng/ml) was found. The details of the subgroup analysis results are summarized in Table [2](#T2){ref-type="table"}.

###### Correlation of EMT-TFs overexpression and clinicopathological features in HCC

| Variable | No. of studies | No. of patients | OR (95% CI) | *P* | I^2^ (%) | *P* (heterogeneity) | Model used |
|----------|----------------|-----------------|-------------|-----|----------|---------------------|------------|
| TNM stage (III+IV vs. I+II) | 6 | 766 | 2.18 (1.08--4.38) | 0.03 | 76 | 0.0008 | random |
| Differentiation (poor vs. well+moderate) | 5 | 435 | 1.96 (1.22--3.17) | 0.006 | 0 | 0.58 | fixed |
| Intrahepatic metastasis (pos vs. neg) | 3 | 258 | 2.94 (1.56--5.54) | 0.0009 | 0 | 0.86 | fixed |
| Vascular invasion (pos vs. neg) | 2 | 218 | 3.09 (1.67--5.73) | 0.0003 | 0 | 0.98 | fixed |
| Age (\> 55 vs. ≤ 55) | 2 | 219 | 1.27 (0.71--2.26) | 0.42 | 48 | 0.16 | fixed |
| Gender (male vs. female) | 7 | 817 | 1.32 (0.89--1.96) | 0.17 | 0 | 0.80 | fixed |
| Tumor size (\> 5 cm vs. ≤ 5 cm) | 6 | 687 | 1.09 (0.62--1.91) | 0.76 | 63 | 0.02 | random |
| Cirrhosis (yes vs. no) | 3 | 458 | 0.83 (0.55--1.25) | 0.38 | 50 | 0.13 | fixed |
| HBSAg (pos vs. neg) | 4 | 498 | 1.30 (0.75--2.26) | 0.35 | 0 | 0.89 | fixed |
| AFP (\> 20 ng/ml vs. ≤ 20 ng/ml) | 4 | 483 | 0.90 (0.42--1.96) | 0.80 | 68 | 0.02 | random |

Abbreviations: HBSAg = hepatitis B surface antigen; AFP = alpha fetoprotein; pos = positive; neg = negative.

Assessment of possible publication bias and sensitivity analysis {#s2_5}
----------------------------------------------------------------

The possible publication bias among these eligible studies was evaluated by applying the Begg\'s funnel plot and the Egger\'s test. As illustrated in Figure [4](#F4){ref-type="fig"}, visual assessment of the funnel plot shapes revealed no obvious publication bias for OS, and evaluation using Egger\'s test also failed to discover solid evidence for significant publication bias (*t* = 1.33; *p* = 0.220, Figure not shown).
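Egger's test, quoted above (*t* = 1.33; *p* = 0.220), is a weighted regression of the standardized effect (logHR/SE) on precision (1/SE); a non-zero intercept suggests small-study or publication bias. A minimal pure-Python sketch of that regression; the study effects below are hypothetical, since the paper reports only the resulting statistic:

```python
import math

def egger_test(effects, ses):
    """Egger's regression asymmetry test.

    Regress the standardized effect (effect/SE) on precision (1/SE) by
    ordinary least squares; return the intercept and its t statistic,
    to be compared against a t distribution with n - 2 df.
    """
    y = [e / s for e, s in zip(effects, ses)]
    x = [1 / s for s in ses]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual variance and the intercept's standard error.
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    s2 = sum(r ** 2 for r in resid) / (n - 2)
    se_int = math.sqrt(s2 * (1 / n + mx ** 2 / sxx))
    return intercept, intercept / se_int

# Hypothetical study effects (log-HRs) and standard errors:
effects = [0.34, 0.74, 0.47, 0.26, 0.92, 0.53]
ses = [0.30, 0.25, 0.35, 0.40, 0.45, 0.28]
b0, t = egger_test(effects, ses)
print(f"intercept = {b0:.3f}, t = {t:.2f}")
```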
When the number of studies was smaller than 10, publication bias was not investigated because of the low sensitivity of the quantitative and qualitative tests \[[@R20]\]. In such cases, we performed the sensitivity analysis by removing one study at each time. The result demonstrated that not a single study had remarkable impact on the overall HRs. Thus, the above results further verified that the general conclusions of this current meta-analysis were credible. ![Publication bias using Begg\'s funnel plots for OS](oncotarget-08-59500-g004){#F4} DISCUSSION {#s3} ========== Early diagnostic and surgical techniques of HCC have been improved greatly within the past decade. However, recurrence and metastasis remains one of the major threats and the most critical aspect of HCC, because it is the key event causing most cancer-related deaths \[[@R1], [@R21]\]. EMT is considered to be one of the key initial steps in cancer development, progression and metastasis, knowing that EMT can induce dissemination of malignant cells, thereby increasing cell migration and invasion \[[@R22], [@R23]\]. The concept of EMT and its reverse process, mesenchymal to epithelial transition (MET), was first recognized in the field of embryology, and is now known to play diverse roles in embryonic development and a series of physiological processes such as gastrulation, neural tube formation, tissue homeostasis, wound healing, stem cell plasticity, and organ fibrosis \[[@R24], [@R25]\]. There are growing studies reporting that EMT is involved not only in tumor metastasis and progression but in cancer recurrence and resistance to conventional adjuvant therapies \[[@R26], [@R27]\]. Recent studies found that EMT-TFs were overexpressed in cancer patients, suggesting that numerous EMT-inducing transcription factors may act as primary molecular switches to induce the EMT process by activating or inhibiting the known signaling pathways \[[@R8], [@R28]\]. 
A meta-analysis of 3218 patients from 14 studies published in 2016 demonstrated that the overexpression of EMT-TFs was a poor prognostic factor for metastatic breast cancer \[HR = 1.72; 95% CI: 1.53--1.93; *p* = 0.001\] \[[@R29]\]. Accordingly, there has been great interest in confirming whether high EMT-TF expression could be used as a potential prognostic biomarker for HCC to help guide surveillance and clinical decision-making regarding adjunctive therapies. However, there has been no comprehensive analysis from which to draw a generally accepted conclusion. In the current meta-analysis, we collected all data available from published articles to assess, for the first time, the correlation between EMT-TF expression and HCC prognosis after resection. The pooled HR results suggest that the up-regulated expression of EMT-TFs (ZEB1, Snail, Slug, and Twist1) may contribute to the adverse prognosis of HCC. In addition, our study also indicates the predictive value of high EMT-TF expression for HCC metastasis and progression. According to the results of the evidence synthesis, we consider high EMT-TF expression a new biomarker and risk factor for the prediction of HCC outcome after resection. There are some possible explanations for the close association of high EMT-TF expression with poor prognosis in HCC. First of all, EMT-TFs together with other factors can specifically bind to the E-box DNA sequences within the E-cadherin promoter and recruit transcriptional corepressors and histone deacetylases, thereby repressing E-cadherin expression and driving the acquisition of mesenchymal markers such as N-cadherin, Vimentin, and α--SMA \[[@R30], [@R31]\]. These factors then regulate the EMT process directly or indirectly by activating or inactivating the known signaling pathways.
Second, recent evidence has supported the discovery that EMT-TFs overexpression is closely linked to the induction of cancer stem cell (CSC) phenotype that possesses self-renewal properties in various types of human cancers, thus enhancing tumorigenesis and helping resistance to chemo/radiation therapy associated with CSC characteristics \[[@R32]--[@R34]\]. Third, the up-regulated expression of EMT-TFs induces tumor invasion and metastasis. For instance, Snail was found to induce cancer cell invasion through regulating the expression of MMP proteins in HCC \[[@R35]\]. In addition, EMT-TFs also can regulate angiogenic factors and hypoxia-inducible factor-1 alpha (HIF-1α) to promote tumor angiogenesis in HCC \[[@R36]\]. Finally, several studies have also suggested that EMT-TFs play a critical role in the regulation of anti-apoptosis and anti-cancer drug resistance \[[@R37], [@R38]\]. Consequently, EMT, CSC generation, tumor invasion and metastasis, and angiogenesis are closely associated with the transformation of cancer cells to more aggressive behavior. These roles of EMT-TFs may help partially explain why HCC patients with EMT-TF overexpression had significantly shorter OS than those with EMT-TF low expression. During EMT, tumor cells gradually lose the epithelial markers (E-cadherin, tight junction protein-1, laminin and cytokeratin) and obtain the expression of mesenchymal markers (N-cadherin, Vimentin and α--SMA). Among them, one of the essential hallmarks of EMT is the loss of E-cadherin function, which is really important to adequately understand the whole regulation mechanism of EMT-TFs as the upstream molecules of E-cadherin \[[@R39]\]. A variety of signaling pathways are triggered by EMT-TFs, including the Akt, MAPK, STAT3, transforming growth factor beta (TGFβ), β-catenin, Wnt, Ras, and Notch pathways. 
In addition to the classical triggering signaling pathways, some signaling molecules such as epidermal growth factor (EGF), nuclear factor kappa B (NF-κB), fibroblast growth factor (FGF), neurotrophic receptor tyrosine kinase B (Tr-kB), hepatocyte growth factor (HGF), steroid receptor co-activator (SRC)-3 protein, tumor necrosis factor alpha (TNF-α), and HIF-1α are all activated \[[@R8], [@R9]\]. The coordination of these factors results in the repression of E-cadherin expression. Thus, success in targeting EMT-TFs via RNA interference (RNAi) technology or specific chemotherapeutic drugs would provide a new approach for the control of cancer metastasis.

There are several limitations in this meta-analysis, even though efforts have been made to comprehensively evaluate the clinicopathological and prognostic significance of EMT-TF overexpression in HCC. First, differences in antibodies, dilutions and cut-off values may affect the accuracy of assessing positive EMT-TF expression. Hence, a large multicenter clinical study using the same antibody and cut-off values may be helpful to obtain more credible results. Second, there may be potential language bias in this meta-analysis, because the search strategy was limited to studies published in English only. In addition, the eligible articles included only Asian populations, so the population distribution lacks homogeneity. Third, not all the studies directly provided HRs and 95% CIs, so the data extracted using Tierney's methods may also introduce imprecision relative to the original data. Despite these limitations, the results of our meta-analysis initially support the hypothesis that EMT-TF overexpression is associated with malignant phenotype features and poor postoperative OS of HCC patients in Asian populations.
More investigations are needed in order to fully understand the pivotal role of each individual EMT-TF so as to provide new insights into tumor metastasis and progression, and lay a theoretical foundation for innovating target-specific drug therapies and molecular prognostic biomarkers of HCC after resection. MATERIALS AND METHODS {#s4} ===================== Literature search strategy {#s4_1} -------------------------- A comprehensive systematic literature search in the PubMed, Web of Science database and Cochrane Library was performed to retrieve all the relevant articles (deadline until December 31, 2016 ), with the limit to "human" and papers published in English. The initial electronic search strategies included using the random combination of following Medical Subject Heading (MeSH) search terms: "ZEB, Snail, Slug or Twist1 ", "hepatocellular carcinoma", and "prognosis". In addition, reference lists from identified primary articles were then once again manual cross-searched to identify any studies that were omitted by the search strategies. In the situation when multiple studies overlapped patient cohorts, only the published research with the largest sample size was included in the analysis. Data extraction {#s4_2} --------------- The titles and abstracts of all candidate articles were read independently by two reviewers (TW. and TZ.), and irrelevant ones were subsequently excluded according to the PICO principle \[[@R40]\]. Then, articles that could not be classified based on the abstracts alone were required for full-text scrutinization. Finally, eligible studies were carefully selected according to the following inclusion criteria. If any disagreement or discrepancy occurred in the eligibility of studies, the two reviewers would conduct a debate or consult the third reviewer (YZ.) until a consensus was reached. Quality assessment was conducted for each of the acceptable studies by two reviewers independently (YZ. and TZ.) 
using the Newcastle--Ottawa Quality Assessment Scale (NOS) \[[@R41]\]. Parameters were extracted from each included paper, including the first author\'s name, publication year, country, number of total patients, cases with positive expression rates of EMT-TFs, TNM stage, follow-up period, and the HRs, 95% CIs and *P*-values for OS. OS was defined as the period from the time of confirmed diagnosis of HCC to death, regardless of whether the patients received treatment or not. If the HRs were not directly shown in the article, we tried to contact the authors for additional data. If the authors did not reply, we extracted data from Kaplan-Meier survival curves by applying Engauge Digitizer V4.1, and Tierney\'s method was then used to calculate the HRs and 95% CIs \[[@R42]\].

Criteria for inclusion and exclusion {#s4_3}
------------------------------------

To be eligible for selection in this meta-analysis, studies were required to fulfill the following criteria: (1) patients were histologically confirmed as HCC; (2) the expression of EMT-TFs (ZEB1, ZEB2, Snail, Slug, Twist1) was measured by IHC or WB; (3) studies provided the correlation between EMT-TFs and OS; (4) studies reported HRs with 95% CIs, or allowed calculation of these statistics from the data and survival curves presented; and (5) articles were published as papers in English. Letters, reviews, editorials, abstracts, expert opinions, experiments performed on cell lines or animals, and articles that had inadequate original survival data for further analysis were excluded from this meta-analysis.

Statistical analysis {#s4_4}
--------------------

All the statistical analyses in the meta-analysis were performed with Review Manager 5.3 (The Cochrane Collaboration, Oxford, UK) and Stata 12 (Stata Corporation, College Station, TX, USA).
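For the studies that reported an HR with a 95% CI, the quantities RevMan needs (logHR and its standard error) follow directly from the CI on the log scale; this back-calculation is one piece of the Tierney toolkit. A minimal sketch, assuming the CI is symmetric on the log scale (the usual Cox-model case); the example uses the paper's own pooled estimate purely as illustrative input:

```python
import math

def loghr_se_from_ci(hr, ci_low, ci_high, z=1.959964):
    """Recover log(HR) and its SE from a reported HR and 95% CI.

    Assumes the CI was computed as exp(logHR +/- z * SE), i.e. it is
    symmetric on the log scale; z defaults to the 97.5% normal quantile.
    """
    log_hr = math.log(hr)
    se = (math.log(ci_high) - math.log(ci_low)) / (2 * z)
    return log_hr, se

# Illustration with the pooled estimate HR = 1.71 (95% CI 1.40-2.08):
log_hr, se = loghr_se_from_ci(1.71, 1.40, 2.08)
print(f"logHR = {log_hr:.3f}, SE = {se:.3f}")
```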
For the pooled analysis of the correlation between EMT-TF expression and clinical prognosis, HRs and 95% CIs for OS were combined to calculate the effect values (logHR and SE). For the impact of EMT-TFs on the clinicopathologic parameters of HCC, pooled ORs and 95% CIs were used. Statistical heterogeneity was evaluated with the chi-squared test and the *I*^2^ statistic; a chi-squared *P* value \< 0.10 indicated statistically significant heterogeneity \[[@R43]\]. Pooled effects were calculated using either a fixed-effect or a random-effects model \[[@R44]\]. A pooled HR \> 1 indicated a higher risk of poor survival. Potential publication bias was assessed with Egger's test and Begg's funnel plots \[[@R45]\]. Sensitivity analysis was performed by excluding each study in turn. Two-tailed *P* values \< 0.05 were considered statistically significant. **Authors' contributions** Conception/Design: Yanming Zhou. Provision of study materials: Tao Wan, Tianwei Zhang, Xiaoying Si. Collection and/or extraction of data: Xiaoying Si, Yanming Zhou. Data analysis and statistical guidance: Tao Wan, Tianwei Zhang, Yanming Zhou. Final approval of the manuscript: Tao Wan, Tianwei Zhang, Xiaoying Si, Yanming Zhou. **CONFLICTS OF INTEREST** The authors indicated no financial relationships. **FUNDING** The study was supported by the Foundation of the Health and Family Planning Commission of Fujian Province of China (Project no. 2013-ZQN-JC-31). HCC : hepatocellular carcinoma EMT-TFs : epithelial-to-mesenchymal transition-inducing transcription factors ZEB : zinc finger E-box binding homeobox SNAI : zinc-finger transcriptional repressor Twist : basic helix-loop-helix transcription factor OR : odds ratio HR : hazard ratio OS : overall survival CI : 95% confidence interval IHC : immunohistochemistry WB : western blot analysis CSC : cancer stem cell MeSH : Medical Subject Heading NOS : Newcastle--Ottawa Scale
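The fixed-effect inverse-variance pooling of log HRs described in the statistical analysis section can be sketched as follows. This is a minimal illustration of the standard formulas (SE recovered from a symmetric 95% CI on the log scale, weights w_i = 1/SE_i^2); the numbers in the usage note are illustrative, not data from the included studies.

```python
import math

# Fixed-effect inverse-variance pooling of hazard ratios: a minimal sketch
# of the pooling step described above, not the exact RevMan/Stata routine.

def se_from_ci(lower, upper):
    """SE of log HR recovered from a 95% CI: (ln U - ln L) / (2 * 1.96)."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.96)

def pooled_hr(studies):
    """Pool (HR, CI lower, CI upper) tuples with weights w_i = 1 / SE_i^2."""
    num = den = 0.0
    for hr, lo, hi in studies:
        w = 1.0 / se_from_ci(lo, hi) ** 2
        num += w * math.log(hr)
        den += w
    return math.exp(num / den)
```

A single study pools to its own HR, and pooling several studies yields an inverse-variance-weighted geometric mean, e.g. pooled_hr([(1.5, 1.1, 2.0), (3.0, 2.0, 4.5)]) lies between 1.5 and 3.0.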
{ "pile_set_name": "PubMed Central" }
Inhibition by ammonium ion of germination of unactivated spores of Bacillus cereus T induced by l-alanine and inosine. Studies were carried out on the inhibitory effect of NH4+ on germination of spores of Bacillus cereus T induced by L-alanine and inosine. Kinetic analysis showed that NH4+ inhibited the germination competitively. Its inhibitory effect was greater when the unactivated spores had been preincubated with L-alanine. NH4+ did not inhibit the response of unactivated spores to L-alanine during preincubation. These results suggest that L-alanine sensitizes the spores to the inhibitory effect of NH4+.
{ "pile_set_name": "PubMed Abstracts" }
Cuc Phuong is very diverse in flora species composition. With an area equal to just 0.07 % of the country's total, it accounts for 57.93 % of the nation's flora families, 36.09 % of its genera and 17.27 % of its species. Cuc Phuong NP has 20,473 ha of forest out of a total land area of 22,200 ha (92.2 %). The vegetation cover is evergreen tropical rainforest; according to Thai Van Trung (1976), Cuc Phuong belongs to the type of closed humid evergreen tropical rainforest. Cuc Phuong retains a considerable area of primary forest, mainly on the limestone mountains and in the valleys at the centre of the NP. It is this special location that gives the park its rich species composition. Cuc Phuong contains many non-indigenous plant species established alongside many indigenous ones. The indigenous species are represented by the Lauraceae, Magnoliaceae and Meliaceae families, while species of the Dipterocarpaceae family represent non-indigenous arrivals from the warmer southern region, and species of the Fagaceae represent those coming from the north. Survey results of recent years (2008) recorded 2,234 species in 917 genera and 231 families. Many of them are of high value: 430 medicinal plant species, 229 edible plant species, 240 species that can be used as medicine or dye, 137 species that can provide tannin, etc.; 13 species are listed in the Vietnam Red Data Book 2000 and the IUCN Red List 2004. Some outstanding species are Dalbergia tonkinensis, Parashorea chinensis, Erythrophloeum fordii and Nageia fleyri. There are 11 endemic plant species, including Camellia cucphuongensis, Begonia cucphuongensis, Pistacia cucphuongensis, Amorphophallus dzui, Vietorchis aurea, Carex trongii, etc. 
The vertebrate fauna of Cuc Phuong is rich and diverse. For mammals, there are 133 species, accounting for 51.35 % of the total nationwide (259 species). For birds, Cuc Phuong NP is assessed by BirdLife International as an Important Bird Area of Vietnam; 336 species have been recorded here, accounting for 39.25 % of the total bird species nationwide (856 species). For reptiles, Cuc Phuong NP has 76 species, accounting for 26.67 % of the nation's total figure (296 species). For amphibians, Cuc Phuong NP has 46 species, accounting for 28.39 % of the nation's total figure (162 species). For fish, Cuc Phuong NP has 66 species, accounting for 10.81 % of the nation's total figure of freshwater fish species (610 species). Of the 659 vertebrate species in total, 85 are recorded in the Vietnam Red Book, and some are Cuc Phuong endemics, such as Trachypithecus francoisi delacouri, Callosciurus erythraeus cucphuongensis, Tropidophorus cucphuongensis, Rana maosonensis, Pterocryptis cucphuongensis, etc. - Invertebrate fauna: The invertebrate fauna in Cuc Phuong is even more abundant and diverse. In the period 2000-2008, about 7,400 invertebrate samples were collected, comprising 1,670 species and species types of insect, 14 crustacean species, 18 species and types of myriapod, 16 arachnid species, 52 species and species types of annelid, 129 species and species types of mollusc, and many other lower animals. However, because the lower animals have received little attention and have rarely been studied, the figures given are preliminary only. In reality the invertebrates of Cuc Phuong are extremely rich and diverse, and the real figures are estimated to be much higher. - Palaeontology: In addition to the relics and fossils of prehistoric animals discovered, excavated and published earlier, a marine animal fossil was found in Cuc Phuong National Park in 2000. 
The fossils are exposed on the surface of the limestone rock and occur in the Dong Giao formation of the Middle Triassic (T2), about 200 to 230 million years old; they include at least 12 intact vertebrae, 10 ribs and some other bones. The fossil has been preliminarily identified as Placodontia (reptiles with blade-like teeth). According to scientists, this is the first discovery of Placodontia in Southeast Asia. Socio-economic situation - Ethnicity: Cuc Phuong National Park is located in the region of 14 communes, home to two main ethnic groups. The Muong account for 76.6 % of the total population in the region and the Kinh for 23.4 %. The two groups form the oldest communities living here, both economically and culturally. In recent years, in the process of renovation, the market economy has penetrated the Muong villages, which are gradually losing their cultural characteristics. However, some villages in remote areas still retain customs and festivals, such as the gong festival, that carry the imprint of Muong culture. These intangible cultural values are human resources that can serve to promote the development of eco-tourism, culture and the humanities in the future.
{ "pile_set_name": "Pile-CC" }
Q: Ajax in MVC @Ajax. Helpers and in JQuery.Ajax I know a bit of Ajax, and now I am learning MVC + jQuery. I want to know whether the two Ajax mechanisms in MVC (the Ajax helpers and jQuery.ajax) are built on the same base. Are they the same as the normal Ajax I learned using XMLHttpRequest (xhr)? If not, what is the preferred way of doing it? I am new to this, and a bit confused, so please don't mind if my question doesn't make sense to you. Thank you, Tom (edited) I wrote some MVC3 Razor:

<div id="MyAjaxDiv">@DateTime.Now.ToString("hh:mm:ss tt")</div>
@Ajax.ActionLink("Update", "GetTime", new AjaxOptions {
    UpdateTargetId = "MyAjaxDiv",
    InsertionMode = InsertionMode.Replace,
    HttpMethod = "GET"
})

When I open up the source code in notepad, I get:

<div id="MyAjaxDiv">06:21:10 PM</div>
<a data-ajax="true" data-ajax-method="GET" data-ajax-mode="replace" data-ajax-update="#MyAjaxDiv" href="/Home/GetTime">Update</a>

So, because I have only included ~/Scripts/jquery.unobtrusive-ajax.min.js, the MVC helpers must be using jQuery to work. I had the impression that they might need MicrosoftAjax.js, MicrosoftMVCAjax.js etc., but it doesn't look like it now. Are those for MVC2 and aspx pages?

A: Here's an excerpt from an MVC book: MVC version 3 introduced support for jQuery Validation, whereas earlier versions relied on JavaScript libraries that Microsoft produced. These were not highly regarded, and although they are still included in the MVC Framework, there is no reason to use them. jQuery has really become the standard for Ajax-based requests. You may still use your "XMLHttpRequest xhr" way, but jQuery has made it easier to perform the same thing.
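Both the MVC Ajax helpers (via jquery.unobtrusive-ajax) and jQuery.ajax ultimately drive an XMLHttpRequest under the hood. Below is a hypothetical mini-wrapper that sketches what such a layer does (URL building, method selection, a success callback); it is not jQuery's actual implementation, and the xhr object is injected so the sketch can run outside a browser:

```javascript
// Hypothetical sketch of what jQuery.ajax layers on top of XMLHttpRequest.
function serialize(params) {
  // Build a query string like jQuery does for the `data` option.
  return Object.keys(params)
    .map(function (k) {
      return encodeURIComponent(k) + '=' + encodeURIComponent(params[k]);
    })
    .join('&');
}

function miniAjax(opts, xhr) {
  // In a browser, xhr would be `new XMLHttpRequest()`.
  var url = opts.data ? opts.url + '?' + serialize(opts.data) : opts.url;
  xhr.open(opts.type || 'GET', url, true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      opts.success(xhr.responseText);
    }
  };
  xhr.send(null);
  return xhr;
}
```

A call like miniAjax({ url: '/Home/GetTime', success: update }, new XMLHttpRequest()) mirrors what the data-ajax attributes emitted by @Ajax.ActionLink trigger through the unobtrusive script.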
{ "pile_set_name": "StackExchange" }
Q: CSS attribute selector not working when the attribute is applied using javascript?

.variations_button[style*="display: none;"] + div

This is my CSS selector, which works fine if the style attribute is already in the DOM on page load: http://jsfiddle.net/xn3y3hu0/ However, if I hide the .variations_button div using javascript, the selector is not working anymore:

$(document).click(function(){
    $('.variations_button').hide();
});

http://jsfiddle.net/55seee1r/

Any idea what's wrong? It looks like the CSS is not refreshing, because if I edit another property using the inspector, the color changes to red instantly.

A: Because the selector you use, [style*="display: none;"], looks for the presence of the exact string "display: none;" in the style attribute, it requires that the browser's JavaScript engine insert that precise string, including the white-space character (incidentally, in Chrome 39/Windows 8.1 it does). For your particular browser you may need to remove the space, and to target most browsers, use both versions of the attribute-value string, giving:

.variations_button[style*="display: none;"] + div,
.variations_button[style*="display:none;"] + div {
    color: red;
}

<div class="variations_button" style="display: none;">asd</div>
<div>test</div>

Of course, it remains much simpler to use classes to hide an element, toggling that class with JavaScript and using the class as part of the CSS selector, for example:

$('.variations_button + div').on('click', function() {
    $('.variations_button').toggleClass('hidden');
});

.hidden {
    display: none;
}

.hidden + div {
    color: red;
}

.variations_button + div {
    cursor: pointer;
}

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="variations_button">asd</div>
<div>test</div>

As I understand it, the problem of the above not working once jQuery is involved is that jQuery's hide(), show() and toggle() methods update the display property of the element's style property, rather than setting the attribute directly. The updated attribute-value (as represented in the style attribute) seems to be a representation of the style property (derived, presumably, from its cssText). Because the attribute is unchanged, and merely serves as a representation of a property, the CSS attribute-selectors don't, or perhaps can't, match. That said, a somewhat clunky workaround is to directly set the attribute; in the following demo this uses jQuery's attr() method (though the native DOM node.setAttribute() would work equally well):

$(document).click(function() {
    // setting the style attribute of the selected element(s),
    // using the attr() method, and the available anonymous function:
    $('.variations_button').attr('style', function(i, style) {
        // i: the index of the current element from the collection,
        // style: the current value (before manipulation) of the attribute.
        // caching the cssText of the node's style object:
        var css = this.style.cssText;
        // if the string 'display' is not found in the cssText:
        if (css.indexOf('display') === -1) {
            // we return the current text plus the appended 'display: none;' string:
            return css + 'display: none;';
        // otherwise:
        } else {
            // we replace the string starting with 'display:', followed by an
            // optional white-space ('\s?'), followed by a matching string of
            // one or more alphabetic characters (grouping that last string,
            // with parentheses):
            return css.replace(/display:\s?([a-z]+)/i, function(a, b) {
                // using the anonymous function available to 'replace()',
                // a: the complete match, b: the grouped match (a-z),
                // if b is equal to none we return 'display: block', otherwise
                // we return 'display: none':
                return 'display: ' + (b === 'none' ? 'block' : 'none');
            });
        }
    });
});

jQuery(document).ready(function($) {
    $(document).click(function() {
        $('.variations_button').attr('style', function(i, style) {
            var css = this.style.cssText;
            if (css.indexOf('display') === -1) {
                return css + 'display: none;';
            } else {
                return css.replace(/display:\s?([a-z]+)/i, function(a, b) {
                    return 'display: ' + (b === 'none' ? 'block' : 'none');
                });
            }
        });
    });
});

.variations_button[style*="display: none;"]+div {
    color: red;
}

<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<div class="variations_button">asd</div>
<div>test</div>

References:
CSS: Substring Matching Attribute-selectors.
JavaScript: HTMLElement.style, JavaScript Regular Expressions, String.prototype.indexOf(), String.prototype.replace().
jQuery: attr(), hide(), show(), toggle().
{ "pile_set_name": "StackExchange" }
Q: Question about positive recurrence of a Markov chain Q) Let $\{X_n\}$ be an irreducible Markov chain on a countable set $S$. Suppose that for some $x_0$ and a non-negative function $f:S\to(0,\infty)$, there is a constant $0<\alpha<1$ s.t. $$\mathbb{E}_xf(X_1)\leq \alpha f(x)\text{ for all }x\neq x_0$$ Suppose also that $f(x_0)\leq f(x)$ for all $x$. Show that $\{X_n\}$ is positive recurrent. Let $Y_n = f(X_n)/\alpha^n$ and I can show that $Y_n$ is a supermartingale using the hypothesis. But to show $X_n$ is positive recurrent, since $x_0$ is mentioned, I was thinking of the stopping time $$\tau = \inf\{n\geq 0:X_n = x_0\}$$ Then $Y_{n\wedge \tau}$ is also a supermartingale, $\mathbb{E}_xY_{n\wedge \tau}\leq f(x)$. 1) How can I show that $\mathbb{E}_x\tau < \infty$ which shows that $x_0$ is positive recurrent? 2) How does that show the chain is positive recurrent? A: You need to make use of the following theorem. Theorem. If $X_n$ is a nonnegative supermartingale and $N \le \infty$ is a stopping time, then $EX_0 \ge EX_N$ where $X_\infty = \lim X_n$ (which exists by the martingale convergence theorem). I will continue from where you left off. You've already proven that $E_xY_n$ is a nonnegative supermartingale, and $\tau$ is a stopping time. Hence $$ f(x) = E_xY_0 \ge E_xY_\tau = E_x(Y_\tau;\tau=\infty) + E_x(Y_\tau;\tau<\infty)$$ $$\ge E_x(Y_\infty; \tau = \infty) = E_x(\lim_{n \rightarrow \infty}f(X_n)/\alpha^n; \tau=\infty) $$ $$ \ge E_x(\lim_{n \rightarrow \infty} f(x_0)/\alpha^n; \tau=\infty) . $$ Since $f(x_0) > 0, 0 < \alpha < 1$ and the term $f(x_0)/\alpha^n \rightarrow \infty$, it follows that $P_x(\tau = \infty) = 0$, and $P_x(\tau < \infty) = 1$ for all $x \in S$. Hence $P_x(X_n=x_0\; i.o.) =1$, and $x_0$ is recurrent. 
Recall the chain is irreducible and $x_0$ is recurrent, hence all states $x \in S$ are recurrent (irreducibility $ \Rightarrow \rho_{x_0x} > 0$, additionally $x_0$ is recurrent $\Rightarrow \rho_{xx} = 1$ where $\rho_{xy}\equiv P_x(T_y < \infty)$).
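As a quick numerical illustration (separate from the proof above): consider the birth-death chain on the nonnegative integers that, from $x \neq 0$, moves down with probability $0.75$ and up with probability $0.25$. With $f(x) = 2^x$ one gets $\mathbb{E}_xf(X_1) = 0.75 \cdot 2^{x-1} + 0.25 \cdot 2^{x+1} = 0.875\,f(x)$, so $\alpha = 0.875$ and $f(0) = 1 \leq f(x)$, and the drift condition holds with $x_0 = 0$. Simulating the hitting time $\tau$ of $x_0 = 0$ suggests $\mathbb{E}_x\tau < \infty$; in fact $\mathbb{E}_x\tau = 2x$ for this walk, since the drift is $-0.5$ per step.

```python
import random

# Monte Carlo check of E_x[tau] for a drift-condition chain; illustrative
# example only, the chain and parameters are chosen for this sketch.

def hitting_time(start, rng, p_down=0.75):
    """Steps until the chain, started at `start`, first reaches x0 = 0."""
    x, steps = start, 0
    while x != 0:
        x += -1 if rng.random() < p_down else 1
        steps += 1
    return steps

def mean_hitting_time(start=5, episodes=2000, seed=0):
    rng = random.Random(seed)
    total = sum(hitting_time(start, rng) for _ in range(episodes))
    return total / episodes
```

With start = 5, the estimate comes out close to the theoretical value 2 * 5 = 10.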
{ "pile_set_name": "StackExchange" }
An evaluation of 222Rn concentrations in Idaho groundwater. Factors potentially correlated with 222Rn concentrations in groundwater were evaluated using a database compiled by the U.S. Geological Survey. These included chemical and radiological factors, and both well depth and discharge rate. The 222Rn concentrations contained within this database were examined as a function of latitude and longitude. It was observed that the U.S. Geological Survey sample locations for 222Rn were not uniformly distributed throughout the state. Hence, additional samples were collected in southeastern Idaho, a region where few 222Rn in water analyses had been performed. 222Rn concentrations in groundwater, in Idaho, were found using ANOVA (alpha = 0.05) to be independent of the chemical, radiological, and well parameters thus far examined. This lack of correlation with other water quality and well parameters is consistent with findings in other geographical locations. It was observed that an inverse relationship between radon concentration and water hardness may exist.
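The one-way ANOVA used above (alpha = 0.05) reduces to an F statistic comparing between-group to within-group variance. A minimal sketch of that computation follows; the function and the numbers in the test are illustrative, not the USGS radon data:

```python
# One-way ANOVA F statistic: ratio of between-group mean square to
# within-group mean square. Illustrative implementation for this sketch.
def f_statistic(groups):
    k = len(groups)                      # number of groups
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F (compared against the F distribution with k-1 and n-k degrees of freedom) indicates that group means differ; identical group means give F = 0.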
{ "pile_set_name": "PubMed Abstracts" }
Experience the legendary battle between the Autobots and Decepticons before their exodus to Earth in the untold story of the civil war for their home planet, Cybertron. Two distinct and intertwined campaigns chronicle the Autobots' heroism in the face of total annihilation and the Decepticons' unquenchable thirst for power. Play both campaigns and battle as your favorite Transformer characters in the war that spawned one of the most brutal conflicts of all time.
{ "pile_set_name": "Pile-CC" }
Native Village of Pedro Bay Pedro Bay is located at the head of Pedro Bay in Lake Iliamna, 30 miles northeast of Iliamna and 180 miles southwest of Anchorage. Located in a heavily wooded area, with birch, cottonwood, alders, willow and white spruce trees, Pedro Bay has one of the most attractive settings in southwest Alaska. Pedro Bay is accessible by air and water. There is a State-owned 3,000' long by 60' wide gravel airstrip. Scheduled and charter air services are available from Iliamna and Anchorage. Barge service is available from Naknek via the Kvichak River. Goods are also sent by barge from Homer to Iliamna Bay on the Cook Inlet side and portaged over a 14-mile road to Pile Bay, 10 miles to the east. The Dena'ina Indians have inhabited this area for hundreds of years, and still live in the area. The community was named for a man known as "Old Pedro," who lived in this area in the early 1900s. A post office was established in the village in 1936. St. Nicholas Russian Orthodox Chapel, built in 1890, is on the National Register of Historic Places.
{ "pile_set_name": "Pile-CC" }
Cummings Machine Works Cummings Machine Works was a Boston, Massachusetts based business. It was founded by Henry Havelock Cummings in 1881, when Cummings was 23 years old. The company was awarded a United States Defense Department contract to manufacture fixtures in March 1941. The contract amounted to $17,893. The company was among the firms which contributed to the building of the Boston Opera House, completed in 1909, supplying steelworks used in the construction of the stage. Cummings Machine Works has been credited with the development of the sally saw. A patent filed in 1945, and assigned to the company, describes a saw with a circular blade. The blade could be rotated between horizontal and vertical, thus allowing a tree to be felled, limbed, and bucked with one saw. Other inventions included a hydraulic hospital bed, automatic doughnut machine, teardrop vehicle and Hookups. Last owners were Robert M. Mustard, Sr., Pres., and Lewis W. Mustard, Treas. Last known address was 10 Melcher Street in Boston, MA. Went out of business in 1958. References Category:Manufacturing companies based in Boston Category:History of Boston Category:Defunct manufacturing companies of the United States Category:Defunct companies based in Massachusetts Category:Manufacturing companies established in 1881
{ "pile_set_name": "Wikipedia (en)" }
Q: How to type String content:encoded = "Hello"; in java? How can I write String content:encoded = "Hello"; in Java? Eclipse keeps telling me "syntax error on tokens, delete these tokens":

setDescription(String content:encoded) {
    _description = content:encoded;
}

A: Because content:encoded is a syntax error. Names in Java may only contain letters, digits, $ and _. (The rules allow a few other characters, but that is essentially it.) Also, a variable name cannot start with a digit. To be clear, remove the : from the variable name, because : is illegal in a name and has another meaning in the language. Quote from the article below: Variable names are case-sensitive. A variable's name can be any legal identifier — an unlimited-length sequence of Unicode letters and digits, beginning with a letter, the dollar sign $, or the underscore character _. The convention, however, is to always begin your variable names with a letter, not $ or _. Additionally, the dollar sign character, by convention, is never used at all. You may find some situations where auto-generated names will contain the dollar sign, but your variable names should always avoid using it. A similar convention exists for the underscore character; while it's technically legal to begin your variable's name with _, this practice is discouraged. White space is not permitted. Subsequent characters may be letters, digits, dollar signs, or underscore characters. You can read more about it here: http://docs.oracle.com/javase/tutorial/java/nutsandbolts/variables.html

A: If you are creating the method setDescription, then it would be:

public void setDescription(String content_encoded) {
    _description = content_encoded;
}

Here public is the modifier, void is the return type, setDescription is the method name, String is the parameter type, and content_encoded is the variable that holds the string value.
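To make the fix concrete, here is a compilable sketch (the class and field names are assumptions for illustration; the original snippet only shows the setter). The colon simply cannot appear in an identifier, so content:encoded becomes content_encoded or, more idiomatically, contentEncoded:

```java
// Identifiers may contain letters, digits, '$' and '_', and must not start
// with a digit; ':' is never allowed, so content:encoded is renamed here.
class FeedItem { // hypothetical class name for illustration
    private String description;

    public void setDescription(String contentEncoded) {
        this.description = contentEncoded;
    }

    public String getDescription() {
        return description;
    }

    public static void main(String[] args) {
        FeedItem item = new FeedItem();
        item.setDescription("Hello");
        System.out.println(item.getDescription()); // prints Hello
    }
}
```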
{ "pile_set_name": "StackExchange" }
Q: Handling worker death in multiprocessing Pool

I have a simple server:

from multiprocessing import Pool, TimeoutError
import time
import os

if __name__ == '__main__':
    # start worker processes
    pool = Pool(processes=1)

    while True:
        # evaluate "os.getpid()" asynchronously
        res = pool.apply_async(os.getpid, ())  # runs in *only* one process
        try:
            print(res.get(timeout=1))  # prints the PID of that process
        except TimeoutError:
            print('worker timed out')
        time.sleep(5)

    pool.close()
    print("Now the pool is closed and no longer available")
    pool.join()
    print("Done")

If I run this I get something like:

47292
47292

Then I kill 47292 while the server is running. A new worker process is started but the output of the server is:

47292
47292
worker timed out
worker timed out
worker timed out

The pool is still trying to send requests to the old worker process. I've done some work with catching signals in both server and workers and I can get slightly better behaviour, but the server still seems to be waiting for dead children on shutdown (i.e. pool.join() never ends) after a worker is killed. What is the proper way to handle workers dying? Graceful shutdown of workers from a server process only seems to work if none of the workers has died. (On Python 3.4.4 but happy to upgrade if that would help.) UPDATE: Interestingly, this worker timeout problem does NOT happen if the pool is created with processes=2 and you kill one worker process, wait a few seconds and kill the other one. However, if you kill both worker processes in rapid succession then the "worker timed out" problem manifests itself again. Perhaps related is that when the problem occurs, killing the server process will leave the worker processes running.

A: This behavior comes from the design of the multiprocessing.Pool. When you kill a worker, you might kill the one holding the call_queue.rlock.
When this process is killed while holding the lock, no other process will ever be able to read from the call_queue anymore, breaking the Pool, as it can no longer communicate with its workers. So there is actually no way to kill a worker and be sure that your Pool will still be okay afterwards, because you might end up in a deadlock. multiprocessing.Pool does not handle workers dying. You can try using concurrent.futures.ProcessPoolExecutor instead (with a slightly different API), which handles the failure of a process by default. When a process dies in ProcessPoolExecutor, the entire executor is shut down and you get back a BrokenProcessPool error. Note that there are other deadlocks in this implementation that should be fixed in loky. (DISCLAIMER: I am a maintainer of this library.) Also, loky lets you resize an existing executor using a ReusablePoolExecutor and the method _resize. Let me know if you are interested; I can provide some help getting started with this package. (I realized we still need a bit of work on the documentation... 0_0)
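A runnable sketch of the suggestion above, using only the standard library: the worker kills itself with SIGKILL (standing in for an external `kill -9`), and ProcessPoolExecutor surfaces the failure as BrokenProcessPool instead of hanging. POSIX-only because of SIGKILL:

```python
import os
import signal
from concurrent.futures import ProcessPoolExecutor
from concurrent.futures.process import BrokenProcessPool

def die_abruptly():
    # Stand-in for an external `kill -9` of the worker process.
    os.kill(os.getpid(), signal.SIGKILL)

def run_demo():
    with ProcessPoolExecutor(max_workers=1) as pool:
        future = pool.submit(die_abruptly)
        try:
            future.result(timeout=30)
        except BrokenProcessPool:
            # The executor detected the abrupt worker death and shut down.
            return "pool reported broken"
    return "no error"

if __name__ == "__main__":
    print(run_demo())
```

Unlike the multiprocessing.Pool server above, the failure here is reported promptly as an exception rather than as repeated timeouts on a dead worker.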
{ "pile_set_name": "StackExchange" }
Nudists stripped down and shouted anti-corporate slogans when a second approval for the nude ban passed in City Hall. You've got to admire their tenacity. In January, a judge dismissed a lawsuit filed by San Francisco nudists that claimed their First Amendment right to free speech was violated by Supervisor Scott Wiener’s nudity ban. But the decision didn’t deter nudists from their fight to strip down in The City – they’ve filed an amended suit and are staging a protest and ‘body freedom parade’ this Saturday. The protest is scheduled at noon on July 20, in front of City Hall. The parade will follow, but there’s no word on what route it will take. Under the ban, nudists face a $100 fine for their first offense, with increases for additional infractions. The amended lawsuit includes five plaintiffs: Russell “Trey” Allen, Mitch Hightower, George Davis, Russell Mills, and Oxane “Gypsy” Taub. Although they initially sued The City for alleged free speech violations, they’ve now taken a different approach. The new suit alleges that the nudity ban has been enforced against them in a discriminatory fashion. The lawsuit states that several plaintiffs were arrested for baring it all at a rally on February 1, and the others were arrested during a nude dance on February 27. However, plaintiffs were not arrested when they participated in nude activities led by other organizations, leading them to believe that they are being targeted by The City.
{ "pile_set_name": "Pile-CC" }
Menu What’s your WHY? What’s your WHY?! Did you know one of the things I hate the most is pictures and videos of myself. Yes, you got that right, I’ve made a living off of posting pictures and videos of myself to social media. Something I highly DISLIKE. . So WHY?! On the days when I really don’t feel like taking another picture, making another post, coming up with the perfect thing to say, I remind myself why the heck I started this in the first place. . Sharing my story. Sharing my struggles. Helping other women not feel ALONE. Helping other women reach their true potential. This is what I get to do on a daily basis. . To me there is nothing more REWARDING than helping another woman not feel the feelings I felt for so many years. And yes I have my days where I just don’t feel like it too, but then I remember WHY I started this whole thing in the first place. I realize all that I’ve accomplished. And I remember that I’m doing EXACTLY what I set out to do. 💕 . What’s your WHY? Drop it in the comments
{ "pile_set_name": "Pile-CC" }
Presenting features and diagnosis of rabies. The early clinical features important in the establishment of a diagnosis of rabies are described from experience of 23 fatal cases in Sri Lanka. The importance of the "fan test" as a diagnostic sign is stressed. The earliest features of the disease may suggest hysteria if a history of a bite from a rabid animal is not obtained. In a district in which there is an outbreak of rabies cases of rabies hysteria may also develop.
{ "pile_set_name": "PubMed Abstracts" }
Q: Restrict Action in ASP.Net MVC I am trying to prevent the action from being called unless the required parameter is present in the URL. For example, I have a Login action, but it should only be accessible when another web application redirects to it with a query string parameter. It is currently also accessible without the parameter, and I want to restrict this. Restrict this: https://localhost:44300/account/login Right URL: https://localhost:44300/Account/Login?returnUrl=https%3A%2F%2Fdevatea.portal.azure-api.net%2F%2F

A: Based on your requirements, I think the easiest way would be to just add a check to the login action and return a 404 Not Found if the returnUrl is empty or null.

public ActionResult Login(string returnUrl)
{
    if (string.IsNullOrEmpty(returnUrl))
        return HttpNotFound();

    // remaining code for login
    // ...
}
{ "pile_set_name": "StackExchange" }
1942 Iowa Pre-Flight Seahawks football team The 1942 Iowa Pre-Flight Seahawks football team represented the United States Navy pre-flight aviation training school at the University of Iowa as an independent during the 1942 college football season. The team compiled a 7–3 record and outscored opponents by a total of 211 to 121. The 1942 team was known for its difficult schedule, including Notre Dame, Michigan, Ohio State, Minnesota, Indiana, Nebraska, and Missouri. The team was ranked No. 2 among the service teams in a poll of 91 sports writers conducted by the Associated Press. The Navy's pre-flight aviation training school opened on April 15, 1942, with a 27-minute ceremony during which Iowa Governor George A. Wilson turned over certain facilities at the University of Iowa to be used for the training of naval aviators. At the time, Wilson said, "We are glad it is possible to place the facilities of this university and all the force and power of the state of Iowa in a service that is today most vital to safeguarding our liberties." The first group of 600 air cadets was scheduled to arrive on May 28. Bernie Bierman, then holding the rank of major, was placed in charge of the physical conditioning program at the school. Bierman had been the head coach of Minnesota from 1932 to 1941 and served as the head coach of the Iowa Pre-Flight team in 1942. Larry Snyder, previously the track coach at Ohio State, was assigned as Bierman's assistant. Don Heap, Dallas Ward, Babe LeVoir, and Trevor Reese were assigned as assistant coaches for the football team. In June 1942, Bierman addressed the "misconception" that the Iowa pre-flight school was "merely a place for varsity athletics." He said: "Our purpose here is to turn out the toughest bunch of flyers the world has ever seen and not first class athletes." Two Seahawks were named to the 1942 All-Navy All-America football team: George Svendsen at center and Dick Fisher at left halfback. 
In addition, Bill Kolens (right tackle), Judd Ringer (right end), George Benson (quarterback), and Bill Schatzer (left halfback) were named to the 1942 All-Navy Preflight Cadet All-America team. Schedule Roster References Iowa Pre-Flight Category:Iowa Pre-Flight Seahawks football seasons
{ "pile_set_name": "Wikipedia (en)" }
The effect of inhibiting the parasympathetic nervous system on insulin, glucagon, and glucose will be examined in normal weight and obese men and women. Furthermore, the importance of early insulin release will be examined. Each subject will undergo 4 treatments: 1) saline infusion, 2) brief infusion, 3) atropine infusion, and 4) atropine and insulin infusion. During each of the treatments, subjects will ingest a mixed meal containing 600 kcal and will undergo a blood sampling protocol in which arterialized venous blood samples will be drawn over a 4 hour period of time.
{ "pile_set_name": "NIH ExPorter" }
/* Copyright The Kubernetes Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ // Code generated by client-gen. DO NOT EDIT. package v1 type JobExpansion interface{}
{ "pile_set_name": "Github" }
Project Summary/Abstract. Group A streptococci (GAS; Streptococcus pyogenes) are remarkable for the wide range of diseases they cause in humans, their sole biological host. Yet, most infections are mild and involve one of two tissues - the epithelial surface of the throat or skin - giving rise to pharyngitis or impetigo, respectively. A long-term goal is to better understand the distinct pathogenic mechanisms leading to pharyngitis and impetigo. A primary focus of the proposal is the regulation of pili expression in GAS. Pilus-associated proteins mediate adherence to epithelial cells and enhance superficial infection at the skin. Pili correspond to the T-antigens of GAS. All strains examined have pilus genes, however, many natural GAS isolates lack T-antigen. The hypothesis to be tested in Aim 1 states that organisms recovered from a carrier state and/or invasive disease are significantly more likely to have defects in pilus production, as compared to isolates derived from cases of pharyngitis or impetigo. Aim 1 seeks to define the relationship between defects in pilus expression and disease. The nra/rofA locus encodes a "stand alone" response regulator that affects the transcription of pilus genes; nra and rofA denote discrete lineages of alleles. Both Nra and RofA can have positive or negative regulatory effects on pilus gene transcription, depending on the GAS isolate or strain. The hypothesis to be tested in Aim 2 states that there are strain-specific differences among modulators of pilus gene expression that lie in a pathway upstream of Nra/RofA. Aim 2 seeks to identify regulators of pilus gene transcription having a differential presence among strains. The distribution of Nra and RofA among GAS is strongly correlated with subpopulations of strains having a tendency to cause infection at either the throat or skin. Nra and RofA are global regulators of GAS gene transcription. 
Two hypotheses will be addressed in Aim 3: (i), that co-regulated non-pilus genes act in concert with pili to cause disease; and (ii), that Nra and RofA confer differential transcription of downstream genes. Aim 3 seeks to identify genes of the Nra and RofA regulons, and to test their role in virulence. Through a better understanding of the molecular mechanisms used by GAS to persist in their primary ecological niches - the throat and skin of the human host - will come new knowledge on how best to interfere with these vital processes. Effective control and prevention measures that disrupt the chain of transmission of GAS will result in a decreased burden of the more severe GAS diseases (toxic shock syndrome, rheumatic heart disease) which have a high morbidity and mortality for many people throughout the world.
{ "pile_set_name": "NIH ExPorter" }
[Cite as State v. McDougald, 2016-Ohio-5080.] IN THE COURT OF APPEALS OF OHIO FOURTH APPELLATE DISTRICT SCIOTO COUNTY STATE OF OHIO, : Case No. 16CA3736 Plaintiff-Appellee, : v. : DECISION AND JUDGMENT ENTRY JERONE MCDOUGALD, : RELEASED: 7/15/2016 Defendant-Appellant. : APPEARANCES: Jerone McDougald, Lucasville, OH, pro se appellant. Mark E. Kuhn, Scioto County Prosecuting Attorney, and Jay S. Willis, Scioto County Assistant Prosecuting Attorney, Portsmouth, OH, for appellee. Harsha, J. {¶1} Jerone McDougald appeals the judgment denying his fifth petition for postconviction relief and his motion for leave to file a motion for new trial. McDougald contends that the court erred in denying his petition, which raised claims of ineffective assistance of his trial counsel. He additionally argues that the court erred in denying his motion for leave to file a motion for new trial, but did not assign any errors regarding this decision. {¶2} We reject McDougald’s claims. He failed to demonstrate the requirements necessary for the trial court to address the merits of his untimely claims in his fifth petition for postconviction relief. Moreover, res judicata barred this successive petition because he could have raised these claims on direct appeal or in one of his earlier postconviction petitions. Finally, because he failed to assign any error regarding the trial court’s denial of his motion for leave to file a motion for new trial, we need not address his arguments regarding that decision. {¶3} Therefore, we affirm the judgment of the trial court denying his petition and motion. I. FACTS1 {¶4} Authorities searched a premises in Portsmouth and found crack cocaine, money, digital scales, and a pistol. They arrested the two occupants of the residence, McDougald and Kendra White, at the scene. 
Subsequently, the Scioto County Grand Jury returned an indictment charging McDougald with drug possession, drug trafficking, possession of criminal tools, and the possession of a firearm while under disability. McDougald pleaded not guilty to all charges. {¶5} At the jury trial Kendra White testified that McDougald used her home to sell crack cocaine and that she sold drugs on his behalf as well. She also testified that the digital scales belonged to McDougald and, although the pistol belonged to her ex-boyfriend, Benny Simpson (who was then incarcerated), McDougald asked her to bring it inside the home so that he would feel more secure. White explained that Simpson previously used the pistol to shoot at her, but threw it somewhere in the backyard when he left. Simpson then allegedly called White from jail and instructed her to retrieve the pistol. White complied and then hid it “under the tool shed” until McDougald instructed her to retrieve it and bring it inside the house. White confirmed that she saw McDougald at the premises with the gun on his person. {¶6} Jesse Dixon and Melinda Elrod both testified that they purchased crack cocaine from McDougald at the residence. Shawna Lattimore testified that she served as a “middleman” for McDougald's drug operation and also helped him transport drugs from Dayton. She testified that she also saw McDougald carry the pistol. {¶7} The jury returned guilty verdicts on all counts. The trial court sentenced McDougald to serve five years on the possession count, nine years for trafficking, one year for the possession of criminal tools, and five years for the possession of a firearm while under disability. [Footnote 1: Except where otherwise noted, these facts are taken from our opinion in State v. McDougald, 4th Dist. Scioto Nos. 14CA3649 and 15CA3679, 2015-Ohio-5590, appeal not accepted for review, State v. McDougald, 144 Ohio St.3d 147, 2016-Ohio-467, 845 N.E.3d 245.]
The court ordered the sentences to be served consecutively for a total of twenty years imprisonment. The sentences were included in a judgment entry filed April 30, 2007, as well as a nunc pro tunc judgment entry filed May 16, 2007. {¶8} In McDougald's direct appeal, where he was represented by different counsel than his trial attorney, we affirmed his convictions and sentence. State v. McDougald, 4th Dist. Scioto No. 07CA3157, 2008-Ohio-1398. We rejected McDougald's contention that because the only evidence to link him to the crimes was “the testimony of admitted drug addicts and felons,” the verdicts were against the manifest weight of the evidence: * * * appellant's trial counsel skillfully cross-examined the prosecution's witnesses as to their statuses as drug addicts and convicted felons. Counsel also drew attention to the fact that some of the witnesses may actually benefit from the testimony that they gave. That evidence notwithstanding, the jury obviously chose to believe the prosecution's version of the events. Because the jury was in a better position to view those witnesses and determine witness credibility, we will not second-guess them on these issues. Id. at ¶ 8, 10. {¶9} In January 2009, McDougald filed his first petition for postconviction relief. He claimed that he was denied his Sixth Amendment right to confrontation when the trial court admitted a drug laboratory analysis report into evidence over his objection. The trial court denied the petition, and we affirmed the trial court's judgment. State v. McDougald, 4th Dist. Scioto No. 09CA3278, 2009-Ohio-4417. {¶10} In October 2009, McDougald filed his second petition for postconviction relief. He again claimed that he was denied his Sixth Amendment right of confrontation when the trial court admitted the drug laboratory analysis report. The trial court denied the petition, and McDougald did not appeal the judgment. 
{¶11} In July 2014, McDougald filed his third petition for postconviction relief. He claimed that: (1) the trial court lacked jurisdiction to convict and sentence him because the original complaint filed in the Portsmouth Municipal Court was based on false statements sworn to by the officers; (2) the prosecuting attorney knowingly used and relied on false and perjured testimony in procuring the convictions against him; and (3) the state denied him his right to due process by withholding exculpatory evidence, i.e., a drug task force report. McDougald attached the report, the municipal court complaints, a portion of the trial transcript testimony of Kendra White, his request for discovery, and the state's answer to his request for discovery to his petition. The trial court denied the petition because it was untimely and did not fall within an exception justifying its late filing. McDougald appealed from the trial court's judgment denying his third petition for postconviction relief. {¶12} In December 2014, McDougald filed his fourth petition for postconviction relief. He claimed that his sentence is void because the trial court never properly entered a final order in his criminal case. The trial court denied the petition. McDougald appealed from the trial court's judgment denying his fourth petition for postconviction relief. {¶13} We consolidated the appeals and affirmed the judgments of the trial court denying his third and fourth petitions for postconviction relief. McDougald, 2015-Ohio-5590. We held that McDougald failed to establish the requirements necessary for the trial court to address the merits of his untimely claims and that res judicata barred the claims because he either raised them on direct appeal or could have raised them on direct appeal or in one of his previous petitions for postconviction relief. Id. 
{¶14} In November 2015, over eight and one-half years after he was sentenced, McDougald filed his fifth petition for postconviction relief. He argued that his trial counsel had provided ineffective assistance by failing to conduct an independent investigation of various matters, failing to use preliminary hearing testimony of the arresting officer to impeach the state’s case, failing to emphasize Kendra White’s prior statements to the police to impeach her testimony, failing to object to the arresting officer’s testimony that the firearm found at the scene was operable and had a clip and bullets, and failing to counter the state’s response to his objection concerning testimony about an Ohio Bureau of Criminal Investigation (“BCI”) report with evidence that the BCI employee had been timely subpoenaed. {¶15} In December 2015, McDougald filed a motion for leave to file a motion for new trial. He claimed that the state withheld a drug task force report that contained strong exculpatory evidence and that the report proved that the state presented false and perjured testimony at trial. {¶16} After the state responded, the trial court denied the petition and the motion, and this appeal ensued. II. ASSIGNMENTS OF ERROR {¶17} McDougald assigns the following errors for our review: 1. Defendant was prejudiced by trial counsel’s failure to conduct independent investigation to rebut state’s theory of prior acts of the defendant or ask for a mistrial prejudicing defendant’s trial. 2. Defendant was prejudiced by trial counsel’s failure to conduct independ[e]nt investigation and failed to present that the prosecutor knowingly used false and fabricated testimony concerning the gun in violation of defendant[’]s due process prejudicing defendant[’]s trial. 3. 
Defendant was prejudiced by trial counsel[’]s failure to conduct independent investigation and failed to present that the state knowingly used false and fabricated evidence in violation of defendant’s due process rights and prejudicing defendant’s trial. 4. Defendant was prejudiced by trial counsel’s failure to conduct independent investigation and failed to present that the arresting officer[’]s conduct in admitting and establishing the op[]erability of the f[i]rearm violat[ed] defendant’s due process rights and also evidence [rule] 702-703. 5. Defendant was prejudiced by trial counsel’s failure to raise that BCI tech was subpoenaed within the 7 day requirement pursuant to R.C. 2925.51(C) prejudicing defendant’s 6th amendment rights to confrontation. Trial attorney was ineffective in this regard. III. STANDARD OF REVIEW {¶18} McDougald’s assignments of error contest the trial court’s denial of his fifth petition for postconviction relief. {¶19} The postconviction relief process is a collateral civil attack on a criminal judgment rather than an appeal of the judgment. State v. Calhoun, 86 Ohio St.3d 279, 281, 714 N.E.2d 905 (1999). Postconviction relief is not a constitutional right; instead, it is a narrow remedy that gives the petitioner no more rights than those granted by statute. Id. It is a means to resolve constitutional claims that cannot be addressed on direct appeal because the evidence supporting the claims is not contained in the record. State v. Knauff, 4th Dist. Adams No. 13CA976, 2014-Ohio-308, ¶ 18. {¶20} “[A] trial court's decision granting or denying a postconviction relief petition filed pursuant to R.C. 2953.21 should be upheld absent an abuse of discretion; a reviewing court should not overrule the trial court's finding on a petition for postconviction relief that is supported by competent and credible evidence.” State v. Gondor, 112 Ohio St.3d 377, 2006-Ohio-6679, 860 N.E.2d 77, ¶ 58. 
A trial court abuses its discretion when its decision is unreasonable, arbitrary, or unconscionable. In re H. V., 138 Ohio St.3d 408, 2014-Ohio-812, 7 N.E.3d 1173, ¶ 8. IV. LAW AND ANALYSIS A. Fifth Petition for Postconviction Relief {¶21} In his five assignments of error McDougald asserts that his trial counsel was ineffective for failing to investigate his case and failing to take certain actions during his jury trial. {¶22} R.C. 2953.21(A)(2) provides that a petition for postconviction relief must be filed “no later than three hundred sixty-five days after the expiration of the time for filing the appeal.” McDougald’s fifth petition for postconviction relief was filed over eight years after the expiration of time for filing an appeal from his convictions and sentence so it was untimely. See, e.g., State v. Heid, 4th Dist. Scioto No. 15CA3710, 2016-Ohio-2756, ¶ 15. {¶23} R.C. 2953.23(A)(1) authorizes a trial court to address the merits of an untimely filed petition for postconviction relief only if: (1) the petitioner shows either that he was unavoidably prevented from discovery of the facts upon which he must rely to present the claim for relief or that the United States Supreme Court recognized a new federal or state right that applies retroactively to him; and (2) the petitioner shows by clear and convincing evidence that no reasonable factfinder would have found him guilty but for constitutional error at trial. {¶24} McDougald does not contend that the United States Supreme Court recognized a new right that applied retroactively to him, so he had to prove that he was unavoidably prevented from the discovery of the facts upon which he relied to present his ineffective-assistance-of-counsel claim. 
“A defendant is ‘unavoidably prevented’ from the discovery of facts if he had no knowledge of the existence of those facts and could not have, in the exercise of reasonable diligence, learned of their existence within the time specified for filing his petition for postconviction relief.” State v. Cunningham, 3d Dist. Allen No. 1-15-61, 2016-Ohio-3106, ¶ 19, citing State v. Holnapy, 11th Dist. Lake No. 2013-L-002, 2013-Ohio-4307, ¶ 32, and State v. Roark, 10th Dist. Franklin No. 15AP-142, 2015-Ohio-3206, ¶ 11. {¶25} The only “new” evidence cited by McDougald in his petition for postconviction relief consisted of an excerpt from the arresting officer’s preliminary hearing testimony, a subpoena issued to a BCI employee, and a CD of Kendra White’s police interview following her arrest. He does not explain how either he or his appellate counsel were unavoidably prevented from having access to this evidence at the time he filed his direct appeal. Nor does he indicate how he was unavoidably prevented from discovering them before he filed any of his previous four petitions for postconviction relief. “Moreover, ‘[t]he fact that appellant raises claims of ineffective assistance of counsel suggests that the bases for his claims could have been uncovered if “reasonable diligence” had been exercised.’ ” Cunningham, 2016-Ohio-3106, at ¶ 22, quoting State v. Creech, 4th Dist. Scioto No. 12CA3500, 2013-Ohio-3791, ¶ 18. Therefore, McDougald did not establish that the trial court possessed the authority to address the merits of his untimely fifth petition for postconviction relief. {¶26} Furthermore, res judicata barred his successive petition because he could have raised his claims of ineffective assistance of trial counsel on direct appeal, when he was represented by different counsel, or in one of his earlier petitions for postconviction relief. See State v. Griffin, 9th Dist. Lorain No. 14CA010680, 2016-Ohio-2988, ¶ 12, citing State v. 
Cole, 2 Ohio St.3d 112 (1982), syllabus (“When the issue of competent trial counsel could have been determined on direct appeal without resort to evidence outside the record, res judicata is a proper basis to dismiss a petition for postconviction relief”); Heid, 2016-Ohio-2756, at ¶ 18 (res judicata barred petitioner from raising ineffective-assistance claim that he raised or could have raised in prior petitions for postconviction relief); State v. Edwards, 4th Dist. Ross No. 14CA3474, 2015-Ohio-3039, ¶ 10 (“claims of ineffective assistance of trial counsel are barred from being raised on postconviction relief by the doctrine of res judicata”). This is not a case where the exception to the general rule of res judicata applies, i.e., this is not a case where the defendant was represented by the same counsel at both the trial and on direct appeal. See State v. Ulmer, 4th Dist. Scioto No. 15CA3708, 2016-Ohio-2873, ¶ 15. {¶27} Therefore, the trial court did not act in an unreasonable, arbitrary, or unconscionable manner by denying McDougald’s fifth petition for postconviction relief. We overrule his assignments of error. B. Motion for Leave to File Motion for New Trial {¶28} McDougald also argues that the trial court erred by denying his motion for leave to file a motion for new trial. But he failed to assign any error regarding the court’s decision, and we thus need not address his arguments. See State v. Owens, 2016-Ohio-176, __ N.E.3d __, ¶ 59 (4th Dist.), quoting State v. Nguyen, 4th Dist. Athens No. 14CA42, 2015-Ohio-4414, ¶ 41 (“ ‘we need not address this contention because we review assignments of error and not mere arguments’ ”). {¶29} In addition, even if we exercised our discretion and treated McDougald’s “issues presented for review” as assignments of error, they would lack merit. 
The trial court did not abuse its considerable discretion by denying McDougald’s motion, which was based on his claim that the state withheld a drug task force report. McDougald did not establish by clear and convincing evidence that he was unavoidably prevented from discovering the report long before he filed his motion for leave over eight years after the verdict in his jury trial. See State v. N.D.C., 10th Dist. Franklin No. 15AP-63, 2015-Ohio-3643, ¶ 13. Moreover, we held in McDougald’s appeal from the denial of his fourth and fifth petitions for postconviction relief that the drug task force report did not establish that the state’s case was false because “[t]he report would merely have been cumulative to the other evidence admitted at trial” and it “did not constitute material, exculpatory evidence that the state improperly withheld from McDougald.” McDougald, 2015-Ohio-5590, at ¶ 24. V. CONCLUSION {¶30} Having overruled McDougald’s assignments of error, we affirm the judgment of the trial court. JUDGMENT AFFIRMED. JUDGMENT ENTRY It is ordered that the JUDGMENT IS AFFIRMED and that Appellant shall pay the costs. The Court finds there were reasonable grounds for this appeal. It is ordered that a special mandate issue out of this Court directing the Scioto County Court of Common Pleas to carry this judgment into execution. Any stay previously granted by this Court is hereby terminated as of the date of this entry. A certified copy of this entry shall constitute the mandate pursuant to Rule 27 of the Rules of Appellate Procedure. McFarland, J. & Hoover, J.: Concur in Judgment and Opinion. For the Court BY: ________________________________ William H. Harsha, Judge NOTICE TO COUNSEL Pursuant to Local Rule No. 14, this document constitutes a final judgment entry and the time period for further appeal commences from the date of filing with the clerk.
{ "pile_set_name": "FreeLaw" }
using System; using ModuleManager.Progress; namespace ModuleManager.Patches.PassSpecifiers { public class LegacyPassSpecifier : IPassSpecifier { public bool CheckNeeds(INeedsChecker needsChecker, IPatchProgress progress) { if (needsChecker == null) throw new ArgumentNullException(nameof(needsChecker)); if (progress == null) throw new ArgumentNullException(nameof(progress)); return true; } public string Descriptor => ":LEGACY (default)"; } }
{ "pile_set_name": "Github" }
When querying a database it’s very common to have a unified way to obtain data from it. In quepy we call it a Keyword. To use Keywords in a quepy project you must first configure the relationship that you’re using. You do this by defining the class attribute of quepy.dsl.HasKeyword. For example, if you want to use rdfs:label as the Keyword relationship you do:

    from quepy.dsl import HasKeyword
    HasKeyword.relation = "rdfs:label"

If your Keyword uses a language specification you can configure this by doing:

    HasKeyword.language = "en"

Quepy provides some utils to work with Keywords, like quepy.dsl.handle_keywords(). This function will take some text and extract IRkeys from it. If you need to define a sanitize function to be applied to the extracted Keywords, you have to define the staticmethod sanitize. It’s very common to find patterns that are repeated in several regexes, so quepy provides a mechanism to handle this easily. For example, in the DBpedia example, a country is used several times as a regex and it always has the same interpretation. In order to do this in a clean way, one can define a Particle by doing:
{ "pile_set_name": "Pile-CC" }
Q: MySQL Command Line Client: Selecting records from a table which links two other tables I have three tables. Two of them are separate irrelevant tables (students and subjects), the third (entries) is one which links them both with foreign keys (student_id and subject_id). Here are all the tables with the records:
students:
+------------+------------+-----------+---------------------+---------------------+
| student_id | first_name | surname   | email               | reg_date            |
+------------+------------+-----------+---------------------+---------------------+
| 1          | Emma       | Harvey    | emmah@gmail.com     | 2012-10-14 11:14:13 |
| 2          | Daniel     | ALexander | daniela@hotmail.com | 2014-08-19 08:08:23 |
| 3          | Sarah      | Bell      | sbell@gmail.com     | 1998-07-04 13:16:32 |
+------------+------------+-----------+---------------------+---------------------+
subjects:
+------------+--------------+------------+----------------+
| subject_id | subject_name | exam_board | level_of_entry |
+------------+--------------+------------+----------------+
| 1          | Art          | CCEA       | AS             |
| 2          | Biology      | CCEA       | A              |
| 3          | Computing    | OCR        | GCSE           |
| 4          | French       | CCEA       | GCSE           |
| 5          | Maths        | OCR        | AS             |
| 6          | Chemistry    | CCEA       | GCSE           |
| 7          | Physics      | OCR        | AS             |
| 8          | RS           | CCEA       | GCSE           |
+------------+--------------+------------+----------------+
entries:
+----------+---------------+---------------+------------+
| entry_id | student_id_fk | subject_id_fk | entry_date |
+----------+---------------+---------------+------------+
| 1        | 1             | 1             | 2012-10-15 |
| 2        | 1             | 4             | 2011-09-21 |
| 3        | 1             | 3             | 2015-08-10 |
| 4        | 2             | 6             | 1992-07-13 |
| 5        | 3             | 7             | 2013-02-12 |
| 6        | 3             | 8             | 2016-01-14 |
+----------+---------------+---------------+------------+
I want to know how to select all the first_names of the students in the students table who have entries with the OCR exam_board from the subjects table, using the entries table. I'm sure it has to do with joins, but which one to use and the general syntax of it, I don't know. 
I'm generally awful at explaining things, so sorry if this doesn't make a ton of sense and if I've missed out something important. I'll gladly go into more specifics if necessary. I've got an answer, but what I was looking for as the output was this:
+------------+
| first_name |
+------------+
| Emma       |
| Sarah      |
+------------+
A: You should use INNER JOINs in your query, like:
SELECT students.first_name
FROM students
INNER JOIN entries ON entries.student_id_fk = students.student_id
INNER JOIN subjects ON subjects.subject_id = entries.subject_id_fk
WHERE subjects.exam_board = 'OCR';
This query will join the tables on the matching key values, select the ones with exam_board OCR and return the student first_name.
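If you want to sanity-check the accepted query, here is a small self-contained reproduction using Python's built-in sqlite3 module (SQLite rather than MySQL, but the INNER JOIN logic is identical for this query; only the columns needed for the join are recreated, and DISTINCT is added so a student with several OCR entries appears only once):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
c = conn.cursor()
# Recreate the three tables from the question with just the relevant columns.
c.executescript("""
CREATE TABLE students (student_id INTEGER PRIMARY KEY, first_name TEXT, surname TEXT);
CREATE TABLE subjects (subject_id INTEGER PRIMARY KEY, subject_name TEXT, exam_board TEXT);
CREATE TABLE entries  (entry_id INTEGER PRIMARY KEY, student_id_fk INTEGER, subject_id_fk INTEGER);

INSERT INTO students VALUES (1,'Emma','Harvey'),(2,'Daniel','ALexander'),(3,'Sarah','Bell');
INSERT INTO subjects VALUES (1,'Art','CCEA'),(3,'Computing','OCR'),(6,'Chemistry','CCEA'),(7,'Physics','OCR');
INSERT INTO entries  VALUES (1,1,1),(3,1,3),(4,2,6),(5,3,7);
""")

# Same join as the answer: entries links students to subjects.
rows = c.execute("""
    SELECT DISTINCT students.first_name
    FROM students
    INNER JOIN entries  ON entries.student_id_fk = students.student_id
    INNER JOIN subjects ON subjects.subject_id   = entries.subject_id_fk
    WHERE subjects.exam_board = 'OCR'
    ORDER BY students.student_id
""").fetchall()
print([r[0] for r in rows])  # -> ['Emma', 'Sarah']
```

Daniel is filtered out because his only entry points at Chemistry (CCEA), which is exactly what the WHERE clause on the joined subjects row does.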
{ "pile_set_name": "StackExchange" }
Introduction ============ Better adjuvant therapy, improved metal implants, and innovative surgical techniques have led surgeons to consider limb salvage surgery as an alternative treatment for malignant bone tumour other than amputation. Orthopaedic oncology patients have a chance for an active, disease-free life after limb salvage surgery. In the first evidence-based study, Simon *et al.* had reported the benefits of limb-salvaging procedures for bone tumours.[@b1-rado-46-03-189] Their multicentre study reported the rates of local recurrence, metastasis and survival in 227 patients with osteosarcoma in the distal femur and suggested that the Kaplan-Meier curves of the patients without recurrence were not statistically different between limb-salvaging surgery and amputation patients during a 5.5-year follow-up. Limb-salvage surgery was considered as safe as an amputation in the management of patients with high-grade osteosarcoma. The goal of limb-salvaging surgery is to preserve the function of limbs, prevent tumour recurrence, and enable the rapid administration of chemotherapy or radiotherapy.[@b2-rado-46-03-189] It can be reached with meticulous technique, detailed operative planning, and the use of endoprosthetic replacements and/or bone grafting. For a successful limb-salvage surgery in high-grade malignant tumour, such as sarcomas, a wide margin is necessary to obtain a local control.[@b3-rado-46-03-189]--[@b5-rado-46-03-189] Since marginal and intralesional margins are related to local recurrence, the reconstruction with limb-salvaging options should be carefully considered. The clinical outcome of the limb-salvage surgery with arthroplasty is closely related to the accuracy of the surgical procedure. To improve the final outcome, one must take into account the length of the osteotomy plane, as well as the alignment of the prosthesis with respect to the mechanical axis in order to keep the balance of the soft tissues. 
Furthermore, the parameters measured with the 3D imaging must be used during the individual manufacture of the implant in order to reconstruct the skeletal structure accurately. Therefore, geometric data (such as length of leg, offset) and morphologic data are required. Magnetic resonance imaging (MRI) was beneficial for tumour detection and consequently staging of musculoskeletal neoplasia. MRI became an ideal imaging modality for musculoskeletal neoplasia because of superior soft-tissue resolution and multiplanar imaging capabilities and had a significant impact on the ability to appropriately stage lesions and adequately plan for limb-salvage surgery.[@b6-rado-46-03-189],[@b7-rado-46-03-189] In contrast, multi-slice spiral computed tomography (CT) could provide superb three-dimensional morphological delineation of the diseased bone. Theoretically, the complementary use of these two imaging modalities could give the surgeon a more accurate way to implement preoperative planning than the conventional application of 2D images. The purpose of this prospective study was to report our initial experience with limb salvage surgery for orthopaedic oncology patients by using both MR imaging and multi-slice spiral CT for preoperative planning.

Patients and methods
====================

Patients and preparation
------------------------

The study protocol has complied with all relevant national regulations and institutional policies and has been approved by the local institutional ethics committee. Informed consent was obtained from all patients before the procedure. Patients with malignant bone tumours of the lower/upper limb were enrolled in the study. Preoperative work-up consisted of history and clinical examination, routine laboratory tests and an anaesthetic assessment, plain radiography of the limb, 64-slice spiral CT scan of the limb and chest, Technetium-99m bone scan, and, in all of the cases, MRI of the affected limb. Antibiotics were administered before the surgery. 
Biopsy was performed for pathological examination. Chemotherapy was commenced 6 weeks before the surgery in those cases which were diagnosed as osteosarcoma and dedifferentiated chondrosarcoma. Patients were classified according to the Enneking staging system.[@b8-rado-46-03-189],[@b9-rado-46-03-189] The patients chose limb salvage surgery at their own request after receiving a detailed explanation of the conventional surgical and amputation options. Nine consecutive patients with lower/upper limb malignant tumour of bone (5 women, 4 men, mean age 28.6 years, range: 19--52 years) were treated with limb-salvaging procedures. Lesion size (longitudinal direction), location and histology are summarized in [Table 1](#t1-rado-46-03-189){ref-type="table"}.

MR imaging
----------

MR images were performed at a 1.5-T superconducting unit (Gyroscan Intera, Philips Medical Systems, Netherlands) and a synergy surface coil was used. The sequences included transverse, sagittal and coronal turbo spin echo T1- and fat-suppressed T2-weighted images. The parameters of these sequences were TR/TE = 400/20 ms for T1-weighted imaging, TR/TE = 3500/120 ms for T2-weighted imaging, a field of view of 480 mm×480 mm for sagittal imaging, 40 mm×40 mm for transverse imaging and 480 mm×480 mm for coronal imaging, with a matrix of 512×512, 4--6 signal acquisitions and a slice thickness/gap = 5/0.5 mm. Contrast-enhanced sagittal, coronal and transverse T1-weighted imaging were obtained after the intravenous injection of gadopentetate dimeglumine (Magnevist, Schering, Berlin, Germany) with a dosage of 0.2 mmol/kg of body weight. 
The raw data, obtained using an axial collimation of 64×0.6 mm, a pitch of 1.0, a tube voltage of 120 kV and a tube current of 360 mAs, were reconstructed into contiguous 1-mm thick slices with an increment of 0.5 mm, a field of view of 376 mm × 376 mm and a matrix of 512 × 512 by using the standard soft tissue and bone algorithm. These thin-slice images were postprocessed by using the techniques of multiplanar reformation (MPR) and volume rendering (VR) to demonstrate the lesion details and perform related measurements.

Preoperative planning
---------------------

All preoperative radiographs were evaluated by one radiologist and two consultant orthopaedic surgeons, who were members of the surgical team performing the operations. First, the osteotomy plane was determined separately on CT and MRI. On orthogonal coronal enhanced MR images and CT MPR images, the bulk margin of the tumour in the medullary cavity was defined according to the different signal characteristics or attenuation of the tumour itself and the marrow oedema around the tumour. Then, the maximum distance from the top of the greater trochanter to this tumour margin was measured on orthogonal coronal T1-weighted MRI images if the tumour was located in the proximal part of the femur. The maximum distance from the knee joint line to the tumour margin was measured for the tumours located in the distal part of the femur. The maximum distance was defined as the intramedullary extension of the primary tumour and subsequently was used as a reference for the CT measurement. The osteotomy plane on CT MPR images was defined 30 mm distal from the margin of the tumour. This distance was also used to determine the length of the extramedullary part of the prosthesis. 
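The planning rule in this paragraph is simple arithmetic: the osteotomy level is the measured intramedullary tumour extension plus the 30 mm safety margin, and the same figure sizes the extramedullary part of the prosthesis. A minimal sketch (the 110 mm extension below is a hypothetical value, not a measurement from the study):

```python
def osteotomy_level_mm(tumour_extension_mm, margin_mm=30.0):
    """Distance from the bony landmark (top of the greater trochanter or
    knee joint line) to the osteotomy plane: the intramedullary tumour
    extension plus the safety margin used in this study (30 mm)."""
    return tumour_extension_mm + margin_mm

# hypothetical tumour extending 110 mm from the knee joint line
extramedullary_length = osteotomy_level_mm(110.0)
print(extramedullary_length)  # -> 140.0
```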
After the osteotomy plane had been determined, the detailed shape of the medullary cavity of the preserved part of the femur was assessed using the orthogonal MPR technique to determine the diameter and length of the intramedullary part of the prosthesis. The diameters of the medullary cavity at the level of the osteotomy plane and at the narrowest plane were measured to determine the diameter of the intramedullary stem of the prosthesis. The length of the intramedullary stem should be well matched to the length of the medullary cavity of the preserved part of the femur, optimally equal to the length of the extramedullary part of the prosthesis. Finally, the central axis of the femoral shaft measured on CT was used as a reference. The offset, the distance from the central axis of the femoral shaft to the rotation centre of the femoral head, was the index used to determine the neck length of the prosthesis.

Surgery
-------

All patients underwent en bloc resection and customized prosthetic reconstruction. An anterolateral incision encircling the biopsy scar was used. Limb-salvage surgery consisted of intentional marginal excision preserving important structures such as the major neurovascular bundles, tendons and ligaments. The osteotomy plane, 30 mm distal from the primary tumour, was confirmed based on MRI for all patients. For patients with a lesion in the proximal part of the femur or humerus, the customized prosthesis was secured with methylmethacrylate cement after the resection. For patients with a tumour in the distal part of the femur, en bloc resection including the tibial plateau was performed and the customized prosthesis was secured with methylmethacrylate cement in both the tibia and the femur. The extensor mechanism was reconstructed by reattaching the patellar tendon to the slot on the tibial component. After surgery, functional rehabilitation and neoadjuvant chemotherapy were performed.
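The planning rule above reduces to simple arithmetic; a minimal sketch in Python (the function name is ours, and the example value is the MRI-measured extension of patient 4 in Table 1):

```python
# The osteotomy plane is set 30 mm distal to the tumour margin measured
# on MRI, and the same distance fixes the length of the extramedullary
# part of the prosthesis.

SAFETY_MARGIN_MM = 30.0  # distance distal to the tumour margin

def osteotomy_plane_mm(intramedullary_extension_mm: float) -> float:
    """Distance from the bony landmark (top of the greater trochanter,
    or the knee joint line) to the osteotomy plane."""
    return intramedullary_extension_mm + SAFETY_MARGIN_MM

# Patient 4 of Table 1: tumour extension of 10.0 cm (100 mm) on MRI.
print(osteotomy_plane_mm(100.0))  # 130.0
```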
Postoperative measurement
-------------------------

After surgery, the patients were followed for a mean of 13 months (range, 9 to 20 months). The postoperative assessment of the prosthesis was performed on plain radiography. The central axis of the femoral or humeral shaft and the offset were defined. The vertical distance from the line between the tops of the bilateral ischial tuberosities to the femoral condylar plane was assessed to evaluate the change in the length of the lower limbs. The change in length of the upper limbs was not assessed in the humeral tumour cases. Functional evaluation was performed in all patients using the 30-point functional classification system of the Musculoskeletal Tumour Society.[@b8-rado-46-03-189]

Statistical analysis
--------------------

Data were expressed as mean ± SD. All measured values were normally distributed (Kolmogorov-Smirnov test). A paired Student's *t* test was used to evaluate the differences between the preoperative planning and postoperative measurements. Values of *p* \< 0.05 were considered statistically significant. The statistical analysis was done with SPSS, version 12.0 (SPSS, Inc.).

Results
=======

The mean postoperative functional evaluation score was 23.3 ± 2.7 (range, 15--27) according to Enneking's evaluation. Excellent or good function was achieved in all patients, and all patients had a preserved stable joint ([Table 2](#t2-rado-46-03-189){ref-type="table"}). There were no local recurrences, metastases or aseptic loosening, as determined by bone scan, CT scan, ultrasonic examination and laboratory tests, in any patient until the end of the follow-up.
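The paired Student's *t* test described above needs nothing beyond basic arithmetic; a minimal sketch in Python using the planned and postoperative leg lengths of Table 4 (the critical value 2.447 is the standard two-sided threshold for *p* \< 0.05 with 6 degrees of freedom):

```python
import math

# Planned vs postoperative leg length for the seven lower-limb patients
# (values and units as in Table 4).
planned = [38.7, 37.1, 38.0, 37.0, 36.0, 37.0, 37.7]
postop  = [39.4, 36.6, 38.0, 36.5, 35.5, 37.2, 37.4]

diffs = [b - a for a, b in zip(planned, postop)]
n = len(diffs)
mean = sum(diffs) / n
var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
t_stat = mean / math.sqrt(var / n)                   # paired Student's t

T_CRIT = 2.447  # two-sided critical value, alpha = 0.05, df = 6
significant = abs(t_stat) > T_CRIT
print(f"t = {t_stat:.3f}; significant: {significant}")
```

The result (|t| ≈ 0.74, below 2.447) matches the paper's finding of no significant difference between the planned and postoperative limb lengths.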
Accuracy of determination of the tumour boundary
------------------------------------------------

To determine the accuracy of the tumour boundary defined by MRI and CT, specimens were collected 1 cm and 2 cm proximal and 1 cm and 2 cm distal to the tumour plane as determined by MRI and CT, and were examined histopathologically ([Figure 1](#f1-rado-46-03-189){ref-type="fig"},[2](#f2-rado-46-03-189){ref-type="fig"}). There was a significant difference in tumour extension between the MRI and CT measurements (P\<0.05). The tumour extension measured on MRI was not statistically different from the actual extension (P\>0.05), while the extension measured on CT was less than the actual extension ([Table 3](#t3-rado-46-03-189){ref-type="table"}).

Accuracy of reconstruction of the limb length
---------------------------------------------

Before and after the operation, there was no significant difference in the length and offset of the affected lower limb ([Table 4](#t4-rado-46-03-189){ref-type="table"}, [Figure 3](#f3-rado-46-03-189){ref-type="fig"}, [4](#f4-rado-46-03-189){ref-type="fig"}).

Discussion
==========

The effect of CT combined with MRI on the determination of the invasiveness range of malignant bone tumours
-----------------------------------------------------------------------------------------------------------

Preoperative imaging plays an important role in determining the stage of bone tumours and hence in the choice of an appropriate therapy for the affected patients. An appropriate imaging protocol should always begin with plain radiography. If an aggressive or malignant lesion is suspected, further evaluation with cross-sectional imaging such as CT or MR imaging is needed. CT and MRI are imaging methods often combined in the diagnostic work-up of many oncology tumours.[@b10-rado-46-03-189],[@b11-rado-46-03-189] CT is useful for a detailed assessment of subtle bony lesions and anatomically complex bones.
MRI is particularly useful for determining the tumour extension within the medullary compartment and is able to detect tumour involvement of the adjacent muscle compartments, neurovascular structures and joints. Fat-suppressed T2-weighted imaging, proton-density-weighted imaging and contrast-enhanced T1-weighted sequences were frequently used to evaluate neurovascular bundle involvement.[@b12-rado-46-03-189]--[@b13-rado-46-03-189] Currently, MR imaging has become the modality of choice in the local staging of primary bone tumours. Many studies have investigated the accuracy of MRI in determining the infiltration range of osteosarcoma. Sundaram *et al.* first reported that MRI would not overestimate the range of osteosarcoma compared with histology.[@b14-rado-46-03-189] Compared with gross and microscopic examination, MRI did not overestimate or underestimate the extent of the tumour, and the false-positive and false-negative rates were zero. Later, O'Flanagan *et al.* found that MRI could determine the aggression radius of osteosarcoma within an accuracy of 1 cm.[@b15-rado-46-03-189] For high-grade sarcomas, a wide margin is essential to obtain local control and achieve a successful limb-salvage surgery.[@b16-rado-46-03-189]--[@b17-rado-46-03-189] Meyer *et al.* designed the osteotomy plane according to MRI and found that it could be successfully determined in this way.[@b18-rado-46-03-189] In the present study, the aggression radius of the tumour determined by MRI and by the postoperative histological examination were comparable, and MRI was superior to CT for determining the tumour extension. Moreover, we found that the MRI result was slightly larger than the actual extent. The reasons might be that the low-signal peri-tumour oedema was also assigned to the radius of the tumour, resulting in an overestimation of the tumour size, or that the preoperative chemotherapy further reduced the aggression radius of the tumour.
This result was consistent with the report of O'Flanagan, who found that the aggression radius of the tumour could be evaluated accurately in coronal and sagittal views of T1-weighted images. In contrast, it would be overestimated on T2-weighted or fat-suppressed T2-weighted images because of the presence of the peri-tumour oedema. We suggest that MRI demonstrates peri-tumour oedema well in comparison with the histological findings. Since this study does not include a long-term follow-up or a large number of patients, a further study is necessary to determine the eventual effect of the MRI-based osteotomy plane on the long-term survival rate.

The value of three-dimensional CT in the reconstruction of limbs
----------------------------------------------------------------

Human skeletal structures vary greatly in size and shape. Therefore, an implant needs to be custom-made to suit the patient's bone structure and mechanical requirements. One major challenge is to restore the leg length adequately after the operation.[@b19-rado-46-03-189] A leg length discrepancy can affect joint stability and cause sciatica, low back pain and unequal stress on the hip.[@b20-rado-46-03-189] Anja *et al.* reported that, in 1171 cases of total hip replacement, most patients with a length difference of less than 1 cm walked without a limp, while a quarter of the patients with a difference of more than 2 cm suffered from claudication.[@b21-rado-46-03-189] Morrey found that inappropriate eccentricity was one of the factors that could induce dislocation of the prosthesis.[@b22-rado-46-03-189] Therefore, reducing the eccentricity would increase the risk of dislocation. Dorr *et al.* found that both a lack of strength of the abductor muscles and impingement of the hip were important reasons for dislocation.[@b23-rado-46-03-189] Clinically, many factors can lead to hip dislocation.
In the presence of a release of the soft tissue around the hip and a lack of abductor strength, a decreased offset would significantly increase the incidence of hip impingement syndrome and dislocation, which would increase the instability of the hip joint and may lead to dislocation after slight changes in posture. A smaller offset might also lead to excessive loads on the prosthesis and increase the incidence of proximal femoral osteolysis, prosthetic loosening and revision. Theoretically, increasing the offset can reduce the joint reaction force and may then reduce the wear of the polyethylene.[@b24-rado-46-03-189] Each additional 10 mm of offset can reduce the abductor force by 10% and the force on the acetabular cup by 10%. But if the offset is too large, it can easily lead to malposition of the implant, trochanteric prominence, local bursitis and pain, and it can also affect the transfer of stress and lead to unequal limb length. With the advent of multi-slice spiral CT, the development of an individualized prosthesis became realistic. The high accuracy of CT provides a reliable basis for designing individual prostheses. In this study, a three-dimensional reconstruction of the CT images was performed. After the osteotomy plane was initially determined on MRI, the detailed morphological parameters were measured on orthogonal MPR planes. The prosthesis was designed accordingly. This combined use of MRI and CT measurement provided high precision for the fit of the prosthesis and excellent functional results.[@b25-rado-46-03-189]

Conclusions
===========

Preoperative evaluation and planning, meticulous surgical technique, and adequate postoperative management are essential for bone tumour management. In the present study, MRI was found to be superior to CT for determining the tumour extension; the combined use of MRI and CT measurement provided high precision for the fit of the prosthesis and excellent functional results.

![CT and MRI determination of tumour extension.
A male, 31-year-old patient with chondrosarcoma in the proximal femur. Coronal MPR image (1), volume rendering image (2), fat-suppressed coronal T1-weighted image (3) and T1-weighted image (4) show the tumour in the proximal femur. The distance from the rotation centre of the femoral head to the tumour margin on the orthogonal coronal CT image and on the coronal T2-weighted image was 4.2 cm and 10.0 cm, respectively. The tumour boundary as determined by MRI and CT is shown by lines c and h, respectively. Lines a, b, d and e represent the planes 1 cm and 2 cm around the tumour and 1 cm and 2 cm into the normal tissue distant from the plane determined by CT. Lines f, g, i and j are the planes 1 cm and 2 cm around the tumour and 1 cm and 2 cm into the normal tissue distant from the plane determined by MRI, respectively. A--J are the corresponding histologic images (HE, ×200) of lines a--j. No tumour cells were found on planes h, i and j (Figures H, I, J).](rado-46-03-189f1){#f1-rado-46-03-189}

![CT and MRI determination of tumour extension. A female, 19-year-old patient with osteosarcoma in the distal femur. Coronal MPR image (1), volume rendering CT image (2), coronal enhanced T1-weighted image (3) and fat-suppressed T2-weighted image (4) show the tumour in the distal femur. The distance from the knee joint line to the tumour margin on the orthogonal coronal CT image and on the orthogonal coronal T2-weighted image was 7.2 cm and 8.4 cm, respectively. The tumour boundary as determined by MRI and CT is shown by lines c and f, respectively. Lines i, h, d and b are the planes 1 cm and 2 cm around the tumour and 1 cm and 2 cm into the normal tissue as determined by CT, respectively. Lines g, e and a are the planes 1 cm and 2 cm around the tumour and 1 cm into the normal tissue as determined by MRI. A--J are the corresponding histologic images (HE, ×200) of lines a--j. No tumour cells were found on planes a, b and c (Figures A, B, C).](rado-46-03-189f2){#f2-rado-46-03-189}

![Postoperative assessment of the prosthesis. A female, 19-year-old patient with osteosarcoma in the distal femur.
Preoperative anterior-posterior plain film (A) and postoperative anterior-posterior plain film (B) reveal that the length and alignment were accurate after reconstruction. The red line shows the alignment of the lower limb.](rado-46-03-189f3){#f3-rado-46-03-189}

![Postoperative assessment of the prosthesis. A male, 31-year-old patient with chondrosarcoma in the proximal femur. Preoperative volume rendering images (A) and postoperative anterior-posterior plain film (B) demonstrate that the length and offset were accurately reconstructed.](rado-46-03-189f4){#f4-rado-46-03-189}

###### Lesion features in nine patients

| No. | Primary tumour | Sex | Age (y) | Location | Size on MRI (cm) | Size on CT (cm) | Tumour edge disparity between CT and MRI (cm) |
| --- | -------------- | --- | ------- | -------- | ---------------- | --------------- | --------------------------------------------- |
| 1 | Osteosarcoma | M | 20 | Proximal femur | 6.5 | 6.0 | 0.5 |
| 2 | Chondrosarcoma | F | 29 | Proximal femur | 7.1 | 5.5 | 1.6 |
| 3 | Osteosarcoma | F | 21 | Distal femur | 15.5 | 13.3 | 2.2 |
| 4 | Chondrosarcoma | M | 31 | Proximal femur | 10.0 | 4.2 | 5.8 |
| 5 | Osteosarcoma | F | 19 | Proximal femur | 8.5 | 7.0 | 1.5 |
| 6 | Osteosarcoma | F | 52 | Distal femur | 9.0 | 7.6 | 1.4 |
| 7 | Osteosarcoma | M | 41 | Distal femur | 12.9 | 11.0 | 1.9 |
| 8 | Osteosarcoma | F | 22 | Proximal humerus | 14.2 | 12.3 | 1.9 |
| 9 | Osteosarcoma | M | 22 | Proximal humerus | 13.0 | 11.5 | 1.5 |

Tumour sizes are the longitudinal extensions measured on MRI and on CT imaging, respectively.
###### Functional evaluation according to the 30-point functional classification system of the Musculoskeletal Tumour Society

| Classification | 5 points | 4 points | 3 points | 2 points | 1 point | 0 points | Patients' score |
| -------------- | -------- | -------- | -------- | -------- | ------- | -------- | --------------- |
| Pain | None | Intermediate | Modest | Intermediate | Moderate | Severe | 4.5±0.7 |
| Emotional acceptance | Enthusiastic | Intermediate | Satisfied | Intermediate | Accepts | Dislikes | 4.4±0.5 |
| Function | No restriction | Intermediate | Recreational restriction | Intermediate | Partial disability | Total disability | 4.6±0.5 |
| Supports | None | Intermediate | Brace | Intermediate | One cane, one crutch | Two canes, two crutches | 3.8±0.5 |
| Walking ability | Unlimited | Intermediate | Limited | Intermediate | Inside only | Unable | 4.3±0.7 |
| Gait | Normal | Intermediate | Minor cosmetic problem | Intermediate | Major cosmetic problem, minor handicap | Major cosmetic problem, major handicap | 3.7±1.0 |

The scores of the postoperative functional evaluation are given as mean and standard deviation; excellent or good function was achieved in all patients.

###### Accuracy of CT and MRI for determining the tumour extension

| Histopathological result | CT margin | MRI margin | CT −2 cm | CT −1 cm | CT +1 cm | CT +2 cm | MRI −2 cm | MRI −1 cm | MRI +1 cm | MRI +2 cm |
| ------------------------ | --------- | ---------- | -------- | -------- | -------- | -------- | --------- | --------- | --------- | --------- |
| Positive | 9 | 7 | 1 | 9 | 9 | 9 | 0 | 1 | 9 | 9 |
| Negative | 0 | 2 | 8 | 0 | 0 | 0 | 9 | 7 | 0 | 0 |

Specimens were collected 1 cm and 2 cm proximal to the tumour (+1 cm, +2 cm) and 1 cm and 2 cm distal (−1 cm, −2 cm) to the margins determined by MRI and CT, and were examined histopathologically.
In 9 cases, which were underestimated by CT, a positive histopathological result was found at the point 1 cm distal to the CT-determined boundary. In 2 cases, which were overestimated by MRI, a negative histopathological result was found at the MRI-determined boundary.

###### Preoperative and postoperative measurements of leg length and offset (cm)

| No. | Contralateral side (length / offset) | Preoperative planning (length / offset) | Postoperative measurement (length / offset) | Disparity (length / offset) |
| --- | ------------------------------------ | --------------------------------------- | ------------------------------------------- | --------------------------- |
| 1 | 39.2 / 4.1 | 38.7 / 4.2 | 39.4 / 4.0 | 0.7 / 0.2 |
| 2 | 36.0 / 4.0 | 37.1 / 4.2 | 36.6 / 4.4 | 0.5 / 0.2 |
| 3 | 38.0 / 3.6 | 38.0 / 3.6 | 38.0 / 3.6 | 0.5 / 0 |
| 4 | 37.3 / 3.4 | 37.0 / 3.4 | 36.5 / 3.2 | 0.5 / 0.2 |
| 5 | 36.5 / 3.5 | 36.0 / 3.6 | 35.5 / 4.0 | 0.5 / 0.4 |
| 6 | 37.5 / 3.7 | 37.0 / 3.7 | 37.2 / 3.7 | 0.2 / 0 |
| 7 | 37.9 / 3.9 | 37.7 / 3.9 | 37.4 / 3.9 | 0.3 / 0 |

[^1]: Jie Xu and Jun Shen contributed equally to this work.

[^2]: Disclosure: No potential conflicts of interest were disclosed.
Q: Tank Tread Mathematical Model I am struggling with tank tread behaviour. The treads move individually: if I move only the left tread, the tank turns to the right, and the amount of turning depends on the difference between the tread speeds, if I am not wrong. If the left track moves at 50 km/h and the right track at 40 km/h, the tank turns right; if I then decrease the right track speed to around 30 km/h, the tank turns right again, but through which angle? When I drive the tank straight ahead with a remote control, what speed difference is needed to make it turn left by 5 degrees, or by 45 or 275 degrees? I tried placing two forces on a stick whose length equals the distance between the two treads; the net force should be located somewhere along this length, and it is easy to find if I know the force values. I also tried to think in terms of tread speeds: the treads must have corresponding angular speeds. How can I relate the turning angle to the angular speeds, or is there a better way to look at it?

A: Calling $$ \cases{ r = \text{Tread's wheel radius}\\ d = \text{mid tread's front distance}\\ \vec v_i = \text{Wheels center velocities} } $$ and assuming the whole set moves as a rigid body, we can apply the Poisson kinematics law $$ \vec v_1 \times(p_1-O) = \vec v_2 \times(p_2-O) $$ where the $p_i$ are the application points and $O$ is the instantaneous center of rotation.
Calling $G$ the arrangement geometrical center, $\vec V = V(\cos\theta,\sin\theta)$ and $p_G = (x_G,y_G)$, we have the equivalent kinematics

$$ \left( \begin{array}{c} \dot x_G\\ \dot y_G\\ \dot\theta \end{array} \right) = \left( \begin{array}{cc} \cos\theta & 0\\ \sin\theta & 0\\ 0 & 1 \end{array} \right) \left( \begin{array}{c} V\\ \omega \end{array} \right) $$

and also

$$ \left( \begin{array}{c} V\\ \omega \end{array} \right) = \left( \begin{array}{cc} \frac r2 & \frac r2\\ \frac{r}{2d} & -\frac{r}{2d} \end{array} \right) \left( \begin{array}{c} \omega_1\\ \omega_2 \end{array} \right) $$

Here $\omega$ is the rigid body's angular rotation velocity and the $\omega_i$ are the wheels' angular rotation velocities. Assuming that the wheels do not skid laterally, the following movement constraint should be considered:

$$ \dot x_G\sin\theta +\dot y_G\cos\theta = d_0\dot\theta $$

where $d_0$ is the distance between $p_G$ and the tread center. This is a rough qualitative approximation; the real tank kinematics are a lot more complex.

NOTE: attached is a MATHEMATICA script simulating the movement kinematics.

    parms = {r -> 0.5, d -> 2, d0 -> 0.1,
       wr -> UnitStep[t] - UnitStep[t - 30],
       wl -> UnitStep[t - 10] - 2 UnitStep[t - 50]};
    M = {{Cos[theta[t]], 0}, {Sin[theta[t]], 0}, {0, 1}};
    D0 = {{r/2, r/2}, {r/(2 d), -r/(2 d)}};
    equs = Thread[D[{x[t], y[t], theta[t]}, t] == M.D0.{wr, wl}];
    equstot = equs /. parms;
    cinits = {x[0] == 0, y[0] == 0, theta[0] == 0};
    tmax = 100;
    solmov = NDSolve[Join[equstot, cinits], {x, y, theta}, {t, 0, tmax}][[1]];
    gr0 = ParametricPlot[Evaluate[{x[t], y[t]} /. solmov], {t, 0, tmax}];
    car[x_, y_, theta_, e_] :=
     Module[{p1, p2, p3, bc, M, p1r, p2r, p3r},
      p1 = {0, e}; p2 = {2 e, 0}; p3 = {0, -e};
      bc = (p1 + p2 + p3)/3;
      M = RotationMatrix[theta];
      p1r = M.(p1 - bc) + {x, y};
      p2r = M.(p2 - bc) + {x, y};
      p3r = M.(p3 - bc) + {x, y};
      Return[{p1r, p2r, p3r, p1r}]]
    nshots = 100;
    dt = Floor[tmax/nshots];
    path0 = Evaluate[{x[t], y[t], theta[t]} /. solmov];
    path = Table[path0 /.
    {t -> k dt}, {k, 0, Floor[tmax/dt]}];
    grpath = Table[
       ListLinePlot[car[path[[k, 1]], path[[k, 2]], path[[k, 3]], 0.2],
        PlotStyle -> Red, PlotRange -> All], {k, 1, Length[path]}];
    Show[gr0, grpath, PlotRange -> All]
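When the tread speeds are constant, the same kinematics answer the original "which angle" question in closed form: the heading rate is $\omega = (v_r - v_l)/W$ with $W = 2d$ the track separation, and the turn angle is just $\omega$ times the driving time. A minimal sketch (in Python rather than Mathematica, with illustrative numbers; no track slip assumed):

```python
import math

def turn_angle(v_left, v_right, track_width, duration):
    """Heading change (degrees) of a differential-drive (tank-tread)
    vehicle driving with constant tread speeds for `duration` seconds.

    Body angular velocity: omega = (v_right - v_left) / track_width,
    matching omega = r*(w1 - w2)/(2d) above with track_width = 2d.
    """
    omega = (v_right - v_left) / track_width  # rad/s
    return math.degrees(omega * duration)

# Illustrative: track width 2 m, 1 s of driving. A 0.5 m/s speed
# difference turns the vehicle by about 14.3 degrees.
angle = turn_angle(v_left=2.0, v_right=2.5, track_width=2.0, duration=1.0)
print(f"{angle:.1f} deg")
```

Inverted, a desired heading change $\theta$ (in radians) in time $t$ needs a speed difference $\Delta v = W\theta/t$, at any common forward speed.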
<!DOCTYPE html>
<html lang="en" data-navbar="/account/navbar-profile.html">
<head>
  <meta charset="utf-8" />
  <title translate="yes">Establecer o perfil predeterminado</title>
  <link href="/public/pure-min.css" rel="stylesheet">
  <link href="/public/content.css" rel="stylesheet">
  <link href="/public/content-additional.css" rel="stylesheet">
  <base target="_top" href="/">
</head>
<body>
  <h1 translate="yes">Establecer o perfil predeterminado</h1>
  <p translate="yes">O teu perfil predeterminado serve como principal punto de contacto da túa conta.</p>
  <div id="message-container"></div>
  <form id="submit-form" method="post" class="pure-form" action="/account/set-default-profile" name="submit-form">
    <fieldset>
      <div class="pure-control-group">
        <select id="profileid" name="profileid">
          <option value="" translate="yes">Selecciona perfil</option>
        </select>
      </div>
      <button id="submit-button" type="submit" class="pure-button pure-button-primary" translate="yes">Establecer o perfil predeterminado</button>
    </fieldset>
  </form>
  <template id="success">
    <div class="success message" translate="yes">Éxito! O perfil é o teu estándar</div>
  </template>
  <template id="unknown-error">
    <div class="error message" translate="yes">Erro! Produciuse un erro descoñecido</div>
  </template>
  <template id="default-profile">
    <div class="error message" translate="yes">Erro! Este é xa o teu perfil predeterminado</div>
  </template>
  <template id="profile-option">
    <option value="${profile.profileid}">${profile.contactEmail}, ${profile.firstName} ${profile.lastName}</option>
  </template>
</body>
</html>
Building upon the pioneering work of Vicsek *et al.*[@b1], physicists, mathematicians and biologists have contemplated the self-organization of living-organism groups into flocks as an emergent process stemming from simple interaction rules at the individual level[@b2][@b3][@b4]. This idea has been supported by quantitative trajectory analysis in animal groups[@b5][@b6][@b7], together with a vast number of numerical and theoretical models[@b3][@b4], and more recently by the observations of flocking behaviour in ensembles of non-living motile particles such as shaken grains, active colloids, and mixtures of biofilaments and molecular motors[@b8][@b9][@b10][@b11][@b12]. From a physicist\'s perspective, these various systems are considered as different instances of polar active matter, which encompasses any ensemble of motile bodies endowed with local velocity--alignment interactions. The current paradigm for flocking physics is the following. Active particles are persistent random walkers, which when dilute form a homogeneous isotropic gas. Upon increasing density, collective motion emerges in the form of spatially localized swarms that may cruise in a sea of randomly moving particles; further increasing density, a homogeneous polar liquid forms and spontaneously flows along a well-defined direction[@b1][@b13][@b14]. This picture is the outcome of experiments, simulations and theories mostly performed in unbounded or periodic domains. Beyond this picture, significant attention has been devoted over the last five years to confined active matter[@b3][@b12][@b15][@b16][@b17][@b18][@b19][@b20][@b21][@b22][@b23][@b24][@b25][@b26]. Confined active particles have consistently, yet not systematically, been reported to self-organize into vortex-like structures. However, unlike for our understanding of flocking, we are still lacking a unified picture to account for the emergence and structure of such vortex patterns. 
This situation is mostly due to the extreme diversity in the nature and symmetries of the interactions between the active particles that have been hitherto considered. Do active vortices exist only in finite-size systems as in the case of bacterial suspensions[@b17], which lose this beautiful order and display intermittent turbulent dynamics[@b27] when unconfined? What are the necessary interactions required to observe and/or engineer bona fide stationary swirling states of active matter? In this paper, we answer these questions by considering the impact of geometrical boundaries on the collective behaviour of motile particles endowed with velocity--alignment interactions. Combining quantitative experiments on motile colloids, numerical simulations and analytical theory, we elucidate the phase behaviour of *polar* active matter restrained by geometrical boundaries. We use colloidal rollers, which, unlike most of the available biological self-propelled bodies, interact via well-established dynamical interactions[@b11]. We first exploit this unique model system to show that above a critical concentration populations of motile colloids undergo a non-equilibrium phase transition from an isotropic gaseous state to a novel ordered state where the entire population self-organizes into a single heterogeneous steadily rotating vortex. This self-organization is *not* the consequence of the finite system size. Rather, this emergent vortex is a genuine state of polar active matter lying on the verge of a macroscopic phase separation. This novel state is the only ordered phase found when unidirectional directed motion is hindered by convex isotropic boundaries. We then demonstrate theoretically that a competition between alignment, repulsive interactions and confinement is necessary to yield large-scale vortical motion in ensembles of motile particles interacting via alignment interactions, thereby extending the relevance of our findings to a broad class of active materials. 
Results
=======

Experiments
-----------

The experimental setup is fully described in the *Methods* section and in [Fig. 1a,b](#f1){ref-type="fig"}. Briefly, we use colloidal rollers powered by the Quincke electrorotation mechanism as thoroughly explained in ref. [@b11]. An electric field **E**~**0**~ is applied to insulating colloidal beads immersed in a conducting fluid. Above a critical field amplitude *E*~Q~, the symmetry of the electric charge distribution at the bead surface is spontaneously broken. As a result, a net electric torque acts on the beads causing them to rotate at a constant rate around a random axis transverse to the electric field[@b28][@b29][@b30]. When the colloids sediment, or are electrophoretically driven, onto one of the two electrodes, rotation is converted into a net rolling motion along a random direction. Here, we use poly(methyl methacrylate) (PMMA) spheres of radius *a*=2.4 μm immersed in a hexadecane solution. As sketched in [Fig. 1a](#f1){ref-type="fig"}, the colloids are handled and observed in a microfluidic device made of double-sided scotch tape and of two glass slides coated with an indium-tin-oxide layer. The ITO layers are used to apply a uniform DC field in the *z*-direction, with *E*~0~=1.6 V μm^−1^ (*E*~0~=1.1*E*~Q~). Importantly, the electric current is nonzero solely in a disc-shaped chamber at the centre of the main channel. As exemplified by the trajectories shown in [Fig. 1b](#f1){ref-type="fig"} and in [Supplementary Movie 1](#S1){ref-type="supplementary-material"}, Quincke rotation is hence restrained to this circular region in which the rollers are trapped. We henceforth characterize the collective dynamics of the roller population for increasing values of the colloid packing fraction *φ*~0~.
Individual self-propulsion
--------------------------

For area fractions smaller than , the ensemble of rollers uniformly explores the circular confinement as illustrated by the flat profile of the local packing fraction averaged along the azimuthal direction *φ*(*r*) in [Fig. 2a](#f2){ref-type="fig"}. The rollers undergo uncorrelated persistent random walks as demonstrated in [Fig. 2b,c](#f2){ref-type="fig"}. The probability distribution of the roller velocities is isotropic and sharply peaked on the typical speed *v*~0~=493±17 μm s^−1^. In addition, the velocity autocorrelation function decays exponentially at short time as expected from a simple model of self-propelled particles having a constant speed *v*~0~ and undergoing rotational diffusion, with *D*^−1^=0.31±0.02 s hardly depending on the area fraction (see [Supplementary Note 1](#S1){ref-type="supplementary-material"}). These quantities correspond to a persistence length *v*~0~/*D*≈0.15 mm that is about a decade smaller than the confinement radius *R*~c~ used in our experiments: 0.9 mm\<*R*~c~\<1.8 mm. At long time, because of the collisions on the disc boundary, the velocity autocorrelation function sharply drops to 0 as seen in [Fig. 2c](#f2){ref-type="fig"}. Unlike swimming cells[@b26][@b31], self-propelled grains[@b8][@b22][@b23] or autophoretic colloids[@b32], dilute ensembles of rollers do not accumulate at the boundary. Instead, they bounce off the walls of this virtual box as shown in a close-up of a typical roller trajectory in [Fig. 2d](#f2){ref-type="fig"}, and in the [Supplementary Movie 1](#S1){ref-type="supplementary-material"}. As a result, the outer region of the circular chamber is depleted, and the local packing fraction vanishes as *r* goes to *R*~c~, [Fig. 2a](#f2){ref-type="fig"}. The repulsion from the edges of the circular hole in the microchannel stems from another electrohydrodynamic phenomenon[@b33]. When an electric field is applied, a toroidal flow sketched in [Fig.
1a](#f1){ref-type="fig"} is osmotically induced by the transport of the electric charges at the surface of the insulating adhesive films. Consequently, a net inward flow sets in at the vicinity of the bottom electrode. As the colloidal rollers are prone to reorient in the direction of the local fluid velocity[@b11], this vortical flow repels the rollers at a distance typically set by the channel height *H* while leaving unchanged the colloid trajectories in the centre of the disc. This electrokinetic flow will be thoroughly characterized elsewhere.

Collective motion in confinement
--------------------------------

As the area fraction is increased above , collective motion emerges spontaneously at the entire population level. When the electric field is applied, large groups of rollers akin to the band-shaped swarms reported in[@b11] form and collide. However, unlike what was observed in periodic geometries, the colloidal swarms are merely transient and ultimately self-organize into a single vortex pattern spanning the entire confining disc as shown in [Fig. 3a](#f3){ref-type="fig"} and [Supplementary Movie 2](#S1){ref-type="supplementary-material"}. Once formed, the vortex is very robust, rotates steadily and retains an axisymmetric shape. To go beyond this qualitative picture, we measured the local colloid velocity field **v**(**r**, *t*) and use it to define the polarization field **Π**(**r**, *t*)≡**v**/*v*~0~, which quantifies local orientational ordering. The spatial average of **Π** vanishes when a coherent vortex forms, therefore we use its projection along the azimuthal direction as a macroscopic order parameter to probe the transition from an isotropic gas to a polar-vortex state. As illustrated in [Fig. 3b](#f3){ref-type="fig"}, displays a sharp bifurcation from an isotropic state with to a globally ordered state with equal probability for left- and right-handed vortices above . Furthermore, [Fig.
3b](#f3){ref-type="fig"} demonstrates that this bifurcation curve does not depend on the confinement radius *R*~c~. The vortex pattern is spatially heterogeneous. The order parameter and density fields averaged over time are displayed in [Fig. 3c,d](#f3){ref-type="fig"}, respectively. At first glance, the system looks phase-separated: a dense and ordered polar-liquid ring where all the colloids cruise along the azimuthal direction encloses a dilute and weakly ordered core at the centre of the disc. We shall also stress that regardless of the average packing fraction, the packing fraction in the vortex core is measured to be very close to , the average concentration below which the population is in a gaseous state, see [Fig. 3e](#f3){ref-type="fig"}. This phase-separation picture is consistent with the variations of the area occupied by the ordered outer ring, *A*~ring~, for different confinement radii *R*~c~, as shown in [Fig. 3e](#f3){ref-type="fig"}. We define *A*~ring~ as the area of the region where the order parameter exceeds 0.5, and none of the results reported below depend on this arbitrary choice for the definition of the outer-ring region. *A*~ring~ also bifurcates as *φ*~0~ exceeds , and increases with *R*~c~. Remarkably, all the bifurcation curves collapse on a single master curve when *A*~ring~ is rescaled by the overall confinement area , [Fig. 3f](#f3){ref-type="fig"}. In other words, the strongly polarized outer ring always occupies the same area fraction irrespective of the system size, as would a molecular liquid coexisting with a vapour phase at equilibrium. However, if the system were genuinely phase-separated, one should be able to define an interface between the dense outer ring and the dilute inner core, and this interface should have a constant width. This requirement is not borne out by our measurements. The shape of the radial density profiles of the rollers in [Fig. 
3g](#f3){ref-type="fig"} indeed makes it difficult to unambiguously define two homogeneous phases separated by a clear interface. Repeating the same experiments in discs of increasing radii, we found that the density profiles are self-similar, [Fig. 3h](#f3){ref-type="fig"}. The width of the region separating the strongly polarized outer ring from the inner core scales with the system size, which is the only characteristic scale of the vortex patterns. The colloidal vortices therefore correspond to a monophasic yet spatially heterogeneous liquid state. To elucidate the physical mechanisms responsible for this intriguing structure, we now introduce a theoretical model that we solve both numerically and analytically. Numerical simulations --------------------- The Quincke rollers are electrically powered and move in a viscous fluid, and hence interact at a distance both hydrodynamically and electrostatically. In ref. [@b11], starting from the Stokes and Maxwell equations, we established the equations of motion of a dilute ensemble of Quincke rollers within a pairwise additive approximation. When isolated, the *i*th roller located at **r**~*i*~ moves at a speed *v*~0~ along the direction opposite to the in-plane component of the electrostatic dipole responsible for Quincke rotation[@b11]. When interacting via contact and electrostatic repulsive forces, the roller velocity and orientation are related by: Inertia is obviously ignored, and for the sake of simplicity we model all the central forces acting on the colloids as an effective hard-disc exclusion of range *b*. In addition, *θ*~*i*~ follows an overdamped dynamics in an effective angular potential capturing both the electrostatic and hydrodynamic torques acting on the colloids[@b11]: The *ξ*~*i*~'s account for rotational diffusion of the rollers. They are uncorrelated white noise variables with zero mean and variance 〈*ξ*~*i*~(*t*)*ξ*~*j*~(*t*′)〉=2*Dδ*(*t*−*t*′)*δ*~*ij*~. 
The effective potential in [equation 2](#eq15){ref-type="disp-formula"} is composed of three terms with clear physical interpretations: where and **I** is the identity matrix. The symmetry of these interactions is not specific to colloidal rollers and could have been anticipated phenomenologically exploiting both the translational invariance and the polar symmetry of the surface-charge distribution of the colloids[@b34]. The first term promotes alignment and is such that the effective potential is minimized when interacting rollers propel along the same direction. *A*(*r*) is positive, decays exponentially with *r*/*H*, and results both from hydrodynamic and electrostatic interactions. The second term gives rise to repulsive *torques*, and is minimized when the roller orientation points away from its interacting neighbour. *B*(*r*) also decays exponentially with *r*/*H* but solely stems from electrostatics. The third term has a less intuitive meaning, and promotes the alignment of along a dipolar field oriented along . This term is a combination of hydrodynamic and electrostatic interactions, and includes a long-ranged contribution. The functions *A*(*r*), *B*(*r*) and *C*(*r*) are provided in the [Supplementary Note 2](#S1){ref-type="supplementary-material"}. As it turns out, all the physical parameters (roller velocity, field amplitude, fluid viscosity, etc.) that are needed to compute their exact expressions have been measured, or estimated up to logarithmic corrections, see [Supplementary Note 2](#S1){ref-type="supplementary-material"}. We are then left with a model having a single free parameter that is the range, *b*, of the repulsive *forces* between colloids. We numerically solved this model in circular simulation boxes of radius *R*~c~ with reflecting boundary conditions using an explicit Euler scheme with adaptive time-stepping. All the numerical results are discussed using the same units as in the experiments to facilitate quantitative comparisons. 
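The integration step just described can be sketched as follows. This is a minimal illustration with a fixed (rather than adaptive) time step and a crude reflecting rule at the boundary; the placeholder `torque` stands in for the gradient of the effective angular potential (the *A*, *B*, *C* couplings of the model), so all names and parameter values below are illustrative and not taken from the paper.

```python
import numpy as np

def euler_step(pos, theta, v0, D, dt, torque, Rc, rng):
    """One explicit Euler update for N rollers in a disc of radius Rc.

    `torque(pos, theta)` is a placeholder for the alignment/repulsion
    couplings; rotational noise has variance 2*D*dt per step.
    """
    # positions advance along the heading at constant speed v0
    pos = pos + v0 * dt * np.stack([np.cos(theta), np.sin(theta)], axis=1)
    # headings follow the effective torque plus rotational diffusion
    theta = theta + dt * torque(pos, theta) \
        + np.sqrt(2.0 * D * dt) * rng.standard_normal(theta.shape)
    # crude reflecting boundary: re-aim escaped rollers towards the centre
    r = np.linalg.norm(pos, axis=1)
    out = r > Rc
    theta[out] = np.arctan2(-pos[out, 1], -pos[out, 0])
    pos[out] *= (Rc / r[out])[:, None]
    return pos, theta

# usage with a zero torque (non-interacting limit)
rng = np.random.default_rng(1)
N = 200
pos = rng.uniform(-0.5, 0.5, size=(N, 2))
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)
for _ in range(100):
    pos, theta = euler_step(pos, theta, v0=0.5, D=3.2, dt=1e-3,
                            torque=lambda p, t: np.zeros_like(t),
                            Rc=1.0, rng=rng)
```

With a zero torque this reduces to the non-interacting persistent random walk of Fig. 2; plugging an alignment torque into the placeholder is where the collective behaviour enters.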
The simulations revealed a richer phenomenology than the experiments, as captured by the phase diagram in [Fig. 4a](#f4){ref-type="fig"} corresponding to *R*~c~=0.5 mm. By systematically varying the range of the repulsive forces and the particle concentration, we found that the (*φ*~0~, *b*) plane is typically divided into three regions. At small packing fractions, the particles hardly interact and form an isotropic gaseous phase. At high fractions, after a transient dynamics strikingly similar to that observed in the experiments, the rollers self-organize into a macroscopic vortex pattern, [Fig. 4b](#f4){ref-type="fig"} and [Supplementary Movie 3](#S1){ref-type="supplementary-material"}. However, at intermediate densities, we found that collective motion emerges in the form of a macroscopic swarm cruising around the circular box through an ensemble of randomly moving particles, [Fig. 4c](#f4){ref-type="fig"} and [Supplementary Movie 4](#S1){ref-type="supplementary-material"}. These swarms are akin to the band patterns consistently reported for polar active particles at the onset of collective motion in periodic domains[@b11][@b14]. This seeming conflict between our experimental and numerical findings is solved by looking at the variations of the swarm length *ξ*~s~ with the confinement radius *R*~c~ in [Fig. 4d](#f4){ref-type="fig"}. We define *ξ*~s~ as the correlation length of the density fluctuations in the azimuthal direction. The angular extension of the swarms *ξ*~s~/*R*~c~ increases linearly with the box radius. Therefore, for a given value of the interaction parameters, there exists a critical box size above which the population undergoes a direct transition from a gaseous to an axisymmetric vortex state. For *b*=3*a*, which was measured to be the typical interparticle distance in the polar liquid state[@b11], this critical confinement is *R*~c~=1 mm. 
This value is close to the smallest radius accessible in our experiments where localized swarms were never observed, thereby solving the apparent discrepancy with the experimental phenomenology. More quantitatively, we systematically compare our numerical and experimental measurements in [Fig. 3b,c](#f3){ref-type="fig"} for *R*~c~=1 mm. Even though a number of simplifications were needed to establish [equations 1](#eq13){ref-type="disp-formula"}, [2](#eq15){ref-type="disp-formula"} and [3](#eq16){ref-type="disp-formula"} (ref. [@b11]), the simulations account very well for the sharp bifurcation yielding the vortex patterns as well as their self-similar structure. This last point is proven quantitatively in [Fig. 3h](#f3){ref-type="fig"}, which demonstrates that the concentration increases away from the vortex core, where , over a scale that is solely set by the confinement radius. We shall note however that the numerical simulations underestimate the critical packing fraction at which collective motion occurs, which is not really surprising given the number of approximations required to establish the interaction parameters in the equations of motion [equation 3](#eq16){ref-type="disp-formula"}. We unambiguously conclude from this set of results that [equations 1](#eq13){ref-type="disp-formula"}, [2](#eq15){ref-type="disp-formula"} and [3](#eq16){ref-type="disp-formula"} include all the physical ingredients that chiefly dictate the collective dynamics of the colloidal rollers. We now exploit the opportunity offered by the numerics to turn on and off the four roller-roller interactions one at a time, namely the alignment torque, *A*, the repulsion torque *B* and force *b*, and the dipolar coupling *C*. Snapshots of the resulting particle distributions are reported in [Fig. 4e](#f4){ref-type="fig"}. None of these four interactions alone yields a coherent macroscopic vortex. 
We stress that when the particles solely interact via pairwise-additive alignment torques, *B*=*C*=*b*=0, the population condenses into a single compact polarized swarm. Potential velocity-alignment interactions are *not* sufficient to yield macroscopic vortical motion. We evidence in [Fig. 4e](#f4){ref-type="fig"} (top-right and bottom-left panels) that the combination of alignment (*A*≠0) and of repulsive interactions (*B*≠0 and/or *b*≠0) is necessary and sufficient to observe spontaneously flowing vortices. Analytical theory ----------------- Having identified the very ingredients necessary to account for our observations, we can now gain more detailed physical insight into the spatial structure of the vortices by constructing a minimal hydrodynamic theory. We start from [equations 1](#eq13){ref-type="disp-formula"}, [2](#eq15){ref-type="disp-formula"} and [3](#eq16){ref-type="disp-formula"}, ignoring the *C* term in [equation 3](#eq16){ref-type="disp-formula"}. The model can be further simplified by inspecting the experimental variations of the individual roller velocity with the local packing fraction, see [Supplementary Fig. 1](#S1){ref-type="supplementary-material"}. The roller speed only displays variations of 10% as *φ*(**r**) increases from 10^−2^ to 4 × 10^−2^. These minute variations suggest ignoring the contributions of the repulsive forces in [equation 1](#eq13){ref-type="disp-formula"}, and solely considering the interplay between the alignment and repulsion torques on the orientational dynamics of [equation 2](#eq15){ref-type="disp-formula"}. These simplified equations of motion are coarse-grained following a conventional kinetic-theory framework reviewed in[@b4] to establish the equivalent to the Navier-Stokes equations for this two-dimensional active fluid. In a nutshell, the two observables we need to describe are the local area fraction φ and the local momentum field *φ***Π**. 
They are related to the first two angular moments of the one-particle distribution function , which evolves according to a Fokker-Planck equation derived from the conservation of and [equations 1](#eq13){ref-type="disp-formula"} and [2](#eq15){ref-type="disp-formula"}. This equation is then recast into an infinite hierarchy of equations for the angular moments of . The first two equations of this hierarchy, corresponding to the mass conservation equation and to the momentum dynamics, are akin to the continuous theory first introduced phenomenologically by Toner and Tu[@b2][@b4]: where **Q** is the usual nematic order parameter. The meaning of the first equation is straightforward, while the second calls for some clarification. The divergence term on the left-hand side of [equation 5](#eq26){ref-type="disp-formula"} is a convective kinematic term associated with the self-propulsion of the particles. The force field **F** on the right-hand side would vanish for non-interacting particles. Here, at first order in a gradient expansion, **F** is given by: This force field has a clear physical interpretation. The first term reflects the damping of the polarization by the rotational diffusion of the rollers. The second term, defined by the time rate *α*=(∫~*r*\>2*a*~*rA*(*r*)d*r*)/*a*^2^, echoes the alignment rule at the microscopic level and promotes a nonzero local polarization. The third term, involving *β*=(∫~*r*\>2*a*~*r*^2^*B*(*r*)d*r*)/(2*a*^2^), is an anisotropic pressure reflecting the repulsive interactions between rollers at the microscopic level. [Equations 4](#eq25){ref-type="disp-formula"} and [5](#eq26){ref-type="disp-formula"} are usually complemented by a dynamical equation for **Q** and a closure relation. This additional approximation, however, is not needed to demonstrate the existence of vortex patterns and to rationalize their spatial structure. 
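In schematic form, the two equations described above are the standard lowest moments of the kinetic hierarchy, using the two-dimensional decomposition of the second angular moment into its isotropic part and the nematic tensor **Q**. The force field **F** is left symbolic here, so this is an illustration of the structure rather than a reproduction of the paper's exact equations 4-6:

```latex
\partial_t \varphi + v_0\,\nabla\cdot\left(\varphi\,\boldsymbol{\Pi}\right) = 0,
\qquad
\partial_t\!\left(\varphi\,\boldsymbol{\Pi}\right)
+ v_0\,\nabla\cdot\!\left[\varphi\left(\tfrac{1}{2}\,\mathbf{I} + \mathbf{Q}\right)\right]
= \mathbf{F},
```

with **F** collecting the three contributions described above: a damping term set by the rotational diffusivity *D*, an alignment term set by *α* and an anisotropic pressure set by *β*.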
Looking for axisymmetric steady states, it readily follows from mass conservation, [equation 4](#eq25){ref-type="disp-formula"}, that the local fields must take the simple forms: *φ*=*φ*(*r*), and where *Q*(*r*)\>0. We also infer the relation from the projection of the momentum equation, [equation 5](#eq26){ref-type="disp-formula"}, on the azimuthal direction. This relation tells us that the competition between rotational diffusion and local alignment results in a mean-field transition from an isotropic state with to a polarized vortex state with and *Q*=(1/2)(1−*D*/(*αφ*)). This transition occurs when φ exceeds , the ratio of the rotational diffusivity to the alignment strength at the hydrodynamic level. In addition, the projection of [equation 5](#eq26){ref-type="disp-formula"} on the radial direction sets the spatial structure of the ordered phase: with again in the ordered polar phase. This equation has a clear physical meaning and expresses the balance between the centrifugal force arising from the advection of momentum along a circular trajectory and the anisotropic pressure induced by the repulsive interactions between rollers. It has an implicit solution given by *φ*(*r*) is therefore parametrized by the dimensionless number reflecting the interplay between self-propulsion and repulsive interactions. Given the experimental values of the microscopic parameters, Λ is much smaller than unity. An asymptotic analysis reveals that is the typical core radius of the vortex. For , the density increases slowly as for all *φ*~0~ and *R*~c~. As *r* reaches , it increases significantly and then grows logarithmically as away from the vortex core. However, is an integration constant, which is solely defined via the mass conservation relation: and therefore only depends on *φ*~0~ and *R*~c~. 
does not provide any intrinsic structural scale, and the vortex patterns formed in different confinements are predicted to be self-similar in agreement with our experiments and simulations despite the simplification made in the model, [Fig. 3e](#f3){ref-type="fig"}. In addition, [equation 8](#eq36){ref-type="disp-formula"} implies that the rollers self-organize by reducing their density at the centre of the vortex down to , the mean area fraction at the onset of collective motion, again in excellent agreement with our measurements in [Fig. 3e](#f3){ref-type="fig"}. To characterize the orientational structure of the vortices, an additional closure relation is now required. The simplest possible choice consists in neglecting correlations of the orientational fluctuations, and therefore assuming . This choice implies that [Equations 8](#eq36){ref-type="disp-formula"} and [9](#eq49){ref-type="disp-formula"} provide a very nice fit of the experimental polarization curve as shown in [Fig. 3b](#f3){ref-type="fig"}, and therefore capture both the pitchfork bifurcation scenario at the onset of collective motion and the saturation of the polarization at high packing fractions. The best fit is obtained for values of and *β*, respectively, five and two times larger than those deduced from the microscopic parameters. Given the number of simplifications needed to establish both the microscopic and hydrodynamic models, the agreement is very convincing. We are then left with a hydrodynamic theory with no free fitting parameter, which we use to compute the area fraction of the outer polarized ring where . The comparison with the experimental data in [Fig. 3f](#f3){ref-type="fig"} is excellent. Furthermore, [equations 8](#eq36){ref-type="disp-formula"} and [9](#eq49){ref-type="disp-formula"} predict that the rollers are on the verge of a phase separation. 
If the roller fraction in the vortex core were smaller , orientational order could not be supported and an isotropic bubble would nucleate in a polar liquid. This phase separation is avoided by the self-regulation of *φ*(*r*=0) at . Discussion ========== Altogether our theoretical results confirm that the vortex patterns stem from the interplay between self-propulsion, alignment, repulsion and confinement. Self-propulsion and alignment interactions promote a global azimuthal flow. The repulsive interactions prevent condensation of the population on the geometrical boundary and allow for extended vortex patterns. If the rollers were not confined, the population would evaporate as self-propulsion induces a centrifugal force despite the absence of inertia. We close this discussion by stressing the generality of this scenario. First, the vortex patterns do not rely on the perfect rotational symmetry of the boundaries. As illustrated in [Supplementary Fig. 2](#S1){ref-type="supplementary-material"}, the same spatial organization is observed for a variety of convex polygonal geometries. However, strongly anisotropic, and/or strongly non-convex confinements can yield other self-organized states such as vortex arrays, which we will characterize elsewhere. Second, neither the nature of the repulsive couplings nor the symmetry of the interactions yielding collective motion are crucial, thereby making the above results relevant to a much broader class of experimental systems. For instance, self-propelled particles endowed with nematic alignment rules are expected to display the same large-scale phenomenology. The existence of a centrifugal force does not rely on the direction of the individual trajectories. Shaken rods, concentrated suspensions of bacteria or motile biofilaments, among other possible realizations, are expected to have a similar phase behaviour. 
Quantitative local analysis of their spatial patterns[@b10][@b12][@b15][@b16][@b17] would make it possible to further test and elaborate our understanding of the structure of confined active matter. For instance, the polar order found in confined bacteria is destroyed upon increasing the size of the confinement. The analysis of the spatial distribution of the bacteria could be used to gain insight into the symmetries and the magnitude of the additional interactions mediated by the host fluid, which are responsible for the emergence of bacterial turbulence[@b17]. In conclusion, we take advantage of a model experimental system where ensembles of self-propelled colloids with well-established interactions self-organize into macroscopic vortices when confined by circular geometric boundaries. We identify the physical mechanism that chiefly dictates this emergent behaviour. Thanks to a combination of numerical simulations and analytical theory, we demonstrate that orientational couplings alone cannot account for collective circular motion. Repulsion between the motile individuals is necessary to balance the centrifugal flow intrinsic to any ordered active fluid and to stabilize heterogeneous yet monophasic states in a broad class of active fluids. A natural challenge is to extend this description to the compact vortices observed in the wild, for example, in shoals of fish. In the absence of confining boundaries, the centrifugal force has to be balanced by additional density-regulation mechanisms[@b35][@b36]. A structural investigation akin to the one introduced here for roller vortices could be a powerful tool to shed light on density regulation in natural flocks, which remains to be elucidated. Methods ======= Experiments ----------- We use fluorescent PMMA colloids (Thermo Scientific G0500, 2.4 μm radius), dispersed in a 0.15 mol l^−1^ AOT/hexadecane solution. The suspension is injected in a wide microfluidic chamber made of double-sided scotch tape. 
The tape is sandwiched between two ITO-coated glass slides (Solems, ITOSOL30, 80 nm thick). An additional layer of scotch tape including a hole having the desired confinement geometry is added to the upper ITO-coated slide. The holes are made with a precision plotting cutter (Graphtec Robo CE6000). The gap between the two ITO electrodes is constant over the entire chamber: *H*=220 μm. The electric field is applied by means of a voltage amplifier (Stanford Research Systems, PS350/5000 V-25 W). All the measurements were performed 5 min after the beginning of the rolling motion, when a steady state was reached for all the observables. The colloids are observed with a × 4 microscope objective for particle tracking, particle image velocimetry (PIV) and number-density measurements. High-speed movies are recorded with a CMOS camera (Basler ACE) at a frame rate of 190 fps. All images are 2,000 × 2,000 8-bit pictures. The particles are detected to sub-pixel accuracy and the particle trajectories are reconstructed using a MATLAB version of a conventional tracking code[@b37]. The PIV analysis was performed with the mpiv MATLAB code. A block size of 44 μm was used. Numerical simulations --------------------- The simulations are performed by numerically integrating the equations of motion ([equations 1](#eq13){ref-type="disp-formula"} and [2](#eq15){ref-type="disp-formula"}). Particle positions and rolling directions are initialized randomly inside a circular domain. Integration is done using an Euler scheme with an adaptive time step *δt*, and the diffusive term in the equation for the rotational dynamics is modelled as a Gaussian variable with zero mean and with variance 2*D*/*δt*. Steric exclusion between particles is captured by correcting particle positions after each time step so as to prevent overlaps. 
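The noise prescription above can be checked in isolation: sampling a Gaussian of variance 2*D*/*δt* and multiplying by *δt* at each Euler step reproduces the diffusive scaling Var[*θ*(*t*)]=2*Dt*. A quick sketch (the parameter values here are arbitrary, not the experimental ones):

```python
import numpy as np

rng = np.random.default_rng(42)
D, dt, n_steps, n_walkers = 3.2, 1e-3, 200, 100_000

theta = np.zeros(n_walkers)
for _ in range(n_steps):
    # Euler update: dt * xi with Var(xi) = 2*D/dt, as stated above
    theta += dt * rng.normal(0.0, np.sqrt(2.0 * D / dt), size=n_walkers)

t = n_steps * dt
var_expected = 2.0 * D * t  # diffusive scaling of the angular variance
```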
The bouncing of particles off the confining boundary is captured using a phenomenological torque that reorients the particles towards the centre of the disc; the form of the torque was chosen so as to reproduce the bouncing trajectories observed in the experiments. Additional information ====================== **How to cite this article:** Bricard, A. *et al.* Emergent vortices in populations of colloidal rollers. *Nat. Commun.* 6:7470 doi: 10.1038/ncomms8470 (2015). Supplementary Material {#S1} ====================== ###### Supplementary Information Supplementary Figures 1-3, Supplementary Notes 1-2 and Supplementary References ###### Supplementary Movie 1 Epifluorescence movie of a dilute ensemble of colloidal rollers exploring a circular chamber. Rc=1 mm. Packing fraction: 0.3%. Movie recorded at 100 fps, played at 25 fps. ###### Supplementary Movie 2 Emergence of a macroscopic vortex pattern. Packing fraction: 3.6%. Rc=1 mm. Epifluorescence movie recorded at 100 fps, played at 11 fps. At t=3 s, the electric field is turned on and the rollers start propelling. ###### Supplementary Movie 3 Numerical simulation of a population of rollers showing the formation of an axisymmetric vortex. Packing fraction: 10%, range of repulsive forces: b=5a. ###### Supplementary Movie 4 Numerical simulation of a population of rollers showing the formation of a finite-sized swarm. Packing fraction: 4.5%, range of repulsive forces: b=2a. We benefited from valuable discussions with Hugues Chaté, Nicolas Desreumaux, Olivier Dauchot, Cristina Marchetti, Julien Tailleur and John Toner. This work was partly funded by the ANR program MiTra, and Institut Universitaire de France. D.S. acknowledges partial support from the Donors of the American Chemical Society Petroleum Research Fund and from NSF CAREER Grant No. CBET-1151590. K.S. was supported by the JSPS Core-to-Core Program 'Non-equilibrium dynamics of soft matter and information'. **Author contributions** A.B. and V.C. 
carried out the experiments and processed the data. D.D., C.S., O.C., F.P. and D.S. carried out the numerical simulations. J.-B.C., K.S. and D.B. established the analytical model. All the authors discussed and interpreted results. D.B., J.-B.C. and D.S. wrote the manuscript. D.B. conceived the project. A.B. and J.-B.C. have equally contributed to this work. ![Experimental setup.\ (**a**) Sketch of the setup. Five-micrometre PMMA colloids roll in a microchannel made of two ITO-coated glass slides assembled with double-sided scotch tape. An electrokinetic flow confines the rollers at the centre of the device in a circular chamber of radius *R*~c~. (**b**) Superimposed fluorescence pictures of a dilute ensemble of rollers (*E*~0~/*E*~*Q*~=1.1, *φ*~0~=6 × 10^−3^). The colloids propel only inside a circular disc of radius *R*~c~=1 mm and follow persistent random walks.](ncomms8470-f1){#f1} ![Dynamics of an isolated colloidal roller.\ (**a**) Local packing fraction *φ*(*r*), averaged over the azimuthal angle *φ*, plotted as a function of the radial distance. The dashed line indicates the radius of the circular chamber. (**b**) Probability distribution function of the roller velocities measured from the individual tracking of the trajectories. (**c**) Autocorrelation of the roller velocity 〈**v**~*i*~(*t*)·**v**~*i*~(*t*+*T*)〉 plotted as a function of *v*~0~*T* for packing fractions ranging from *φ*~0~=6 × 10^−3^ to *φ*~0~=10^−2^. Full line: best exponential fit. (**d**) Superimposed trajectories of colloidal rollers bouncing off the edge of the confining circle. Time interval: 5.3 ms (*E*~0~/*E*~*Q*~=1.1, *φ*~0~=6 × 10^−3^). Same parameters for the four panels: *R*~c~=1.4 mm, *E*~0~/*E*~*Q*~=1.1, *φ*~0~=6 × 10^−3^.](ncomms8470-f2){#f2} ![Collective-dynamics experiments.\ (**a**) Snapshot of a vortex of rollers. The dark dots show the position of one half of the ensemble of rollers. 
The blue vectors represent their instantaneous speed (*R*~c~=1.35 mm, *φ*~0~=5 × 10^−2^). (**b**) Average polarization plotted versus the average packing fraction for different confinement radii. Open symbols: experiments. Full line: best fit from the theory. Filled circles: numerical simulations (*b*=3*a*, *R*~c~=1 mm). (**c**) Time-averaged polarization field (*R*~c~=1.35 mm, *φ*~0~=5 × 10^−2^). (**d**) Time average of the local packing fraction (*R*~c~=1.35 mm, *φ*~0~=5 × 10^−2^). (**e**) Time-averaged packing fraction at the centre of the disc, normalized by and plotted versus the average packing fraction. Error bars: one standard deviation. (**f**) Fraction of the disc where versus the average packing fraction. Open symbols: experiments. Full line: theoretical prediction with no free fitting parameter. Filled circles: numerical simulations (*b*=3*a*, *R*~c~=1 mm). (**g**) Radial density profiles plotted as a function of the distance to the disc centre *r*. All the experiments correspond to *φ*~0~=0.032±0.002, error bars: 1*σ*. (**h**) Open symbols: same data as in **g**. The radial density profiles are rescaled by and plotted versus the rescaled distance to the centre *r*/*R*~c~. All the profiles are seen to collapse on a single master curve. Filled symbols: Numerical simulations. Solid line: theoretical prediction. All the data correspond to *E*~0~/*E*~*Q*~=1.1.](ncomms8470-f3){#f3} ![Collective-dynamics simulations.\ (**a**) The numerical phase diagram of the confined population is composed of three regions: isotropic gas (low *φ*~0~, small *b*), swarm coexisting with a gaseous phase (intermediate *φ*~0~ and *b*) and vortex state (high *φ*~0~ and *b*). *R*~c~=0.5 mm. (**b**) Snapshot of a vortex state. Numerical simulation for *φ*~0~=0.1 and *b*=5*a*. (**c**) Snapshot of a swarm. Numerical simulation for *φ*~0~=4.5 × 10^−2^ and *b*=2*a*. (**d**) Variation of the density correlation length as a function of *R*~c~. 
Above *R*~c~=1 mm, ξ plateaus and a vortex is reached (*φ*~0~=3 × 10^−2^, *b*=3*a*). (**e**) Four numerical snapshots of rollers interacting via: alignment interactions only (*A*), alignment interactions and repulsive torques (*A*+*B*, where the magnitude of *B* is five times the experimental value), alignment and excluded volume interactions (*A*+*b*, where the repulsion distance is *b*=5*a*), alignment and the *C*-term in [equation 3](#eq16){ref-type="disp-formula"} (*A*+*C*). Polarized vortices emerge solely when repulsive couplings exist (*A*+*B* and *A*+*b*).](ncomms8470-f4){#f4} [^1]: These authors equally contributed to this work
require([
    'gitbook'
], function (gitbook) {
    gitbook.events.bind('page.change', function () {
        mermaid.init();
    });
});
TODO: Implement depth-major-sources packing paths for NEON

Platforms: ARM NEON
Coding time: M
Experimentation time: M
Skill required: M
Prerequisite reading: doc/kernels.txt, doc/packing.txt
Model to follow/adapt: internal/pack_neon.h

At the moment we have NEON optimized packing paths for WidthMajor sources. We also need paths for DepthMajor sources. This is harder because for DepthMajor sources, the size of each slice that we have to load is the kernel's width, which is typically 12 (for the LHS) or 4 (for the RHS). That's not very friendly to NEON vector-load instructions, which would allow us to load 8 or 16 entries, but not 4 or 12. So you will have to load 4 entries at a time only. For that, the vld1q_lane_u32 seems to be as good as you'll get.

The other possible approach would be to load (with plain scalar C++) four uint32's into a temporary local buffer, and use vld1q_u8 on that. Some experimentation will be useful here. For that, you can generate assembly with -save-temps and make assembly easier to inspect by inserting inline assembly comments such as asm volatile("#hello");
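As a concrete illustration of the second approach, here is a hypothetical sketch in plain C of packing one 4×4 DepthMajor cell (the RHS case, where the kernel width is 4): four scalar 4-byte loads are staged into a local buffer, which is then consumed by a single 16-byte vector load. The function name, the 4×4 cell shape and the stride handling are illustrative only, not gemmlowp's actual packing interface, and a portable memcpy stands in for the NEON copy when NEON is unavailable.

```c
#include <stdint.h>
#include <string.h>
#if defined(__ARM_NEON)
#include <arm_neon.h>
#endif

/* Hypothetical sketch: stage four 4-byte DepthMajor slices into a local
 * uint32 buffer with plain scalar code, then issue one 16-byte vector
 * load/store. Rows of the cell are `stride` bytes apart in the source. */
static void pack_depth_major_4x4(const uint8_t* src, int stride,
                                 uint8_t* dst) {
  uint32_t buf[4];
  for (int i = 0; i < 4; i++) {
    memcpy(&buf[i], src + i * stride, sizeof(uint32_t));
  }
#if defined(__ARM_NEON)
  vst1q_u8(dst, vld1q_u8((const uint8_t*)buf)); /* one 16-byte NEON copy */
#else
  memcpy(dst, buf, sizeof(buf)); /* portable stand-in for the NEON copy */
#endif
}
```

The competing option, vld1q_lane_u32 straight into vector lanes, avoids the staging buffer but chains four dependent lane loads; measuring both (with -save-temps to inspect the generated assembly) is exactly the experimentation this TODO calls for.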
package de.peeeq.wurstscript.utils; import de.peeeq.wurstscript.WLogger; /** Logs the wall-clock execution time of a code block. Intended for try-with-resources: the timer starts on construction and the elapsed time is logged on close. */ public class ExecutiontimeMeasure implements AutoCloseable { private final String message; private final long startTime; public ExecutiontimeMeasure(String message) { this.message = message; this.startTime = System.currentTimeMillis(); } @Override public void close() { long time = System.currentTimeMillis() - startTime; WLogger.info("Executed " + message + " in " + time + "ms."); } }
Monday, February 27, 2017 With just a fortnight left till CNY, the OUG morning market was in a frenzy. You know things are getting serious when RELA is deployed to control traffic. Roads once accessible to traffic were closed and the extra space occupied by vendors selling fireworks, waxed meat, biscuits, sea cucumber, dried mushrooms (a lot of 'em!), etc. While walking past Restoran New Sun Ho, I saw another example of the globalized Malaysian breakfast-- roti canai with salmon sashimi. One day I might start seeing roti sashimi on the menu. Lunch was a simple meal of chicken porridge with a can of fried dace. Very traditional fare. CNY was around the corner, so we spruced up the house a bit. Bought a can of white paint and repainted the metal grills in the living room. Was a little woozy after inhaling the paint fumes. Felt better after we had dinner at Dao Dao. A plate of tofu with mixed vegetables and Marmite chicken really hit the spot. Wednesday, February 22, 2017 Lego was very much a part of my childhood. Back then, I saved my pocket money, angpao money, and Cantonese Association award money to buy Lego. Back then, the cheapest set cost MYR5+. I think the most expensive model I bought was around MYR80+. Just the normal Lego, for age 5-12. Back then, the only other ranges were Duplo and Technic. So many varieties have popped up after the Lego resurgence. Today we have: Star Wars, Creator, Minifigures, City, Friends, Nexo Knights, Ninjago, DC Comics Superheroes, The Batman Movie, Marvel Superheroes, Elves, Disney, Juniors, Architecture, Mindstorms, Scooby Doo, Minecraft, Speed, Angry Birds, Ideas, and Superhero Girls. Pening kan? (Dizzying, right?) However, all those varieties don't interest me. Technic seems way cooler to me with all those gears, shafts, and moving parts. A sort of kiddy mechanical engineering. More worth it to pay for design and complex parts, rather than paying for copyright costs. Back then, they had pneumatic models, these days, electric motors. 
Goes without saying that I didn't buy any Technic models back then due to the cost factor. Twenty-four years later, I got to assemble my first Technic model-- a Christmas gift from KH. Muacks. He bought me a Lego Technic Drag Racer (42050), which is kind of cool because it has a huge motor block with moving pistons, front wheel steering, a big wheelie bar, a car body that can be raised, and a hidden jack that can be used to pose the drag racer in wheelie position.

Tuesday, February 21, 2017

Closer and closer to CNY, the market was getting redder and redder. The pace was picking up. Definitely more people than usual judging from the crowds and the lack of parking. Mum stopped at a fishmonger's truck and I was shocked by a large grouper that was on the weighing scale. It was not the size of it that caught my eye, but the fact that it looked like it was dripping in cum! Guess bukkake is not uncommon among fish. Haha. On a serious note, the slime is a layer of mucus that helps fish with gas transport, provides protection, and reduces turbulence in the water. Ventured to Taman Bukit Desa for Sarawak Laksa at Charlie's Cafe because we were bored of the food options at OUG. Taman Bukit Desa was very peaceful compared to the chaos at the market. Always quiet in that neck of the woods. On the way home, we turned into Pearl Shopping Gallery, an interesting new addition to Pearl Point. Located across the road from Jalan Sepadu, it's connected to the old wing via a bridge. Yes, it's always about bridges with shopping malls these days. Inside, there's a small Village Grocer and multiple food establishments (for some reason they have three Korean restaurants). Notable outlets are Paradise Dynasty, Kyochon, Go Noodle House, Kin Kin Pan Mee, and Powerplant. There's also Cremeo, which serves premium soft serve ice cream. Will give an update as I try the food outlets there!

Friday, February 17, 2017

After a whole month of babysitting, mum and I finally had the chance to go to the market.
The first visit of 2017. With all the year-end holidays out of the way, it finally dawned on people that the Lunar Chinese New Year was less than a month away! Start the panic buying! The earliest merchants to start the ball rolling were those who sold religious paraphernalia. They had already started stocking all sorts of special paper offerings and fancy joss sticks, and even the stuff needed to Pai Ti Kong was on sale. Breakfast was our favourite curry noodles in the alley. In the afternoon, KH came over to my place to chill (kind of like a chaperoned type of paktor, i.e. dating). We would just laze on the sofa and chat. Sometimes, we would play our current addiction-- LINE Rangers. So cute, so pointless, and yet... Haha. Received a call from SK and we were out to The Coffee Sessions. More updates from SK regarding her family issues over a cup of mocha and a side of fries. People never fail to surprise. Just when you think they've hit rock bottom, there seems to be more to go! SK stayed on for dinner while KH returned home. Just a simple meal at nearby Restoran 83. A very satisfying dinner of pork noodles, grilled stingray, fried rice, and chicken soup. Remember when it was such a craze to have Portuguese-style grilled stingray at the Midvalley Megamall Food Court? Ages ago...

Wednesday, February 15, 2017

For mum and me, church is the best place to usher in the new year. Bilingual mass started at 10:30 PM and ended fifteen minutes shy of midnight, giving parishioners enough time to find a good location to watch the fireworks display outside (some places actually started the fireworks five minutes before midnight). And a snack box was provided too. By the time we got home, it was past 1:00 AM and we had mass to attend on the 1st of January too! Not that it was mandatory, just that I had traffic warden duties to perform. A thankless job that nobody wants to do. You have to come to church before the crowd, and leave only after everyone does.
And it's not pleasant when the weather is hot. And drivers are a cranky bunch. Basically, traffic wardens are disconnected from what happens in the church. Been a while since we ate at Nihon-Kai, so we gave it a try for lunch on New Year's Day. Got there at 1:30 PM and it was still crowded. Had to sit at the bar counter. Interesting to experience the Japanese way of celebrating the new year. Right at the cashier counter, they had a plastic kagami mochi, which translates to mirror rice cake. Basically a snowman made of rice cake with a bitter orange on top. Not too sure about the traditional symbolism though. Since they had a special set for the day, we ordered it. Perfect for sharing, with a little bit of everything-- tempura moriawase, grilled Saba, tamago, maki, inari, arrowroot chips (a Chinese touch), and ozoni, a special mochi soup that is prepared during the new year. Also added on some nigiri sushi to complete the meal. In the afternoon, I played Mr. Plumber. Tried to fix mum's leaky toilet. That attempt really taught me a thing or two about the 'anatomy' of a toilet. LOL. Managed to fix the leak, but I had to run out to the hardware store to get the right spare parts. Unfortunately, I discovered a more sinister problem with the plumbing. Something was definitely screwing up the pressure in my mum's bathroom. Probably some pesky leak. That's a job best left to the professionals. At night, we had a BEC gathering at KM1 West. Everyone had a fun time trying to get to the multi-purpose hall due to the security measures. Visitors with no access cards could not access the lift lobbies, and it was raining heavily outside. The problem with all these 6-tier security condominiums. A hell when you have a lot of guests coming over. At the venue, we were experiencing blackouts because the power was overloaded. Goodness. When we got all of that ironed out, we started the festivities. Stress levels were a little high because both parish priests came for the party.
Luckily for us, they didn't stay long. Plenty of food and games. The hostess looked very happy because her husband joined in for the first time ever. Perhaps he wanted to cheer up his wife, who was diagnosed with breast cancer earlier this year. A simple gesture, but one that brings deep joy.

Monday, February 13, 2017

On the last day of the Christmas long weekend, I finally managed to spend some time with KH. Normally, Gratitude would cook Christmas dinner, but fears of water disruption and the lack of a maid to do the clean-up made him change his mind. Instead, he invited us for a dim sum brunch at Mayflower Restaurant, Le Garden Hotel (not to be confused with KK Mayflower nearby). It was a nice reunion with KH, Gratitude's mum, my mum, Apollo and QueerRanter-D. The dim sum was pretty good, with larger portions. They had all the standard varieties, but KH was disappointed that they didn't have custard buns. I liked their siew mai and egg tarts. For coffee, we moved to Brew and Bread. Now, they operate upstairs with the kitchen located in a nearby lot. Compared to last time, this arrangement gives them a whole lot more space to play with. With windows along the whole length of the back, the whole place is spilling with natural light. However, their furniture arrangement is a bit unconventional. Big round tables are quite out of place in a cafe. Why would people want to sit around one, especially when there's a huge centerpiece in the middle? Then there's the super long table at the back. Reminded me of the school canteen. In terms of coffee, they now have two blends to choose from-- Driver and Sugar Daddy. Can't imagine why they chose those names. If you're into cold brew, they have something called a Bombshell that actually tastes like it's spiked with alcohol. Their croissant is pretty good too, buttery and flaky. Before leaving Kota Kemuning, we stopped a while at AEON Big, but that turned out to be a big disappointment. Not well-stocked at all.
Walked into a Nagoya as well, a place which brought back memories of my childhood. When I was a kid, mum regularly brought me to fabric shops. As a seamstress, she sometimes had to source for fabrics. While she was choosing, I would be exploring the 'forest' of cloth rolls, running my hands over the cold, silky material. Sometimes, I would scrutinize the price tags attached to the cloth rolls with safety pins. They were usually stamped in blue ink, with a small sample of the cloth taped to the tag. Later in the evening, KH and I had a dinner date with QueerRanter (the real deal!), Jin, Apollo, and QueerRanter-D. KH had no luck choosing the dinner venue in Publika. Plan A - Episode. Not open. Plan B - Silver Spoon. Not open. Plan C - Two Sons Bistro. A random choice based on what was open. Halal. Two Sons is a place famous for its mussels and clams. Sixteen varieties to choose from. We ordered a full size (900 grams) of lemon garlic butter mussels to share. Delicious, with two portions of garlic bread to lap up the gravy. KH and I shared a Supreme Stuffed Chicken. Stuffed with what, you may ask? Jalapeno peppers, sun-dried tomatoes, and cream cheese. Spicy and creamy at the same time. Not a very good feeling. Dessert and coffee was eventually at Plan B. We reminisced about the days of drama in the heyday of the BFF. So much nonsense from nonsense people. "My birthday party is better than yours" nonsense. "I'm manipulating your feelings" nonsense. "It's all about me" nonsense. "I'm a hypocrite who talks shit" nonsense. The list goes on. But without them, it would have been a bland existence with nothing to reminisce about. Haha.

Friday, February 10, 2017

I did not wake up bright and early on Christmas morning. Just as I hit the send button on WhatsApp asking my sister whether the kids had woken up, I heard a commotion downstairs. No mistaking it-- it was The Tribe. My sister had moved all the presents to my place. And it was a lot of presents.
Combined with presents from SK and me, it was truly a formidable heap! The excitement and anticipation from the kids was palpable. However, they had to wait a little while longer. All of us donned our customized Christmas T-shirts and took a group photo first. Before we began, I brought out a laundry basket to contain all the discarded gift wrapping. The kids went through their presents like locusts. They got toys, shoes, clothes, bags, and water tumblers. Guess they were on Santa's Good list, else they would have received white envelopes filled with cash from Sump'n Claus. Never actually played with Lego Technic before, so it was nice of KH to get me a set. Once we cleared all the debris, it was time to go out for breakfast. BIL wanted to go to a cafe that was Christmas-y. Unfortunately, most of the cafes in Sri Petaling only opened at 11:00 AM. In the end, our stomachs decided that we eat at Poppo Kanteen, a cafe famed for its nasi lemak. If you're not into that, they have noodles and bread too. Plenty of variety and value for money. A surprising MYR8.90 for a big plate with a whole fried chicken leg. Their sambal is pretty good, but the rice didn't quite make the cut. Right on their doorstep is another nasi lemak stall. That stall has been around for years and the rice is really good. Very savoury and creamy. Much better than Poppo Kanteen's, but their sambal falls short. What we did was combine the best of both worlds. Haha. The staff didn't mind. Another thing I liked about Poppo Kanteen was the cham. Great taste and hot! Don't ever give me that lukewarm shit. A drink that makes you go "Ahhhhhh...." after every sip. Big Monster had a very good appetite. He walloped a plate of nasi lemak, roti Planta, and potato wedges! In the afternoon, we made a visit to Sunway Velocity Mall, one of KL's newest retail spots. Boy, what a mistake that was. The roads were choked and the mall was overflowing with people. Been a long time since I saw such good business at a mall.
The escalators were packed and groups of people were seen just standing around. Crazy. Every ten minutes, there would be a public paging for lost kids and disoriented adults. But this took the cake: "Dear shoppers, please be advised that there is a traffic jam on Jalan Cheras and Jalan Peel. We advise you to continue shopping. Thank you." The Tribe watched "Moana" there at 4:00 PM, but mum and I had some time to kill before our movie. We did some shopping and had a late lunch / early dinner at Canton Kitchen. Not recommended at all. Lethargic service and lousy portions. They charge prices similar to Foong Lye for their set meals, but what you get is a far cry. Lukewarm food with just a whole lot of Chinese cabbage in different forms. The only thing I liked was the Pumpkin Springroll. With an hour to go before our movie at Leisure Mall, I fired up the Uber app. Got a ride in a brand-new HRV. The driver remarked that traffic was fine before the opening of the mall. Oh well. The ride to Leisure Mall took less than fifteen minutes. Believe it or not, it was my first time at this iconic Cheras mall. Looks pretty good for a neighbourhood mall. Pretty impressive Christmas decorations. Gave me some Singapore heartland mall vibes. We watched "Show Me Your Love", a local production with a cast fortified by Hong Kong actors. I expected more tears from the movie, but it fell a bit short. Overall a nice movie with funny and touching moments. The cinema hall was also surprisingly comfortable. Perhaps they had changed the seats in recent years. Got out of the cinema just in time for The Tribe to pick us up. Truly a fun Christmas spent with loved ones.

Thursday, February 09, 2017

Unexpectedly, my sister turned up at our doorstep on the morning of Christmas Eve. Naturally, the kids were thrilled to see her. We had a day out in Bukit Bintang ahead of us. First stop was Pavilion Kuala Lumpur to look at the Christmas decorations. Like past years, there was a huge tree out front.
The only difference was that they had a train that traveled around the tree. In my opinion, it's quite a nuisance to have that when so many people mill at the entrance. Swarovski returned as the main sponsor for the center atrium's Christmas decorations, but I wasn't very impressed by it. This time round, the center atrium is dominated by a castle and a crystal carousel. In the overall theme, it's not very attention-grabbing. Mum shopped while I kept the kids in check. The typical running around, and crawling in and out of clothes racks. Difficult to shop in peace with kids around. All that running around helped the kids build up an appetite. By noon, they were asking for food. At first, we thought of eating at D'Empire, but after looking at how limited the menu was, we walked out. Instead, we ate at Pigs and Wolf, located on the Dining Loft. Much better options there. The Christmas set looked like a steal, so we ordered that. MYR60 bought us a soup, main, dessert, and ice lemon tea. They served a cream of potato soup that came with a side of bacon. Both kids gave a squeal of joy when they saw the bacon. For the main, we chose the Prawn and Salmon Pesto, which came with a generous portion of smoked salmon and juicy prawns. Dessert was a slice of moist chocolate roll. In addition to that, we got a plate of Carbonara with pork sausages for Little Monster. Big Monster practically polished off a Mighty Piggy Burger all by himself. He loved the thick, juicy patty made from US pork. Since he ordered a burger, his uncle could get a pint of Asahi for MYR10. Their neighbours, Starz Kitchen and Rocku Yakiniku, both had a Santa Claus out front to attract customers and spread some Christmas cheer. But not all Santa Clauses are created equal. Starz Kitchen obviously had the bigger budget. They actually got a fat gweilo to play the part. The wig, beard, and costume were also much better. The guy from Rocku Yakiniku was a Cina-fied version in an ill-fitting SuperSave costume.
Last stop for the day was Isetan the Japan Store. Had to be very careful in there. Porcelain art with a price tag of MYR14,000 located at child level? OK, shoooh, we are going to the other floors. Didn't stay long really. The kids were tired and Big Monster even started sneezing from all the air-conditioning. Mum and I attended Christmas Eve mass at church that night. There was the usual caroling before mass. During the procession, Baby Jesus was held aloft by the priest and subsequently placed into the manger, and incensed. Everyone was all dressed up and parishioners exchanged greetings to mark the happy occasion of Jesus' birth.

Monday, February 06, 2017

In order to make it to English mass at SIC, we had to get up at 7:00 AM. After getting ready, one still needs to fuss over the sleepy kids. At church, we bundled them into the soundproof family room and hoped that they would not be too rowdy. Big Monster asked me a legit question:

Big Monster: Is Jesus dead?
Moi: Yes, but he resurrected after three days.
Big Monster: Oh, someone threw him a Potion of Regeneration.

In case you were wondering, he was talking about something from Minecraft! Obsessed with that game, they are. We ate breakfast at Mian then headed home. Mum needed to prepare for her event in the afternoon. She boiled the tang yuan and packed loads of pots and ladles. There was even a portable stove in the heap of stuff. Sent her out to Happy Garden where she would carpool to Brickfields with her friends. It's daunting to handle the two monsters alone, so I enlisted the help of KH. Picked him up and went for a late lunch at Secret Loc Cafe, Kuchai Lama. Although it was already 3:00 PM, none of us was starving. Chose kid-friendly items from the menu-- American breakfast, French toast, and pizza. The little one was shouting his head off in the cafe. Buat macam rumah sendiri (making himself right at home). When the little ones saw me feeding KH, they exclaimed: "Uncle KH is not a kid!" Went home soon after.
Switched on the Google Chromecast for the kids then we stole off to my room for some snogging. In the heat of the moment, there would suddenly be a knock on the door: "Kau fu! I wanna watch a different YouTube clip!" Talk about coitus interruptus. In the end we just gave up and went downstairs to watch "Assassination Classroom: The Graduation" with the kids. The weird story line and even weirder main character, Korosensei resonated with my nephews. SK joined us for dinner at Restoran Mirasa, a mamak joint near my place. Big Monster finished a Maggi goreng all by himself while the little one only concentrated on the cup of Milo. The idea of waffles was tantalizing to the kids, so we had dessert at The New Chapter. Loitered there as long as we could, but the night was still young, and mum's event was far from over. Decided to send KH home first. On our way back home, received a call from mum. Aha! Duty would be over soon. :P.
//------------------------------------------------------------------------------
// <auto-generated>
// This code was generated by AsyncGenerator.
//
// Changes to this file may cause incorrect behavior and will be lost if
// the code is regenerated.
// </auto-generated>
//------------------------------------------------------------------------------

using System.Collections.Generic;
using NUnit.Framework;
using NHibernate.Criterion;

namespace NHibernate.Test.NHSpecificTest.NH2546
{
	using System.Threading.Tasks;

	[TestFixture]
	public class SetCommandParameterSizesFalseFixtureAsync : BugTestCase
	{
		protected override bool AppliesTo(Dialect.Dialect dialect)
		{
			return dialect is Dialect.MsSql2008Dialect;
		}

		protected override void OnSetUp()
		{
			using (ISession session = Sfi.OpenSession())
			{
				session.Persist(new Student() { StringTypeWithLengthDefined = "Julian Maughan" });
				session.Persist(new Student() { StringTypeWithLengthDefined = "Bill Clinton" });
				session.Flush();
			}
		}

		protected override void OnTearDown()
		{
			using (ISession session = Sfi.OpenSession())
			{
				session.CreateQuery("delete from Student").ExecuteUpdate();
				session.Flush();
			}
			base.OnTearDown();
		}

		[Test]
		public async Task LikeExpressionWithinDefinedTypeSizeAsync()
		{
			using (ISession session = Sfi.OpenSession())
			{
				ICriteria criteria = session
					.CreateCriteria<Student>()
					.Add(Restrictions.Like("StringTypeWithLengthDefined", "Julian%"));

				IList<Student> list = await (criteria.ListAsync<Student>());

				Assert.That(list.Count, Is.EqualTo(1));
			}
		}

		[Test]
		public async Task LikeExpressionExceedsDefinedTypeSizeAsync()
		{
			// In this case we are forcing the usage of LikeExpression class where the length of the associated property is ignored
			using (ISession session = Sfi.OpenSession())
			{
				ICriteria criteria = session
					.CreateCriteria<Student>()
					.Add(Restrictions.Like("StringTypeWithLengthDefined", "[a-z][a-z][a-z]ian%", MatchMode.Exact, null));

				IList<Student> list = await (criteria.ListAsync<Student>());

				Assert.That(list.Count, Is.EqualTo(1));
			}
		}
	}
}
**A subdiffusive behaviour of recurrent random walk** **in random environment on a regular tree** by Yueyun Hu $\;$and$\;$ Zhan Shi *Université Paris XIII & Université Paris VI* This version: March 11, 2006 [***Summary.***]{} We are interested in the random walk in random environment on an infinite tree. Lyons and Pemantle [@lyons-pemantle] give a precise recurrence/transience criterion. Our paper focuses on the almost sure asymptotic behaviours of a recurrent random walk $(X_n)$ in random environment on a regular tree, which is closely related to Mandelbrot [@mandelbrot]’s multiplicative cascade. We prove, under some general assumptions upon the distribution of the environment, the existence of a new exponent $\nu\in (0, {1\over 2}]$ such that $\max_{0\le i \le n} |X_i|$ behaves asymptotically like $n^{\nu}$. The value of $\nu$ is explicitly formulated in terms of the distribution of the environment. [***Keywords.***]{} Random walk, random environment, tree, Mandelbrot’s multiplicative cascade. [***2000 Mathematics Subject Classification.***]{} 60K37, 60G50. Introduction {#s:intro} ============ Random walk in random environment (RWRE) is a fundamental object in the study of random phenomena in random media. RWRE on $\z$ exhibits rich regimes in the transient case (Kesten, Kozlov and Spitzer [@kesten-kozlov-spitzer]), as well as a slow logarithmic movement in the recurrent case (Sinai [@sinai]). On $\z^d$ (for $d\ge 2$), the study of RWRE remains a big challenge to mathematicians (Sznitman [@sznitman], Zeitouni [@zeitouni]). The present paper focuses on RWRE on a regular rooted tree, which can be viewed as an infinite-dimensional RWRE.
Our main result reveals a rich regime à la Kesten–Kozlov–Spitzer, but this time even in the recurrent case; it also strongly suggests the existence of a slow logarithmic regime à la Sinai. Let $\T$ be a $\deg$-ary tree ($\deg\ge 2$) rooted at $e$. For any vertex $x\in \T \backslash \{ e\}$, let ${\buildrel \leftarrow \over x}$ denote the first vertex on the shortest path from $x$ to the root $e$, and $|x|$ the number of edges on this path (notation: $|e|:= 0$). Thus, each vertex $x\in \T \backslash \{ e\}$ has one parent ${\buildrel \leftarrow \over x}$ and $\deg$ children, whereas the root $e$ has $\deg$ children but no parent. We also write ${\buildrel \Leftarrow \over x}$ for the parent of ${\buildrel \leftarrow \over x}$ (for $x\in \T$ such that $|x|\ge 2$). Let $\omega:= (\omega(x,y), \, x,y\in \T)$ be a family of non-negative random variables such that $\sum_{y\in \T} \omega(x,y)=1$ for any $x\in \T$. Given a realization of $\omega$, we define a Markov chain $X:= (X_n, \, n\ge 0)$ on $\T$ by $X_0 =e$, and whose transition probabilities are $$P_\omega(X_{n+1}= y \, | \, X_n =x) = \omega(x, y) .$$ Let $\P$ denote the distribution of $\omega$, and let $\p (\cdot) := \int P_\omega (\cdot) \P(\! \d \omega)$. The process $X$ is a $\T$-valued RWRE. (By informally taking $\deg=1$, $X$ would become a usual RWRE on the half-line $\z_+$.) For general properties of tree-valued processes, we refer to Peres [@peres] and Lyons and Peres [@lyons-peres]. See also Duquesne and Le Gall [@duquesne-le-gall] and Le Gall [@le-gall] for continuous random trees. For a list of motivations to study RWRE on a tree, see Pemantle and Peres [@pemantle-peres1], p. 106. We define $$A(x) := {\omega({\buildrel \leftarrow \over x}, x) \over \omega({\buildrel \leftarrow \over x}, {\buildrel \Leftarrow \over x})} , \qquad x\in \T, \; |x|\ge 2. 
\label{A}$$ Following Lyons and Pemantle [@lyons-pemantle], we assume throughout the paper that $(\omega(x,\bullet))_{x\in \T\backslash \{ e\} }$ is a family of i.i.d. [*non-degenerate*]{} random vectors and that $(A(x), \; x\in \T, \; |x|\ge 2)$ are identically distributed. We also assume the existence of $\varepsilon_0>0$ such that $\omega(x,y) \ge \varepsilon_0$ if either $x= {\buildrel \leftarrow \over y}$ or $y= {\buildrel \leftarrow \over x}$, and $\omega(x,y) =0$ otherwise; in words, $(X_n)$ is a nearest-neighbour walk, satisfying an ellipticity condition. Let $A$ denote a generic random variable having the common distribution of $A(x)$ (for $|x| \ge 2$). Define $$p := \inf_{t\in [0,1]} \E (A^t) . \label{p}$$ We recall a recurrence/transience criterion from Lyons and Pemantle ([@lyons-pemantle], Theorem 1 and Proposition 2). [**Theorem A (Lyons and Pemantle [@lyons-pemantle])**]{} [*With $\p$-probability one, the walk $(X_n)$ is recurrent or transient, according to whether $p\le {1\over \deg}$ or $p>{1\over \deg}$. It is, moreover, positive recurrent if $p<{1\over \deg}$.*]{} We study the recurrent case $p\le {1\over \deg}$ in this paper. Our first result, which is not deep, concerns the positive recurrent case $p< {1\over \deg}$. \[t:posrec\] If $p<{1\over \deg}$, then $$\lim_{n\to \infty} \, {1\over \log n} \, \max_{0\le i\le n} |X_i| = {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $\p$-a.s.}, \label{posrec}$$ where the constant $q$ is defined in $(\ref{q})$, and lies in $(0, {1\over \deg})$ when $p<{1\over \deg}$. Despite the warning of Pemantle [@pemantle] (“there are many papers proving results on trees as a somewhat unmotivated alternative …to Euclidean space"), it seems to be of particular interest to study the more delicate situation $p={1\over \deg}$ that turns out to possess rich regimes. 
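To see what this critical case can look like, here is a small worked example; the two-point law of $A$ below is our own illustrative choice (it does not come from the paper), and it anticipates the exponents $\kappa$ and $\nu$ defined in (\[kappa\]) and (\[theta\]) below.

```latex
% Hypothetical two-point environment (our own choice, not from the paper):
% \deg = 2, with P(A = 2) = 1/10 and P(A = 1/3) = 9/10.
\[
\E(A) \;=\; \tfrac{1}{10}\cdot 2 + \tfrac{9}{10}\cdot\tfrac{1}{3}
       \;=\; \tfrac{1}{2} \;=\; \tfrac{1}{\deg},
\qquad
\E(A\log A) \;=\; \tfrac{1}{5}\log 2 - \tfrac{3}{10}\log 3 \;<\; 0,
\]
so $\psi'(1) = \E(A\log A)/\E(A) < 0$ and, by convexity of $t\mapsto \E(A^t)$,
the infimum over $t\in[0,1]$ is attained at $t=1$: here $p = \E(A) = \tfrac{1}{\deg}$,
the null recurrent case. Moreover,
\[
\E(A^2) \;=\; \tfrac{1}{10}\cdot 4 + \tfrac{9}{10}\cdot\tfrac{1}{9} \;=\; \tfrac{1}{2},
\]
so the second root of $\E(A^t) = \tfrac{1}{\deg}$ is $t=2$, giving $\kappa = 2$ and
$\nu = 1 - \tfrac{1}{\min\{\kappa,2\}} = \tfrac{1}{2}$, exactly the boundary case
between the two regimes of (\[theta\]).
```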
We prove that, similarly to the Kesten–Kozlov–Spitzer theorem for [*transient*]{} RWRE on the line, $(X_n)$ enjoys, even in the recurrent case, an interesting subdiffusive behaviour. To state our main result, we define $$\begin{aligned} \kappa &:=& \inf\left\{ t>1: \; \E(A^t) = {1\over \deg} \right\} \in (1, \infty], \qquad (\inf \emptyset=\infty) \label{kappa} \\ \psi(t) &:=& \log \E \left( A^t \right) , \qquad t\ge 0. \label{psi}\end{aligned}$$ We use the notation $a_n \approx b_n$ to denote $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. \[t:nullrec\] If $p={1\over \deg}$ and if $\psi'(1)<0$, then $$\max_{0\le i\le n} |X_i| \; \approx\; n^\nu, \qquad \hbox{\rm $\p$-a.s.}, \label{nullrec}$$ where $\nu=\nu(\kappa)$ is defined by $$\nu := 1- {1\over \min\{ \kappa, 2\} } = \left\{ \begin{array}{ll} (\kappa-1)/\kappa, & \mbox{if $\;\kappa \in (1,2]$}, \\ \\ 1/2 & \mbox{if $\;\kappa \in (2, \infty].$} \end{array} \right. \label{theta}$$ [**Remark.**]{} (i) It is known (Menshikov and Petritis [@menshikov-petritis]) that if $p={1\over \deg}$ and $\psi'(1)<0$, then for $\P$-almost all environment $\omega$, $(X_n)$ is null recurrent. \(ii) For the value of $\kappa$, see Figure 1. Under the assumptions $p={1\over \deg}$ and $\psi'(1)<0$, the value of $\kappa$ lies in $(2, \infty]$ if and only if $\E (A^2) < {1\over \deg}$; and $\kappa=\infty$ if moreover $\hbox{ess sup}(A) \le 1$. \(iii) Since the walk is recurrent, $\max_{0\le i\le n} |X_i|$ cannot be replaced by $|X_n|$ in (\[posrec\]) and (\[nullrec\]). \(iv) Theorem \[t:nullrec\], which could be considered as a (weaker) analogue of the Kesten–Kozlov–Spitzer theorem, shows that tree-valued RWRE has even richer regimes than RWRE on $\z$. In fact, recurrent RWRE on $\z$ is of order of magnitude $(\log n)^2$, and has no $n^a$ (for $0<a<1$) regime. \(v) The case $\psi'(1)\ge 0$ leads to a phenomenon similar to Sinai’s slow movement, and is studied in a forthcoming paper. The rest of the paper is organized as follows. 
Section \[s:posrec\] is devoted to the proof of Theorem \[t:posrec\]. In Section \[s:proba\], we collect some elementary inequalities, which will be of frequent use later on. Theorem \[t:nullrec\] is proved in Section \[s:nullrec\], by means of a result (Proposition \[p:beta-gamma\]) concerning the solution of a recurrence equation which is closely related to Mandelbrot’s multiplicative cascade. We prove Proposition \[p:beta-gamma\] in Section \[s:beta-gamma\]. Throughout the paper, $c$ (possibly with a subscript) denotes a finite and positive constant; we write $c(\omega)$ instead of $c$ when the value of $c$ depends on the environment $\omega$. Proof of Theorem \[t:posrec\] {#s:posrec} ============================= We first introduce the constant $q$ in the statement of Theorem \[t:posrec\], which is defined without the assumption $p< {1\over \deg}$. Let $$\varrho(r) := \inf_{t\ge 0} \left\{ r^{-t} \, \E(A^t) \right\} , \qquad r>0.$$ Let $\underline{r} >0$ be such that $$\log \underline{r} = \E(\log A) .$$ We mention that $\varrho(r)=1$ for $r\in (0, \underline{r}]$, and that $\varrho(\cdot)$ is continuous and (strictly) decreasing on $[\underline{r}, \, \Theta)$ (where $\Theta:= \hbox{ess sup}(A) < \infty$), and $\varrho(\Theta) = \P (A= \Theta)$. Moreover, $\varrho(r)=0$ for $r> \Theta$. See Chernoff [@chernoff]. We define $$\overline{r} := \inf\left\{ r>0: \; \varrho(r) \le {1\over \deg} \right\}.$$ Clearly, $\underline{r} < \overline{r}$. We define $$q:= \sup_{r\in [\underline{r}, \, \overline{r}]} r \varrho(r). \label{q}$$ The following elementary lemma tells us that, instead of $p$, we can also use $q$ in the recurrence/transience criterion of Lyons and Pemantle. \[l:pq\] We have $q>{1\over \deg}$ $($resp., $q={1\over \deg}$, $q<{1\over \deg})$ if and only if $p>{1\over \deg}$ $($resp., $p={1\over \deg}$, $p<{1\over \deg})$. [*Proof of Lemma \[l:pq\].*]{} By Lyons and Pemantle ([@lyons-pemantle], p. 129), $p= \sup_{r\in (0, \, 1]} r \varrho (r)$. 
Since $\varrho(r) =1$ for $r\in (0, \, \underline{r}]$, there exists $\min\{\underline{r}, 1\}\le r^* \le 1$ such that $p= r^* \varrho (r^*)$. \(i) Assume $p<{1\over \deg}$. Then $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p < {1\over \deg}$, which, by definition of $\overline{r}$, implies $\overline{r} < 1$. Therefore, $q \le p <{1\over \deg}$. \(ii) Assume $p\ge {1\over \deg}$. We have $\varrho (r^*) \ge p \ge {1\over \deg}$, which yields $r^* \le \overline{r}$. If $\underline{r} \le 1$, then $r^*\ge \underline{r}$, and thus $p=r^* \varrho (r^*) \le q$. If $\underline{r} > 1$, then $p=1$, and thus $q\ge \underline{r}\, \varrho (\underline{r}) = \underline{r} > 1=p$. We have therefore proved that $p\ge {1\over \deg}$ implies $q\ge p$. If moreover $p>{1\over \deg}$, then $q \ge p>{1\over \deg}$. \(iii) Assume $p={1\over \deg}$. We already know from (ii) that $q \ge p$. On the other hand, $\varrho (1) \le \sup_{r\in (0, \, 1]} r \varrho (r) = p = {1\over \deg}$, implying $\overline{r} \le 1$. Thus $q \le p$. As a consequence, $q=p={1\over \deg}$.$\Box$ Having defined $q$, the next step in the proof of Theorem \[t:posrec\] is to compute invariant measures $\pi$ for $(X_n)$. We first introduce some notation on the tree. For any $m\ge 0$, let $$\T_m := \left\{x \in \T: \; |x| = m \right\} .$$ For any $x\in \T$, let $\{ x_i \}_{1\le i\le \deg}$ be the set of children of $x$. If $\pi$ is an invariant measure, then $$\pi (x) = {\omega ({\buildrel \leftarrow \over x}, x) \over \omega (x, {\buildrel \leftarrow \over x})} \, \pi({\buildrel \leftarrow \over x}), \qquad \forall \, x\in \T \backslash \{ e\}.$$ By induction, this leads to (recalling $A$ from (\[A\])): for $x\in \T_m$ ($m\ge 1$), $$\pi (x) = {\pi(e)\over \omega (x, {\buildrel \leftarrow \over x})} {\omega (e, x^{(1)}) \over A(x^{(1)})} \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) ,$$ where $]\! ] e, x]\! 
]$ denotes the shortest path $x^{(1)}$, $x^{(2)}$, $\cdots$, $x^{(m)} =: x$ from the root $e$ (but excluded) to the vertex $x$. The identity holds for [*any*]{} choice of $(A(e_i), \, 1\le i\le \deg)$. We choose $(A(e_i), \, 1\le i\le \deg)$ to be a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. By the ellipticity condition on the environment, we can take $\pi(e)$ to be sufficiently small so that for some $c_0\in (0, 1]$, $$c_0\, \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) \le \pi (x) \le \exp\left( \, \sum_{z\in ]\! ] e, x]\! ]} \log A(z) \right) . \label{pi}$$ By Chebyshev’s inequality, for any $r>\underline{r}$, $$\max_{x\in \T_n} \P \left\{ \pi (x) \ge r^n\right\} \le \varrho(r)^n. \label{chernoff}$$ Since $\# \T_n = \deg^n$, this gives $\E (\#\{ x\in \T_n: \; \pi (x)\ge r^n \} ) \le \deg^n \varrho(r)^n$. By Chebyshev’s inequality and the Borel–Cantelli lemma, for any $r>\underline{r}$ and $\P$-almost surely for all large $n$, $$\#\left\{ x\in \T_n: \; \pi (x) \ge r^n \right\} \le n^2 \deg^n \varrho(r)^n. \label{Jn-ub1}$$ On the other hand, by (\[chernoff\]), $$\P \left\{ \exists x\in \T_n: \pi (x) \ge r^n\right\} \le \deg^n \varrho (r)^n.$$ For $r> \overline{r}$, the expression on the right-hand side is summable in $n$. By the Borel–Cantelli lemma, for any $r>\overline{r}$ and $\P$-almost surely for all large $n$, $$\max_{x\in \T_n} \pi (x) < r^n. \label{Jn-ub}$$ [*Proof of Theorem \[t:posrec\]: upper bound.*]{} Fix $\varepsilon>0$ such that $q+ 3\varepsilon < {1\over \deg}$. We follow the strategy given in Liggett ([@liggett], p. 
103) by introducing a positive recurrent birth-and-death chain $(\widetilde{X_j}, \, j\ge 0)$, starting from $0$, with transition probability from $i$ to $i+1$ (for $i\ge 1$) equal to $${1\over \widetilde{\pi} (i)} \, \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x})) ,$$ where $\widetilde{\pi} (i) := \sum_{x\in \T_i} \pi(x)$. We note that $\widetilde{\pi}$ is a finite invariant measure for $(\widetilde{X_j})$. Let $$\tau_n := \inf \left\{ i\ge 1: \, X_i \in \T_n\right\}, \qquad n\ge 0.$$ By Liggett ([@liggett], Theorem II.6.10), for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le \widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0),$$ where $\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0)$ is the probability that $(\widetilde{X_j})$ hits $n$ before returning to $0$. According to Hoel et al. ([@hoel-port-stone], p. 32, Formula (61)), $$\widetilde{P}_\omega (\widetilde{\tau}_n< \widetilde{\tau}_0) = c_1(\omega) \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x) (1- \omega(x, {\buildrel \leftarrow \over x}))} \right)^{\! \! -1} ,$$ where $c_1(\omega)\in (0, \infty)$ depends on $\omega$. We arrive at the following estimate: for any $n\ge 1$, $$P_\omega (\tau_n< \tau_0) \le c_1(\omega) \, \left( \, \sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \right)^{\! \! -1} . \label{liggett}$$ We now estimate $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)}$. For any fixed $0=r_0< \underline{r} < r_1 < \cdots < r_\ell = \overline{r} <r_{\ell +1}$, $$\sum_{x\in \T_i} \pi(x) \le \sum_{j=1}^{\ell+1} (r_j)^i \# \left\{ x\in \T_i: \pi(x) \ge (r_{j-1})^i \right\} + \sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x).$$ By (\[Jn-ub\]), $\sum_{x\in \T_i: \, \pi(x) \ge (r_{\ell +1})^i} \pi(x) =0$ $\P$-almost surely for all large $i$.
It follows from (\[Jn-ub1\]) that $\P$-almost surely, for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} (r_j)^i i^2 \, \deg^i \varrho (r_{j-1})^i.$$ Recall that $q= \sup_{r\in [\underline{r}, \, \overline{r}] } r \, \varrho(r) \ge \underline{r} \, \varrho (\underline{r}) = \underline{r}$. We choose $r_1:= \underline{r} + \varepsilon \le q+\varepsilon$. We also choose $\ell$ sufficiently large and $(r_j)$ sufficiently close to each other so that $r_j \, \varrho(r_{j-1}) < q+\varepsilon$ for all $2\le j\le \ell+1$. Thus, $\P$-almost surely for all large $i$, $$\sum_{x\in \T_i} \pi(x) \le (r_1)^i \deg^i + \sum_{j=2}^{\ell+1} i^2 \, \deg^i (q+\varepsilon)^i = (r_1)^i \deg^i + \ell \, i^2 \, \deg^i (q+\varepsilon)^i,$$ which implies (recall: $\deg(q+\varepsilon)<1$) that $\sum_{i=0}^{n-1} {1\over \sum_{x\in \T_i} \pi(x)} \ge {c_2\over n^2\, \deg^n (q+\varepsilon)^n}$. Plugging this into (\[liggett\]) yields that, $\P$-almost surely for all large $n$, $$P_\omega (\tau_n< \tau_0) \le c_3(\omega)\, n^2\, \deg^n (q+\varepsilon)^n \le [(q+2\varepsilon)\deg]^n.$$ In particular, by writing $L(\tau_n):= \# \{ 1\le i \le \tau_n: \, X_i = e\}$, we obtain: $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \ge \left\{ 1- [(q+2\varepsilon)\deg]^n \right\}^j ,$$ which, by the Borel–Cantelli lemma, yields that, $\P$-almost surely for all large $n$, $$L(\tau_n) \ge {1\over [(q+3\varepsilon) \deg]^n} , \qquad \hbox{\rm $P_\omega$-a.s.}$$ Since $\{ L(\tau_n) \ge j \} \subset \{ \max_{0\le k \le 2j} |X_k| < n\}$, and since $\varepsilon$ can be as close to 0 as possible, we obtain the upper bound in Theorem \[t:posrec\].$\Box$ [*Proof of Theorem \[t:posrec\]: lower bound.*]{} Assume $p< {1\over \deg}$. Recall that in this case, we have $\overline{r}<1$. Let $\varepsilon>0$ be small. 
Let $r \in (\underline{r}, \, \overline{r})$ be such that $\varrho(r) > {1\over \deg} \ee^\varepsilon$ and that $r\varrho(r) \ge q\ee^{-\varepsilon}$. Let $L$ be a large integer with $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and satisfying (\[GW\]) below. We start by constructing a Galton–Watson tree $\G$, which is a certain subtree of $\T$. The first generation of $\G$, denoted by $\G_1$ and defined below, consists of vertices $x\in \T_L$ satisfying a certain property. The second generation of $\G$ is formed by applying the same procedure to each element of $\G_1$, and so on. To be precise, $$\G_1 = \G_1 (L,r) := \left\{ x\in \T_L: \, \min_{z\in ]\! ] e, \, x ]\! ]} \prod_{y\in ]\! ] e, \, z]\! ]} A(y) \ge r^L \right\} ,$$ where $]\! ]e, \, x ]\! ]$ denotes as before the set of vertices (excluding $e$) lying on the shortest path relating $e$ and $x$. More generally, if $\G_n$ denotes the $n$-th generation of $\G$, then $$\G_{n+1} := \bigcup_{u\in \G_n } \left\{ x\in \T_{(n+1)L}: \, \min_{z\in ]\! ] u, \, x ]\! ]} \prod_{y\in ]\! ] u, \, z]\! ]} A(y) \ge r^L \right\} , \qquad n=1,2, \dots$$ We claim that it is possible to choose $L$ sufficiently large such that $$\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L . \label{GW}$$ Note that $\ee^{-\varepsilon L} \deg^L \varrho(r)^L>1$, since $\varrho(r) > {1\over \deg} \ee^\varepsilon$. We admit (\[GW\]) for the moment, which implies that $\G$ is super-critical. By the theory of branching processes (Harris [@harris], p. 13), when $n$ goes to infinity, ${\# \G_{n/L} \over [\E(\# \G_1)]^{n/L} }$ converges almost surely (and in $L^2$) to a limit $W$ with $\P(W>0)>0$. Therefore, on the event $\{ W>0\}$, for all large $n$, $$\# (\G_{n/L}) \ge c_4(\omega) [\E(\# \G_1)]^{n/L}. \label{GnL}$$ (For notational simplification, we only write our argument for the case when $n$ is a multiple of $L$. It is clear that our final conclusion holds for all large $n$.)
Recall that according to the Dirichlet principle (Griffeath and Liggett [@griffeath-liggett]), $$\begin{aligned} 2\pi(e) P_\omega \left\{ \tau_n < \tau_0 \right\} &=&\inf_{h: \, h(e)=1, \, h(z)=0, \, \forall |z| \ge n} \sum_{x,y\in \T} \pi(x) \omega(x,y) (h(x)- h(y))^2 \nonumber \\ &\ge& c_5\, \inf_{h: \, h(e)=1, \, h(z)=0, \, \forall z\in \T_n} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2, \label{durrett}\end{aligned}$$ the last inequality following from the ellipticity condition on the environment. Clearly, $$\begin{aligned} \sum_{|x|<n} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 &=&\sum_{i=0}^{(n/L)-1} \sum_{x: \, iL \le |x| < (i+1) L} \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2 \\ &:=&\sum_{i=0}^{(n/L)-1} I_i,\end{aligned}$$ with obvious notation. For any $i$, $$I_i \ge \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} \pi(x) (h(x)- h(y))^2,$$ where $v^\uparrow \in \G_i$ denotes the unique element of $\G_i$ lying on the path $[ \! [ e, v ]\! ]$ (in words, $v^\uparrow$ is the parent of $v$ in the Galton–Watson tree $\G$), and the factor $\deg^{-L}$ comes from the fact that each term $\pi(x) (h(x)- h(y))^2$ is counted at most $\deg^L$ times in the sum on the right-hand side. By (\[pi\]), for $x\in [\! [ v^\uparrow, v[\! [$, $\pi(x) \ge c_0 \, \prod_{u\in ]\! ]e, x]\! ]} A(u)$, which, by the definition of $\G$, is at least $c_0 \, r^{(i+1)L}$. Therefore, $$\begin{aligned} I_i &\ge& c_0 \, \deg^{-L} \sum_{v\in \G_{i+1}} \, \sum_{x\in [\! [ v^\uparrow, v[\! [} \, \sum_{y: \, x= {\buildrel \leftarrow \over y}} r^{(i+1)L} (h(x)- h(y))^2 \\ &\ge&c_0 \, \deg^{-L} r^{(i+1)L} \sum_{v\in \G_{i+1}} \, \sum_{y\in ]\! ] v^\uparrow, v]\! ]} (h({\buildrel \leftarrow \over y})- h(y))^2 .\end{aligned}$$ By the Cauchy–Schwarz inequality, $\sum_{y\in ]\! ] v^\uparrow, v]\!
]} (h({\buildrel \leftarrow \over y})- h(y))^2 \ge {1\over L} (h(v^\uparrow)-h(v))^2$. Accordingly, $$I_i \ge c_0 \, {\deg^{-L} r^{(i+1)L}\over L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)-h(v))^2 ,$$ which yields $$\begin{aligned} \sum_{i=0}^{(n/L)-1} I_i &\ge& c_0 \, {\deg^{-L}\over L} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} \sum_{v\in \G_{i+1}} (h(v^\uparrow)- h(v))^2 \\ &\ge& c_0 \, {\deg^{-L}\over L} \deg^{-n/L} \sum_{v\in \G_{n/L}} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 ,\end{aligned}$$ where, $e=: v^{(0)}$, $v^{(1)}$, $v^{(2)}$, $\cdots$, $v^{(n/L)} := v$, is the shortest path (in $\G$) from $e$ to $v$, and the factor $\deg^{-n/L}$ results from the fact that each term $r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2$ is counted at most $\deg^{n/L}$ times in the sum on the right-hand side. By the Cauchy–Schwarz inequality, for all $h: \T\to \r$ with $h(e)=1$ and $h(z)=0$ ($\forall z\in \T_n$), we have $$\begin{aligned} \sum_{i=0}^{(n/L)-1} r^{(i+1)L} (h(v^{(i)})- h(v^{(i+1)}))^2 &\ge&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \, \left( \sum_{i=0}^{(n/L)-1} (h(v^{(i)})- h(v^{(i+1)})) \right)^{\! \! 2} \\ &=&{1\over \sum_{i=0}^{(n/L)-1} r^{-(i+1)L}} \ge c_6 \, r^n.\end{aligned}$$ Therefore, $$\sum_{i=0}^{(n/L)-1} I_i \ge c_0c_6 \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \# (\G_{n/L}) \ge c_0 c_6 c_4(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} },$$ the last inequality following from (\[GnL\]). Plugging this into (\[durrett\]) yields that for all large $n$, $$P_\omega \left\{ \tau_n < \tau_0 \right\} \ge c_7(\omega) \, r^n \, {\deg^{-L}\over L} \deg^{-n/L} \, [\E (\# \G_1)]^{n/L}\, {\bf 1}_{ \{ W>0 \} } .$$ Recall from (\[GW\]) that $\E(\# \G_1) \ge \ee^{-\varepsilon L} \deg^L \varrho(r)^L$. 
Therefore, on $\{W>0\}$, for all large $n$, $P_\omega \{ \tau_n < \tau_0 \} \ge c_8(\omega) (\ee^{-\varepsilon} \deg^{-1/L} \deg r \varrho(r))^n$, which is no smaller than $c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n$ (since $\deg^{-1/L} \ge \ee^{-\varepsilon}$ and $r \varrho(r) \ge q \ee^{-\varepsilon}$ by assumption). Thus, by writing $L(\tau_n) := \#\{ 1\le i\le \tau_n: \; X_i = e \}$ as before, we have, on $\{ W>0 \}$, $$P_\omega \left\{ L(\tau_n) \ge j \right\} = \left[ P_\omega (\tau_n> \tau_0) \right]^j \le [1- c_8(\omega) (\ee^{-3\varepsilon} q \deg)^n ]^j.$$ By the Borel–Cantelli lemma, for $\P$-almost all $\omega$, on $\{W>0\}$, we have, $P_\omega$-almost surely for all large $n$, $L(\tau_n) \le 1/(\ee^{-4\varepsilon} q \deg)^n$, i.e., $$\max_{0\le k\le \tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor )} |X_k| \ge n ,$$ where $0<\tau_0(1)<\tau_0(2)<\cdots$ are the successive return times to the root $e$ by the walk (thus $\tau_0(1) = \tau_0$). Since the walk is positive recurrent, $\tau_0(\lfloor 1/(\ee^{-4\varepsilon} q \deg)^n\rfloor ) \sim {1\over (\ee^{-4\varepsilon} q \deg)^n} E_\omega [\tau_0]$ (for $n\to \infty$), $P_\omega$-almost surely ($a_n \sim b_n$ meaning $\lim_{n\to \infty} {a_n \over b_n} =1$). Therefore, for $\P$-almost all $\omega \in \{ W>0\}$, $$\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n} \ge {1\over \log[1/(q\deg)]}, \qquad \hbox{\rm $P_\omega$-a.s.}$$ Recall that $\P\{ W>0\}>0$. Since modifying a finite number of transition probabilities does not change the value of $\liminf_{n\to \infty} {\max_{0\le k\le n} |X_k| \over \log n}$, we obtain the lower bound in Theorem \[t:posrec\]. It remains to prove (\[GW\]). Let $(A^{(i)})_{i\ge 1}$ be an i.i.d. sequence of random variables distributed as $A$.
Clearly, for any $\delta\in (0,1)$, $$\begin{aligned} \E( \# \G_1) &=& \deg^L \, \P\left( \, \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) \\ &\ge& \deg^L \, \P \left( \, (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\right) .\end{aligned}$$ We define a new probability $\Q$ by $${\mathrm{d} \Q \over \mathrm{d}\P} := {\ee^{t \log A} \over \E(\ee^{t \log A})} = {A^t \over \E(A^t)},$$ for some $t\ge 0$. Then $$\begin{aligned} \E(\# \G_1) &\ge& \deg^L \, \E_\Q \left[ \, {[\E(A^t)]^L \over \exp\{ t \sum_{i=1}^L \log A^{(i)}\} }\, {\bf 1}_{\{ (1-\delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} } \right] \\ &\ge& \deg^L \, {[\E(A^t)]^L \over r^{t (1- \delta) L} } \, \Q \left( (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L \right).\end{aligned}$$ To choose an optimal value of $t$, we fix $\widetilde{r}\in (r, \, \overline{r})$ with $\widetilde{r} < r^{1-\delta}$. Our choice of $t=t^*$ is such that $\varrho(\widetilde{r}) = \inf_{t\ge 0} \{ \widetilde{r}^{-t} \E(A^t)\} = \widetilde{r}^{-t^*} \E(A^{t^*})$. With this choice, we have $\E_\Q(\log A)=\log \widetilde{r}$, so that $\Q \{ (1- \delta) L \log r \ge \sum_{i=1}^\ell \log A^{(i)} \ge L \log r , \, \forall 1\le \ell \le L\} \ge c_9$. Consequently, $$\E(\# \G_1) \ge c_9 \, \deg^L \, {[\E(A^{t^*})]^L \over r^{t^* (1- \delta) L} }= c_9 \, \deg^L \, {[ \widetilde{r}^{\,t^*} \varrho(\widetilde{r})]^L \over r^{t^* (1- \delta) L} } \ge c_9 \, r^{\delta t^* L} \deg^L \varrho(\widetilde{r})^L .$$ Since $\delta>0$ can be as close to $0$ as possible, the continuity of $\varrho(\cdot)$ on $[\underline{r}, \, \overline{r})$ yields (\[GW\]), and thus completes the proof of Theorem \[t:posrec\].$\Box$ Some elementary inequalities {#s:proba} ============================ We collect some elementary inequalities in this section. 
They will be of use in the next sections, in the study of the null recurrence case. \[l:exp\] Let $\xi\ge 0$ be a random variable. [(i)]{} Assume that $\e(\xi^a)<\infty$ for some $a>1$. Then for any $x\ge 0$, $${\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a} \le {\e (\xi^a) \over [\e \xi]^a} . \label{RSD}$$ [(ii)]{} If $\e (\xi) < \infty$, then for any $0 \le \lambda \le 1$ and $t \ge 0$, $$\e \left\{ \exp \left( - t\, { (\lambda+\xi)/ (1+\xi) \over \e [(\lambda+\xi)/ (1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t\, { \xi \over \e (\xi)} \right) \right\} . \label{exp}$$ [**Remark.**]{} When $a=2$, (\[RSD\]) is a special case of Lemma 6.4 of Pemantle and Peres [@pemantle-peres2]. [*Proof of Lemma \[l:exp\].*]{} We actually prove a very general result, stated as follows. Let $\varphi : (0, \infty) \to \r$ be a convex ${\cal C}^1$-function. Let $x_0 \in \r$ and let $I$ be an open interval containing $x_0$. Assume that $\xi$ takes values in a Borel set $J \subset \r$ (for the moment, we do not assume $\xi\ge 0$). Let $h: I \times J \to (0, \infty)$ and ${\partial h\over \partial x}: I \times J \to \r$ be measurable functions such that - $\e \{ h(x_0, \xi)\} <\infty$ and $\e \{ |\varphi ({ h(x_0,\xi) \over \e h(x_0, \xi)} )| \} < \infty$; - $\e[\sup_{x\in I} \{ | {\partial h\over \partial x} (x, \xi)| + |\varphi' ({h(x, \xi) \over \e h(x, \xi)} ) | \, ({| {\partial h\over \partial x} (x, \xi) | \over \e \{ h(x, \xi)\} } + {h(x, \xi) \over [\e \{ h(x, \xi)\}]^2 } | \e \{ {\partial h\over \partial x} (x, \xi) \} | )\} ] < \infty$; - both $y \to h(x_0, y)$ and $y \to { \partial \over \partial x} \log h(x,y)|_{x=x_0}$ are monotone on $J$. Then $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x, \xi)}\right) \right\} \Big|_{x=x_0} \ge 0, \qquad \hbox{\rm or}\qquad \le 0, \label{monotonie}$$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity. 
To prove (\[monotonie\]), we observe that by the integrability assumptions, $$\begin{aligned} & &{\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \\ &=&{1 \over ( \e h(x_0, \xi))^2}\, \e \left( \varphi'( h(x_0, \xi) ) \left[ {\partial h \over \partial x} (x_0, \xi) \e h(x_0, \xi) - h(x_0, \xi) \e {\partial h \over \partial x} (x_0, \xi) \right] \right) .\end{aligned}$$ Let $\widetilde \xi$ be an independent copy of $\xi$. The expectation expression $\e(\varphi'( h(x_0, \xi) ) [\cdots])$ on the right-hand side is $$\begin{aligned} &=& \e \left( \varphi'( h(x_0, \xi) ) \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( \left[ \varphi'( h(x_0, \xi) ) - \varphi'( h(x_0, \widetilde\xi) )\right] \left[ {\partial h \over \partial x} (x_0, \xi) h(x_0, \widetilde\xi) - h(x_0, \xi) {\partial h \over \partial x} (x_0, \widetilde\xi) \right] \right) \\ &=& {1 \over 2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) ,\end{aligned}$$ where $$\eta := \left[ \varphi'( h(x_0, \xi) ) - \varphi'( h(x_0, \widetilde\xi) ) \right] \, \left[ {\partial \log h \over \partial x} (x_0, \xi) - {\partial \log h \over \partial x} (x_0, \widetilde\xi) \right] .$$ Therefore, $${\d \over \d x} \e \left\{ \varphi\left({ h(x,\xi) \over \e h(x,\xi)}\right) \right\} \Big|_{x=x_0} \; = \; {1 \over 2( \e h(x_0, \xi))^2}\, \e \left( h(x_0, \xi) h(x_0, \widetilde \xi) \, \eta \right) .$$ Since $\eta \ge 0$ or $\le 0$ depending on whether $h(x_0, \cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity, this yields (\[monotonie\]). 
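[**Remark.**]{} To illustrate how (\[monotonie\]) is applied, here is the monotonicity check for the function $h$ used below in the proof of (\[RSD\]); the checks for the two functions $h$ used for (\[exp\]) are analogous, with opposite monotonicities. This verification is recorded for the reader's convenience only.

```latex
% Check of the two monotonicity conditions for h(x,y) = y/(x+y),
% with x_0 > 0 and y ranging over J = [0, \infty):
h(x_0,y) = {y \over x_0+y} \quad \hbox{is non-decreasing in } y,
\qquad
{\partial \over \partial x} \log h(x,y)\Big|_{x=x_0} = - \, {1\over x_0+y}
\quad \hbox{is non-decreasing in } y.
```

Since $h(x_0,\cdot)$ and ${\partial \over \partial x} \log h(x_0,\cdot)$ have the same monotonicity, (\[monotonie\]) yields that $x\mapsto \e \{ \varphi ( h(x,\xi)/\e h(x,\xi) ) \}$ is non-decreasing.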
To prove (\[RSD\]) in Lemma \[l:exp\], we take $x_0\in (0,\, \infty)$, $J= \r_+$, $I$ a finite open interval containing $x_0$ and bounded away from $0$, $\varphi(z)= z^a$, and $h(x,y)= { y \over x+ y}$, to see that the function $x\mapsto {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}$ is non-decreasing on $(0, \infty)$. By dominated convergence, $$\lim_{x \to\infty} {\e[({\xi\over x+\xi})^a] \over [\e ( {\xi\over x+\xi})]^a}= \lim_{x \to\infty} {\e[({\xi\over 1+\xi/x})^a] \over [\e ( {\xi\over 1+\xi/x})]^a} = {\e (\xi^a) \over [\e \xi]^a} ,$$ yielding (\[RSD\]). The proof of (\[exp\]) is similar. Indeed, applying (\[monotonie\]) to the functions $\varphi(z)= \ee^{-t z}$ and $ h(x, y) = {x + y \over 1+ y}$ with $x\in (0,1)$, we get that the function $x \mapsto \e \{ \exp ( - t { (x+\xi)/(1+\xi) \over \e [(x+\xi)/(1+\xi)]} )\}$ is non-increasing on $(0,1)$; hence for $\lambda \in [0,\, 1]$, $$\e \left\{ \exp \left( - t { (\lambda+\xi)/(1+\xi) \over \e [(\lambda+\xi)/(1+\xi)] } \right) \right\} \le \e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\}.$$ On the other hand, we take $\varphi(z)= \ee^{-t z}$ and $h(x,y) = {y \over 1+ xy}$ (for $x\in (0, 1)$) in (\[monotonie\]) to see that $x \mapsto \e \{ \exp ( - t { \xi /(1+x \xi) \over \e [\xi /(1+x\xi)] } ) \}$ is non-increasing on $(0,1)$. Therefore, $$\e \left\{ \exp \left( - t { \xi /(1+\xi) \over \e [\xi/(1+\xi)] } \right) \right\} \le \e \left\{ \exp\left( - t \, { \xi \over \e (\xi)}\right) \right\} ,$$ which implies (\[exp\]).$\Box$ \[l:moment\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent non-negative random variables such that for some $a\in [1,\, 2]$, $\e(\xi_i^a)<\infty$ $(1\le i\le k)$. Then $$\e \left[ (\xi_1 + \cdots + \xi_k)^a \right] \le \sum_{i=1}^k \e(\xi_i^a) + (k-1) \left( \sum_{i=1}^k \e \xi_i \right)^a.$$ [*Proof.*]{} By induction on $k$, we only need to prove the lemma in the case $k=2$.
Let $$h(t) := \e \left[ (\xi_1 + t\xi_2)^a \right] - \e(\xi_1^a) - t^a \e(\xi_2^a) - (\e \xi_1 + t \e \xi_2)^a, \qquad t\in [0,1].$$ Clearly, $h(0) = - (\e \xi_1)^a \le 0$. Moreover, $$h'(t) = a \e \left[ (\xi_1 + t\xi_2)^{a-1} \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1 + t \e \xi_2)^{a-1} \e(\xi_2) .$$ Since $(x+y)^{a-1} \le x^{a-1} + y^{a-1}$ (for $1\le a\le 2$), we have $$\begin{aligned} h'(t) &\le& a \e \left[ (\xi_1^{a-1} + t^{a-1}\xi_2^{a -1}) \xi_2 \right] - a t^{a-1} \e(\xi_2^a) - a(\e \xi_1)^{a-1} \e(\xi_2) \\ &=& a \e (\xi_1^{a-1}) \e(\xi_2) - a(\e \xi_1)^{a -1} \e(\xi_2) \le 0,\end{aligned}$$ by Jensen’s inequality (for $1\le a\le 2$). Therefore, $h \le 0$ on $[0,1]$. In particular, $h(1) \le 0$, which implies Lemma \[l:moment\].$\Box$ The following inequality, borrowed from page 82 of Petrov [@petrov], will be of frequent use. \[f:petrov\] Let $\xi_1$, $\cdots$, $\xi_k$ be independent random variables. We assume that for any $i$, $\e(\xi_i)=0$ and $\e(|\xi_i|^a) <\infty$, where $1\le a\le 2$. Then $$\e \left( \, \left| \sum_{i=1}^k \xi_i \right| ^a \, \right) \le 2 \sum_{i=1}^k \e( |\xi_i|^a).$$ \[l:abc\] Fix $a >1$. Let $(u_j)_{j\ge 1}$ be a sequence of positive numbers, and let $(\lambda_j)_{j\ge 1}$ be a sequence of non-negative numbers. [(i)]{} If there exists some constant $c_{10}>0$ such that for all $n\ge 2$, $$u_{j+1} \le \lambda_n + u_j - c_{10}\, u_j^{a}, \qquad \forall 1\le j \le n-1,$$ then we can find a constant $c_{11}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$, such that $$u_n \le c_{11} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)}), \qquad \forall n\ge 1.$$ [(ii)]{} Fix $K>0$. Assume that $\lim_{j\to\infty} u_j=0$ and that $\lambda_n \in [0, \, {K\over n}]$ for all $n\ge 1$. 
If there exist $c_{12}>0$ and $c_{13}>0$ such that for all $n\ge 2$, $$u_{j+1} \ge \lambda_n + (1- c_{12} \lambda_n) u_j - c_{13} \, u_j^a , \qquad \forall 1 \le j \le n-1,$$ then for some $c_{14}>0$ independent of $n$ and $(\lambda_j)_{j\ge 1}$ $(c_{14}$ may depend on $K)$, $$u_n \ge c_{14} \, ( \lambda_n^{1/a} + n^{- 1/(a-1)} ), \qquad \forall n\ge 1.$$ [*Proof.*]{} (i) Put $\ell = \ell(n) := \min\{n, \, \lambda_n^{- (a-1)/a} \}$. There are two possible situations. First situation: there exists some $j_0 \in [n- \ell, n-1]$ such that $u_{j_0} \le ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$. Since $u_{j+1} \le \lambda_n + u_j$ for all $j\in [j_0, n-1]$, we have $$u_n \le (n-j_0 ) \lambda_n + u_{j_0} \le \ell \lambda_n + ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a} \le (1+ ({2 \over c_{10}})^{1/a})\, \lambda_n^{1/a},$$ which implies the desired upper bound. Second situation: $u_j > ({2 \over c_{10}})^{1/a}\, \lambda_n^{1/a}$, $\forall \, j \in [n- \ell, n-1]$. Then $c_{10}\, u_j^{a} > 2\lambda_n$, which yields $$u_{j+1} \le u_j - {c_{10} \over 2} u_j^a, \qquad \forall \, j \in [n- \ell, n-1].$$ Since $a>1$ and $(1-y)^{1-a} \ge 1+ (a-1) y$ (for $0< y< 1$), this yields, for $j \in [n- \ell, n-1]$, $$u_{j+1}^{1-a} \ge u_j^{1-a} \, \left( 1 - {c_{10} \over 2} u_j^{a-1} \right)^{ 1-a} \ge u_j^{ 1-a} \, \left( 1 + {c_{10} \over 2} (a-1)\, u_j^{a-1} \right) = u_j^{1-a} + {c_{10} \over 2} (a-1) .$$ Therefore, $u_n^{1-a} \ge c_{15}\, \ell$ with $c_{15}:= {c_{10} \over 2} (a-1)$. As a consequence, $u_n \le (c_{15}\, \ell)^{- 1/(a-1)} \le (c_{15})^{- 1/(a-1)} \, ( n^{- 1/(a-1)} + \lambda_n^{1/a} )$, as desired. \(ii) Let us first prove: $$\label{c7} u_n \ge c_{16}\, n^{- 1/(a-1)}.$$ To this end, let $n$ be large and define $v_j := u_j \, (1- c_{12} \lambda_n)^{ -j} $ for $1 \le j \le n$. 
Since $u_{j+1} \ge (1- c_{12} \lambda_n) u_j - c_{13} u_j^a $ and $\lambda_n \le K/n$, we get $$v_{j+1} \ge v_j - c_{13} (1- c_{12} \lambda_n)^{(a-1)j-1}\, v_j^a\ge v_j - c_{17} \, v_j^a, \qquad \forall\, 1\le j \le n-1.$$ Since $u_j \to 0$, there exists some $j_0>0$ such that for all $n>j \ge j_0$, we have $c_{17} \, v_j^{a-1} < 1/2$, and $$v_{j+1}^{1-a} \le v_j^{1-a}\, \left( 1- c_{17} \, v_j^{a-1}\right)^{1-a} \le v_j^{1-a}\, \left( 1+ c_{18} \, v_j^{a-1}\right) = v_j^{1-a} + c_{18}.$$ It follows that $v_n^{1-a} \le c_{18}\, (n-j_0) + v_{j_0}^{1-a}$, which implies (\[c7\]). It remains to show that $u_n \ge c_{19} \, \lambda_n^{1/a}$. Consider a large $n$. The function $h(x):= \lambda_n + (1- c_{12} \lambda_n) x - c_{13} x^a$ is increasing on $[0, c_{20}]$ for some fixed constant $c_{20}>0$. Since $u_j \to 0$, there exists $j_0$ such that $u_j \le c_{20}$ for all $j \ge j_0$. We claim there exists $j \in [j_0, n-1]$ such that $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$: otherwise, we would have $c_{13}\, u_j^a \le {\lambda_n\over 2} \le \lambda_n$ for all $j \in [j_0, n-1]$, and thus $$u_{j+1} \ge (1- c_{12}\, \lambda_n) u_j \ge \cdots \ge (1- c_{12}\,\lambda_n)^{j-j_0} \, u_{j_0} ;$$ in particular, $u_n \ge (1- c_{12}\, \lambda_n)^{n-j_0} \, u_{j_0}$ which would contradict the assumption $u_n \to 0$ (since $\lambda_n \le K/n$). Therefore, $u_j > ({\lambda_n\over 2c_{13}})^{1/a}$ for some $j\ge j_0$. By monotonicity of $h(\cdot)$ on $[0, c_{20}]$, $$u_{j+1} \ge h(u_j) \ge h\left(({\lambda_n\over 2 c_{13}})^{1/a}\right) \ge ({\lambda_n\over 2 c_{13}})^{1/a},$$ the last inequality being elementary. This leads to: $u_{j+2} \ge h(u_{j+1}) \ge h(({\lambda_n\over 2 c_{13}})^{1/a} ) \ge ({\lambda_n\over 2 c_{13}})^{1/a}$. 
Iterating the procedure, we obtain: $u_n \ge ({\lambda_n\over 2 c_{13}})^{1/a}$ for all $n> j_0$, which completes the proof of the Lemma.$\Box$ Proof of Theorem \[t:nullrec\] {#s:nullrec} ============================== Let $n\ge 2$, and let as before $$\tau_n := \inf\left\{ i\ge 1: X_i \in \T_n \right\} .$$ We start with a characterization of the distribution of $\tau_n$ via its Laplace transform $\e ( \ee^{- \lambda \tau_n} )$, for $\lambda \ge 0$. To state the result, we define $\alpha_{n,\lambda}(\cdot)$, $\beta_{n,\lambda}(\cdot)$ and $\gamma_n(\cdot)$ by $\alpha_{n,\lambda}(x) = \beta_{n,\lambda} (x) = 1$ and $\gamma_n(x)=0$ (for $x\in \T_n$), and $$\begin{aligned} \alpha_{n,\lambda}(x) &=& \ee^{-\lambda} \, {\sum_{i=1}^\deg A(x_i) \alpha_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{alpha} \\ \beta_{n,\lambda}(x) &=& {(1-\ee^{-2\lambda}) + \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,\lambda} (x_i)}, \label{beta} \\ \gamma_n(x) &=& {[1/\omega(x, {\buildrel \leftarrow \over x} )] + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \over 1+ \sum_{i=1}^\deg A(x_i) \beta_n(x_i)} , \qquad 1\le |x| < n, \label{gamma}\end{aligned}$$ where $\beta_n(\cdot) := \beta_{n,0}(\cdot)$, and for any $x\in \T$, $\{x_i\}_{1\le i\le \deg}$ stands as before for the set of children of $x$. \[p:tau\] We have, for $n\ge 2$, $$\begin{aligned} E_\omega\left( \ee^{- \lambda \tau_n} \right) &=&\ee^{-\lambda} \, {\sum_{i=1}^\deg \omega (e, e_i) \alpha_{n,\lambda} (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)}, \qquad \forall \lambda \ge 0, \label{Laplace-tau} \\ E_\omega(\tau_n) &=& {1+ \sum_{i=1}^\deg \omega(e,e_i) \gamma_n (e_i) \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)}. \label{E(tau)} \end{aligned}$$ [*Proof of Proposition \[p:tau\].*]{} Identity (\[E(tau)\]) can be found in Rozikov [@rozikov]. The proof of (\[Laplace-tau\]) is along similar lines; so we feel free to give an outline only. 
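[**Remark.**]{} As a quick consistency check of (\[alpha\]), (\[beta\]) and (\[Laplace-tau\]) (not needed in the sequel), observe what happens at $\lambda = 0$:

```latex
% At lambda = 0, the recursions (alpha) and (beta) coincide,
% with the common boundary value 1 on T_n:
\alpha_{n,0}(x) = {\sum_{i=1}^\deg A(x_i) \alpha_{n,0} (x_i)
                   \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,0} (x_i)},
\qquad
\beta_{n,0}(x) = {\sum_{i=1}^\deg A(x_i) \beta_{n,0} (x_i)
                  \over 1+ \sum_{i=1}^\deg A(x_i) \beta_{n,0} (x_i)} .
```

By backward induction on $|x|$, $\alpha_{n,0} \equiv \beta_{n,0} = \beta_n$, so that the right-hand side of (\[Laplace-tau\]) equals $1$ at $\lambda=0$, as it must, since $\tau_n<\infty$, $P_\omega$-almost surely, for a recurrent walk.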
Let $g_{n, \lambda}(x) := E_\omega (\ee^{- \lambda \tau_n} \, | \, X_0=x)$. By the Markov property, $g_{n, \lambda}(x) = \ee^{-\lambda} \sum_{i=1}^\deg \omega(x, x_i)g_{n, \lambda}(x_i) + \ee^{-\lambda} \omega(x, {\buildrel \leftarrow \over x}) g_{n, \lambda}({\buildrel \leftarrow \over x})$, for $|x| < n$. By induction on $|x|$ (such that $1\le |x| \le n-1$), we obtain: $g_{n, \lambda}(x) = \ee^\lambda (1- \beta_{n, \lambda} (x)) g_{n, \lambda}({\buildrel \leftarrow \over x}) + \alpha_{n, \lambda} (x)$, from which (\[Laplace-tau\]) follows. Probabilistic interpretation: for $1\le |x| <n$, if $T_{\buildrel \leftarrow \over x} := \inf \{ k\ge 0: X_k= {\buildrel \leftarrow \over x} \}$, then $\alpha_{n, \lambda} (x) = E_\omega [ \ee^{-\lambda \tau_n} {\bf 1}_{ \{ \tau_n < T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, $\beta_{n, \lambda} (x) = 1- E_\omega [ \ee^{-\lambda (1+ T_{\buildrel \leftarrow \over x}) } {\bf 1}_{ \{ \tau_n > T_{\buildrel \leftarrow \over x} \} } \, | \, X_0=x]$, and $\gamma_n (x) = E_\omega [ (\tau_n \wedge T_{\buildrel \leftarrow \over x}) \, | \, X_0=x]$. We do not use these identities in the paper.$\Box$ It turns out that $\beta_{n,\lambda}(\cdot)$ is closely related to Mandelbrot’s multiplicative cascade [@mandelbrot]. Let $$M_n := \sum_{x\in \T_n} \prod_{y\in ] \! ] e, \, x] \! ] } A(y) , \qquad n\ge 1, \label{Mn}$$ where $] \! ] e, \,x] \! ]$ denotes as before the shortest path relating $e$ to $x$. We mention that $(A(e_i), \, 1\le i\le \deg)$ is a random vector independent of $(\omega(x,y), \, |x|\ge 1, \, y\in \T)$, and is distributed as $(A(x_i), \, 1\le i\le \deg)$, for any $x\in \T_m$ with $m\ge 1$. 
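[**Remark.**]{} The connection with Mandelbrot's cascade can be made explicit. Since the subtrees of $\T$ rooted at the children of $e$ are i.i.d. and independent of the vector $(A(e_i), \, 1\le i\le \deg)$, the sequence in (\[Mn\]) satisfies the distributional branching recursion of multiplicative cascades; we record it for orientation only:

```latex
% Branching recursion for (M_n) and the fixed-point equation of its limit:
% (A_i)_{1 <= i <= deg} is distributed as (A(x_i))_{1 <= i <= deg}, and the
% M_{n-1}^{(i)} (resp. M_infty^{(i)}) are i.i.d. copies of M_{n-1}
% (resp. M_infty), independent of (A_i).
M_n \, {\buildrel \hbox{\scriptsize law} \over =} \,
  \sum_{i=1}^\deg A_i \, M_{n-1}^{(i)} \qquad (n\ge 2),
\qquad
M_\infty \, {\buildrel \hbox{\scriptsize law} \over =} \,
  \sum_{i=1}^\deg A_i \, M_\infty^{(i)} ,
```

the second identity (obtained by letting $n\to\infty$ in the first) being the fixed-point equation on which the estimates of Liu, recalled below, rest.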
Let us recall some properties of $(M_n)$ from Theorem 2.2 of Liu [@liu00] and Theorem 2.5 of Liu [@liu01]: under the conditions $p={1\over \deg}$ and $\psi'(1)<0$, $(M_n)$ is a martingale, bounded in $L^a$ for any $a\in [1, \kappa)$; in particular, $$M_\infty := \lim_{n\to \infty} M_n \in (0, \infty), \label{cvg-M}$$ exists $\P$-almost surely and in $L^a(\P)$, and $$\E\left( \ee^{-s M_\infty} \right) \le \exp\left( - c_{21} \, s^{c_{22}}\right), \qquad \forall s\ge 1; \label{M-lowertail}$$ furthermore, if $1<\kappa< \infty$, then we also have $${c_{23}\over x^\kappa} \le \P\left( M_\infty > x\right) \le {c_{24}\over x^\kappa}, \qquad x\ge 1. \label{M-tail}$$ We now summarize the asymptotic properties of $\beta_{n,\lambda}(\cdot)$ which will be needed later on. \[p:beta-gamma\] Assume $p= {1\over \deg}$ and $\psi'(1)<0$. [(i)]{} For any $1\le i\le \deg$, $n\ge 2$, $t\ge 0$ and $\lambda \in [0, \, 1]$, we have $$\E \left\{ \exp \left[ -t \, {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{\E \left( \ee^{-t\, M_n/\Theta} \right) \right\}^{1/\deg} , \label{comp-Laplace}$$ where, as before, $\Theta:= \hbox{\rm ess sup}(A) < \infty$. [(ii)]{} If $\kappa\in (2, \infty]$, then for any $1\le i\le \deg$ and all $n\ge 2$ and $\lambda \in [0, \, {1\over n}]$, $$c_{25} \left( \sqrt {\lambda} + {1\over n} \right) \le \E[\beta_{n, \lambda}(e_i)] \le c_{26} \left( \sqrt {\lambda} + {1\over n} \right). \label{E(beta):kappa>2}$$ [(iii)]{} If $\kappa\in (1,2]$, then for any $1\le i\le \deg$, when $n\to \infty$ and uniformly in $\lambda \in [0, {1\over n}]$, $$\E[\beta_{n, \lambda}(e_i)] \; \approx \; \lambda^{1/\kappa} + {1\over n^{1/(\kappa-1)}} , \label{E(beta):kappa<2}$$ where $a_n \approx b_n$ denotes as before $\lim_{n\to \infty} \, {\log a_n \over \log b_n} =1$. The proof of Proposition \[p:beta-gamma\] is postponed until Section \[s:beta-gamma\]. By admitting it for the moment, we are able to prove Theorem \[t:nullrec\]. 
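[**Remark.**]{} At the level of exponents, which is all that will be used in the sequel, the estimates (\[E(beta):kappa>2\]) and (\[E(beta):kappa<2\]) can be summarized in a single form:

```latex
% Unified form of parts (ii) and (iii), for lambda in [0, 1/n]:
\E[\beta_{n, \lambda}(e_i)] \; \approx \;
  \lambda^{1/\min\{\kappa, \, 2\}} + {1\over n^{1/(\min\{\kappa, \, 2\}-1)}} .
```

In particular, for $\lambda = n^{-r}$ with $1< r< {1\over \nu}$, where $\nu := 1- {1\over \min\{ \kappa, \, 2\} }$ as in (\[theta\]), the first term dominates; this is precisely the choice of $\lambda$ made in the proof of the upper bound below.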
[*Proof of Theorem \[t:nullrec\].*]{} Assume $p= {1\over \deg}$ and $\psi'(1)<0$. Let $\pi$ be an invariant measure. By (\[pi\]) and the definition of $(M_n)$, $\sum_{x\in \T_n} \pi(x) \ge c_0 \, M_n$. Therefore by (\[cvg-M\]), we have $\sum_{x\in \T} \pi(x) =\infty$, $\P$-a.s., implying that $(X_n)$ is null recurrent. We proceed to prove the lower bound in (\[nullrec\]). By (\[gamma\]) and the ellipticity condition on the environment, $\gamma_n (x) \le {1\over \omega(x, {\buildrel \leftarrow \over x} )} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i) \le c_{27} + \sum_{i=1}^\deg A(x_i) \gamma_n(x_i)$. Iterating the argument yields $$\gamma_n (e_i) \le c_{27} \left( 1+ \sum_{j=2}^{n-1} M_j^{(e_i)}\right), \qquad n\ge 3,$$ where $$M_j^{(e_i)} := \sum_{x\in \T_j} \prod_{y\in ] \! ] e_i, x] \! ]} A(y).$$ For future use, we also observe that $$\label{defMei1} M_n= \sum_{i=1}^\deg \, A(e_i) \, M^{(e_i)}_n, \qquad n\ge 2.$$ Let $1\le i\le \deg$. Since $(M_j^{(e_i)}, \, j\ge 2)$ is distributed as $(M_{j-1}, \, j\ge 2)$, it follows from (\[cvg-M\]) that $M_j^{(e_i)}$ converges (when $j\to \infty$) almost surely, which implies $\gamma_n (e_i) \le c_{28}(\omega) \, n$. Plugging this into (\[E(tau)\]), we see that for all $n\ge 3$, $$E_\omega \left( \tau_n \right) \le {c_{29}(\omega) \, n \over \sum_{i=1}^\deg \omega(e,e_i) \beta_n(e_i)} \le {c_{30}(\omega) \, n \over \beta_n(e_1)}, \label{toto2}$$ the last inequality following from the ellipticity assumption on the environment. We now bound $\beta_n(e_1)$ from below (for large $n$). Let $1\le i\le \deg$. By (\[comp-Laplace\]), for $\lambda \in [0,\, 1]$ and $s\ge 0$, $$\E \left\{ \exp \left[ -s \, {\beta_{n, \lambda} (e_i) \over \E [\beta_{n, \lambda} (e_i)]} \right] \right\} \le \left\{ \E \left( \ee^{-s \, M_n/\Theta} \right) \right\}^{1/\deg} \le \left\{ \E \left(\ee^{-s \, M_\infty/\Theta} \right) \right\}^{1/\deg} ,$$ where, in the last inequality, we used the fact that $(M_n)$ is a uniformly integrable martingale. 
Let $\varepsilon>0$. Applying (\[M-lowertail\]) to $s:= n^{\varepsilon}$, we see that $$\sum_n \E \left\{ \exp \left[ -n^{\varepsilon} {\beta_{n, \lambda} (e_i) \over \E[\beta_{n, \lambda} (e_i)]} \right] \right\} <\infty . \label{toto3}$$ In particular, $\sum_n \exp [ -n^{\varepsilon} {\beta_n (e_1) \over \E [\beta_n (e_1)]} ]$ is $\P$-almost surely finite (by taking $\lambda=0$; recalling that $\beta_n (\cdot) := \beta_{n, 0} (\cdot)$). Thus, for $\P$-almost all $\omega$ and all sufficiently large $n$, $\beta_n (e_1) \ge n^{-\varepsilon} \, \E [\beta_n (e_1)]$. Going back to (\[toto2\]), we see that for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega \left( \tau_n \right) \le {c_{30}(\omega) \, n^{1+\varepsilon} \over \E [\beta_n (e_1)]}.$$ Let $m(n):= \lfloor {n^{1+2\varepsilon} \over \E [\beta_n (e_1)]} \rfloor$. By Chebyshev’s inequality, for $\P$-almost all $\omega$ and all sufficiently large $n$, $P_\omega ( \tau_n \ge m(n) ) \le c_{31}(\omega) \, n^{-\varepsilon}$. Considering the subsequence $n_k:= \lfloor k^{2/\varepsilon}\rfloor$, we see that $\sum_k P_\omega ( \tau_{n_k} \ge m(n_k) )< \infty$, $\P$-a.s. By the Borel–Cantelli lemma, for $\P$-almost all $\omega$ and $P_\omega$-almost all sufficiently large $k$, $\tau_{n_k} < m(n_k)$, which implies that for $n\in [n_{k-1}, n_k]$ and large $k$, we have $\tau_n < m(n_k) \le {n_k^{1+2\varepsilon} \over \E [\beta_{n_k} (e_1)]} \le {n^{1+3\varepsilon} \over \E [\beta_n(e_1)]}$ (the last inequality following from the estimate of $\E [\beta_n(e_1)]$ in Proposition \[p:beta-gamma\]). In view of Proposition \[p:beta-gamma\], and since $\varepsilon$ can be as small as possible, this gives the lower bound in (\[nullrec\]) of Theorem \[t:nullrec\]. To prove the upper bound, we note that $\alpha_{n,\lambda}(x) \le \beta_n(x)$ for any $\lambda\ge 0$ and any $0<|x|\le n$ (this is easily checked by induction on $|x|$). 
Thus, by (\[Laplace-tau\]), for any $\lambda\ge 0$, $$E_\omega\left( \ee^{- \lambda \tau_n} \right) \le {\sum_{i=1}^\deg \omega (e, e_i) \beta_n (e_i) \over \sum_{i=1}^\deg \omega (e, e_i) \beta_{n,\lambda} (e_i)} \le \sum_{i=1}^\deg {\beta_n (e_i) \over \beta_{n,\lambda} (e_i)}.$$ We now fix $r\in (1, \, {1\over \nu})$, where $\nu:= 1- {1\over \min\{ \kappa, \, 2\} }$ is defined in (\[theta\]). It is possible to choose a small $\varepsilon>0$ such that $${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon \quad \hbox{if }\kappa \in (1, \, 2], \qquad 1 - {r\over 2}> 3\varepsilon \quad \hbox{if }\kappa \in (2, \, \infty].$$ Let $\lambda = \lambda(n) := n^{-r}$. By (\[toto3\]), we have $\beta_{n,n^{-r}} (e_i) \ge n^{-\varepsilon}\, \E [\beta_{n,n^{-r}} (e_i)]$ for $\P$-almost all $\omega$ and all sufficiently large $n$, which yields $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^\varepsilon \sum_{i=1}^\deg {\beta_n (e_i) \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ It is easy to bound $\beta_n (e_i)$. For any given $x\in \T \backslash \{ e\}$ with $|x|\le n$, $n\mapsto \beta_n (x)$ is non-increasing (this is easily checked by induction on $|x|$). Chebyshev’s inequality, together with the Borel–Cantelli lemma (applied to a subsequence, as we did in the proof of the lower bound) and the monotonicity of $n\mapsto \beta_n(e_i)$, readily yields $\beta_n (e_i) \le n^\varepsilon \, \E [\beta_n (e_i)]$ for almost all $\omega$ and all sufficiently large $n$. 
As a consequence, for $\P$-almost all $\omega$ and all sufficiently large $n$, $$E_\omega\left( \ee^{- n^{-r} \tau_n} \right) \le n^{2\varepsilon} \sum_{i=1}^\deg {\E [\beta_n (e_i)] \over \E [\beta_{n, n^{-r}} (e_i)]} .$$ By Proposition \[p:beta-gamma\], this yields $E_\omega ( \ee^{- n^{-r} \tau_n} ) \le n^{-\varepsilon}$ (for $\P$-almost all $\omega$ and all sufficiently large $n$; this is where we use ${1\over \kappa -1} - {r\over \kappa}> 3\varepsilon$ if $\kappa \in (1, \, 2]$, and $1 - {r\over 2}> 3\varepsilon$ if $\kappa \in (2, \, \infty]$). In particular, for $n_k:= \lfloor k^{2/\varepsilon} \rfloor$, we have $\P$-almost surely, $E_\omega ( \sum_k \ee^{- n_k^{-r} \tau_{n_k}} ) < \infty$, which implies that, $\p$-almost surely for all sufficiently large $k$, $\tau_{n_k} \ge n_k^r$. This implies that $\p$-almost surely for all sufficiently large $n$, $\tau_n \ge {1\over 2}\, n^r$. The upper bound in (\[nullrec\]) of Theorem \[t:nullrec\] follows.$\Box$ Proposition \[p:beta-gamma\] is proved in Section \[s:beta-gamma\]. Proof of Proposition \[p:beta-gamma\] {#s:beta-gamma} ===================================== Let $\theta \in [0,\, 1]$. Let $(Z_{n,\theta})$ be a sequence of random variables, such that $Z_{1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i$, where $(A_i, \, 1\le i\le \deg)$ is distributed as $(A(x_i), \, 1\le i\le \deg)$ (for any $x\in \T$), and that $$Z_{j+1,\theta} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i {\theta + Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } , \qquad \forall\, j\ge 1, \label{ZW}$$ where $Z_{j,\theta}^{(i)}$ (for $1\le i \le \deg$) are independent copies of $Z_{j,\theta}$, and are independent of the random vector $(A_i, \, 1\le i\le \deg)$. 
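The distributional recursion (\[ZW\]) can be simulated by a particle (bootstrap) approximation: maintain a large i.i.d. sample of $Z_{j,\theta}$ and build $Z_{j+1,\theta}$ from fresh draws of $(A_i)$ and resampled copies. The sketch below is illustrative only; the two-point law for $A$ and the population size are hypothetical choices with $\E(A)={1\over \deg}$.

```python
import numpy as np

rng = np.random.default_rng(1)

DEG = 2
A_VALUES = np.array([0.3, 0.7])  # hypothetical law with E[A] = 1/DEG
POP = 50_000                     # particle population approximating the law of Z_j

def mean_Z(n, theta):
    """Monte Carlo estimate of E[Z_{n,theta}] via the recursion (ZW)."""
    # Z_{1,theta} has the law of A_1 + ... + A_DEG
    z = rng.choice(A_VALUES, size=(POP, DEG)).sum(axis=1)
    for _ in range(n - 1):
        a = rng.choice(A_VALUES, size=(POP, DEG))    # fresh (A_i)
        idx = rng.integers(0, POP, size=(POP, DEG))  # resampled independent copies of Z_j
        z = (a * (theta + z[idx]) / (1.0 + z[idx])).sum(axis=1)
    return z.mean()

for n in (5, 10, 20, 40):
    print(n, mean_Z(n, 0.0))  # strictly decreasing in n
```

For this toy law, $\E(Z_{n,0})$ is strictly decreasing in $n$, since $\E(Z_{j+1,0}) = \E[Z_{j,0}/(1+Z_{j,0})] < \E(Z_{j,0})$ when $\deg \E(A)=1$, consistent with the decay asserted in Proposition \[p:beta-gamma\].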
Then, for any given $n\ge 1$ and $\lambda\ge 0$, $$Z_{n, 1-\ee^{-2\lambda}} \; \buildrel law \over = \; \sum_{i=1}^\deg A_i\, \beta_{n, \lambda}(e_i) , \label{Z=beta}$$ provided $(A_i, \, 1\le i\le \deg)$ and $(\beta_{n, \lambda}(e_i), \, 1\le i\le \deg)$ are independent. \[p:concentration\] Assume $p={1\over \deg}$ and $\psi'(1)<0$. Let $\kappa$ be as in $(\ref{kappa})$. For all $a\in (1, \kappa) \cap (1, 2]$, we have $$\sup_{\theta \in [0,1]} \sup_{j\ge 1} {[\E (Z_{j,\theta} )^a ] \over (\E Z_{j,\theta})^a} < \infty.$$ [*Proof of Proposition \[p:concentration\].*]{} Let $a\in (1,2]$. Conditioning on $A_1$, $\dots$, $A_\deg$, we can apply Lemma \[l:moment\] to see that $$\begin{aligned} &&\E \left[ \left( \, \sum_{i=1}^\deg A_i {\theta+ Z_{j,\theta}^{(i)} \over 1+ Z_{j,\theta}^{(i)} } \right)^a \Big| A_1, \dots, A_\deg \right] \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + (\deg-1) \left[ \sum_{i=1}^\deg A_i\, \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a \\ &\le& \sum_{i=1}^\deg A_i^a \, \E \left[ \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a,\end{aligned}$$ where $c_{32}$ depends on $a$, $\deg$ and the bound of $A$ (recalling that $A$ is bounded away from 0 and infinity). 
Taking expectation on both sides, and in view of (\[ZW\]), we obtain: $$\E[(Z_{j+1,\theta})^a] \le \deg \E(A^a) \E \left[ \left( {\theta+ Z_{j,\theta}\over 1+ Z_{j,\theta} }\right)^a \; \right] + c_{32} \left[ \E \left( {\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} } \right) \right]^a.$$ We divide by $(\E Z_{j+1,\theta})^a = [ \E({\theta+Z_{j,\theta}\over 1+ Z_{j,\theta} })]^a$ on both sides, to see that $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[ ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })^a] \over [\E ({\theta+ Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } + c_{32}.$$ Put $\xi = \theta+ Z_{j,\theta}$. By (\[RSD\]), we have $${\E[ ({\theta+Z_{j,\theta} \over 1+Z_{j,\theta} })^a] \over [\E ({\theta+Z_{j,\theta} \over 1+ Z_{j,\theta} })]^a } = {\E[ ({\xi \over 1- \theta+ \xi })^a] \over [\E ({ \xi \over 1- \theta+ \xi })]^a } \le {\E[\xi^a] \over [\E \xi ]^a } .$$ Applying Lemma \[l:moment\] to $k=2$ yields that $\E[\xi^a] = \E[( \theta+ Z_{j,\theta} )^a] \le \theta^a + \E[( Z_{j,\theta} )^a] + (\theta + \E( Z_{j,\theta} ))^a $. It follows that ${\E[ \xi^a] \over [\E \xi ]^a } \le {\E[ (Z_{j,\theta})^a] \over [\E Z_{j,\theta}]^a } +2$, which implies that for $j\ge 1$, $${\E[(Z_{j+1,\theta})^a]\over (\E Z_{j+1,\theta})^a} \le \deg \E(A^a) {\E[(Z_{j,\theta})^a]\over (\E Z_{j,\theta})^a} + (2 \deg \E(A^a)+ c_{32}).$$ Thus, if $\deg \E(A^a)<1$ (which is the case if $1<a<\kappa$), then $$\sup_{j\ge 1} {\E[ (Z_{j,\theta})^a] \over (\E Z_{j,\theta})^a} < \infty,$$ uniformly in $\theta \in [0, \, 1]$.$\Box$ We now turn to the proof of Proposition \[p:beta-gamma\]. For the sake of clarity, the proofs of (\[comp-Laplace\]), (\[E(beta):kappa>2\]) and (\[E(beta):kappa<2\]) are presented in three distinct parts.
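The two subsection proofs that follow both reduce to one-dimensional recursions of the shape $u_{j+1} \approx \theta + u_j - c\, u_j^{\kappa}$, from which Lemma \[l:abc\] extracts the scaling $u_n \asymp \theta^{1/\kappa} + n^{-1/(\kappa-1)}$. A quick deterministic check, with arbitrary illustrative constants:

```python
def iterate(theta, n, c=0.5, kappa=2.0, u0=0.5):
    """Iterate u_{j+1} = theta + u_j - c * u_j**kappa, the model recursion."""
    u = u0
    for _ in range(n):
        u = theta + u - c * u ** kappa
    return u

# theta = 0: decay like n^{-1/(kappa-1)}, i.e. 1/(c*n) when kappa = 2
print(iterate(0.0, 100_000) * 0.5 * 100_000)        # -> close to 1

# theta > 0: the iteration stabilizes at the root of theta = c * u**kappa,
# i.e. at the theta^{1/kappa} scale
print(iterate(1e-4, 100_000), (1e-4 / 0.5) ** 0.5)  # both about 0.0141
```

With $\theta=0$ the iterates decay like $1/(cn)$, matching $n^{-1/(\kappa-1)}$ for $\kappa=2$; with $\theta>0$ they stabilize at the root of $\theta = c\,u^{\kappa}$, i.e. at the $\theta^{1/\kappa}$ scale.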
Proof of (\[comp-Laplace\]) {#subs:beta} --------------------------- By (\[exp\]) and (\[ZW\]), we have, for all $\theta\in [0, \, 1]$ and $j\ge 1$, $$\E \left\{ \exp\left( - t \, { Z_{j+1, \theta} \over \E (Z_{j+1, \theta})}\right) \right\} \le \E \left\{ \exp\left( - t \sum_{i=1}^\deg A_i { Z^{(i)}_{j, \theta} \over \E (Z^{(i)}_{j, \theta}) }\right) \right\}, \qquad t\ge 0.$$ Let $f_j(t) := \E \{ \exp ( - t { Z_{j, \theta} \over \E Z_{j, \theta}} )\}$ and $g_j(t):= \E (\ee^{ -t\, M_j})$ (for $j\ge 1$). We have $$f_{j+1}(t) \le \E \left( \prod_{i=1}^\deg f_j(t A_i) \right), \quad j\ge 1.$$ On the other hand, by (\[defMei1\]), $$g_{j+1}(t) = \E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) M^{(e_i)}_{j+1} \right) \right\} = \E \left( \prod_{i=1}^\deg g_j(t A_i) \right), \qquad j\ge 1.$$ Since $f_1(\cdot)= g_1(\cdot)$, it follows by induction on $j$ that for all $j\ge 1$, $f_j(t) \le g_j(t)$; in particular, $f_n(t) \le g_n(t)$. We take $\theta = 1- \ee^{-2\lambda}$. In view of (\[Z=beta\]), we have proved that $$\E \left\{ \exp\left( - t \sum_{i=1}^\deg A(e_i) {\beta_{n, \lambda}(e_i) \over \E [\beta_{n, \lambda}(e_i)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} , \label{beta_n(e)}$$ which yields (\[comp-Laplace\]).$\Box$ [**Remark.**]{} Let $$\beta_{n,\lambda}(e) := {(1-\ee^{-2\lambda})+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i) \over 1+ \sum_{i=1}^\deg A(e_i) \beta_{n,\lambda}(e_i)}.$$ By (\[beta\_n(e)\]) and (\[exp\]), if $\E(A)= {1\over \deg}$, then for $\lambda\ge 0$, $n\ge 1$ and $t\ge 0$, $$\E \left\{ \exp\left( - t {\beta_{n, \lambda}(e) \over \E [\beta_{n, \lambda}(e)] }\right) \right\} \le \E \left\{ \ee^{- t \, M_n} \right\} .$$ Proof of (\[E(beta):kappa>2\]) {#subs:kappa>2} --------------------------------- Assume $p={1\over \deg}$ and $\psi'(1)<0$.
Since $Z_{j, \theta}$ is bounded uniformly in $j$, we have, by (\[ZW\]), for $1\le j \le n-1$, $$\begin{aligned} \E(Z_{j+1, \theta}) &=& \E\left( {\theta+Z_{j, \theta} \over 1+Z_{j, \theta} } \right) \nonumber \\ &\le& \E\left[(\theta+ Z_{j, \theta} )(1 - c_{33}\, Z_{j, \theta} )\right] \nonumber \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \E\left[(Z_{j, \theta})^2\right] \label{E(Z2)} \\ &\le & \theta + \E(Z_{j, \theta}) - c_{33}\, \left[ \E Z_{j, \theta} \right]^2. \nonumber\end{aligned}$$ By Lemma \[l:abc\], we have, for any $K>0$ and uniformly in $\theta\in [0, \, {K\over n}]$, $$\label{53} \E (Z_{n, \theta}) \le c_{34} \left( \sqrt {\theta} + {1\over n} \right) \le {c_{35} \over \sqrt{n}}.$$ We mention that this holds for all $\kappa \in (1, \, \infty]$. In view of (\[Z=beta\]), this yields the upper bound in (\[E(beta):kappa>2\]). To prove the lower bound, we observe that $$\E(Z_{j+1, \theta}) \ge \E\left[(\theta+ Z_{j, \theta} )(1 - Z_{j, \theta} )\right] = \theta+ (1-\theta) \E(Z_{j, \theta}) - \E\left[(Z_{j, \theta})^2\right] . \label{51}$$ If furthermore $\kappa \in (2, \infty]$, then $\E [(Z_{j, \theta})^2 ] \le c_{36}\, (\E Z_{j, \theta})^2$ (see Proposition \[p:concentration\]). Thus, for all $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{36}\, (\E Z_{j,\theta})^2 .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of (\[Z=beta\]) and Lemma \[l:abc\] readily yields the lower bound in (\[E(beta):kappa>2\]).$\Box$ Proof of (\[E(beta):kappa<2\]) {#subs:kappa<2} --------------------------------- We assume in this part $p={1\over \deg}$, $\psi'(1)<0$ and $1<\kappa \le 2$. Let $\varepsilon>0$ be small.
Since $(Z_{j, \theta})$ is bounded, we have $\E[(Z_{j, \theta})^2] \le c_{37} \, \E [(Z_{j, \theta})^{\kappa-\varepsilon}]$, which, by Proposition \[p:concentration\], implies $$\E\left[ (Z_{j, \theta})^2 \right] \le c_{38} \, \left( \E Z_{j, \theta} \right)^{\kappa- \varepsilon} . \label{c38}$$ Therefore, (\[51\]) yields that $$\E(Z_{j+1, \theta}) \ge \theta+ (1-\theta) \E(Z_{j, \theta}) - c_{38} \, (\E Z_{j, \theta})^{\kappa-\varepsilon} .$$ By (\[53\]), $\E (Z_{n, \theta}) \to 0$ uniformly in $\theta\in [0, \, {K\over n}]$ (for any given $K>0$). An application of Lemma \[l:abc\] implies that for any $K>0$, $$\E (Z_{\ell, \theta}) \ge c_{14} \left( \theta^{1/(\kappa-\varepsilon)} + {1\over \ell^{1/(\kappa -1 - \varepsilon)}} \right), \qquad \forall \, \theta\in [0, \, {K\over n}], \; \; \forall \, 1\le \ell \le n. \label{ell}$$ The lower bound in (\[E(beta):kappa<2\]) follows from (\[Z=beta\]). It remains to prove the upper bound. Define $$Y_{j, \theta} := {Z_{j, \theta} \over \E(Z_{j, \theta})} , \qquad 1\le j\le n.$$ We take $Z_{j-1, \theta}^{(x)}$ (for $x\in \T_1$) to be independent copies of $Z_{j-1, \theta}$, and independent of $(A(x), \; x\in \T_1)$.
By (\[ZW\]), for $2\le j\le n$, $$\begin{aligned} Y_{j, \theta} &\; {\buildrel law \over =} \;& \sum_{x\in \T_1} A(x) {(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) \over \E [(\theta+ Z_{j-1, \theta}^{(x)} )/ (1+ Z_{j-1, \theta}^{(x)}) ]} \ge \sum_{x\in \T_1} A(x) {Z_{j-1, \theta}^{(x)} / (1+ Z_{j-1, \theta}^{(x)}) \over \theta+ \E [Z_{j-1, \theta}]} \\ &=& { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - { \E [Z_{j-1, \theta}]\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2/\E(Z_{j-1, \theta}) \over 1+Z_{j-1, \theta}^{(x)}} \\ &\ge& \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta} \; ,\end{aligned}$$ where $$\begin{aligned} Y_{j-1, \theta}^{(x)} &:=&{Z_{j-1, \theta}^{(x)} \over \E(Z_{j-1, \theta})} , \\ \Delta_{j-1, \theta} &:=&{\theta\over \theta+ \E [Z_{j-1, \theta}]} \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} + \sum_{x\in \T_1} A(x) {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})} .\end{aligned}$$ By (\[c38\]), $\E[ {(Z_{j-1, \theta}^{(x)})^2 \over \E(Z_{j-1, \theta})}]\le c_{38}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. On the other hand, by (\[ell\]), $\E(Z_{j-1, \theta}) \ge c_{14}\, \theta^{1/(\kappa-\varepsilon)}$ for $2\le j \le n$, and thus ${\theta\over \theta+ \E [Z_{j-1, \theta}]} \le c_{39}\, (\E Z_{j-1, \theta})^{\kappa-1- \varepsilon}$. As a consequence, $\E( \Delta_{j-1, \theta} ) \le c_{40}\, (\E Z_{j-1, \theta})^{\kappa-1-\varepsilon}$. If we write $\xi \; {\buildrel st. \over \ge} \; \eta$ to denote that $\xi$ is stochastically greater than or equal to $\eta$, then we have proved that $Y_{j, \theta} \; {\buildrel st. \over \ge} \; \sum_{x\in \T_1} A(x) Y_{j-1, \theta}^{(x)} - \Delta_{j-1, \theta}$. Applying the same argument to each of $(Y_{j-1, \theta}^{(x)}, \, x\in \T_1)$, we see that, for $3\le j\le n$, $$Y_{j, \theta} \; {\buildrel st.
\over \ge} \; \sum_{u\in \T_1} A(u) \sum_{v\in \T_2: \; u={\buildrel \leftarrow \over v}} A(v) Y_{j-2, \theta}^{(v)} - \left( \Delta_{j-1, \theta}+ \sum_{u\in \T_1} A(u) \Delta_{j-2, \theta}^{(u)} \right) ,$$ where $Y_{j-2, \theta}^{(v)}$ (for $v\in \T_2$) are independent copies of $Y_{j-2, \theta}$, and are independent of $(A(w), \, w\in \T_1 \cup \T_2)$, and $(\Delta_{j-2, \theta}^{(u)}, \, u\in \T_1)$ are independent of $(A(u), \, u\in \T_1)$ and are such that $\e[\Delta_{j-2, \theta}^{(u)}] \le c_{40}\, (\E Z_{j-2, \theta})^{\kappa-1-\varepsilon}$. By induction, we arrive at: for $j>m \ge 1$, $$Y_{j, \theta} \; {\buildrel st. \over \ge}\; \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) Y_{j-m, \theta}^{(x)} - \Lambda_{j,m,\theta}, \label{Yn>}$$ where $Y_{j-m, \theta}^{(x)}$ (for $x\in \T_m$) are independent copies of $Y_{j-m, \theta}$, and are independent of the random vector $(A(w), \, 1\le |w| \le m)$, and $\E(\Lambda_{j,m,\theta}) \le c_{40}\, \sum_{\ell=1}^m (\E Z_{j-\ell, \theta})^{\kappa-1-\varepsilon} $. Since $\E(Z_{i, \theta}) = \E({\theta+ Z_{i-1, \theta} \over 1+ Z_{i-1, \theta}}) \ge \E(Z_{i-1, \theta}) - \E[(Z_{i-1, \theta})^2] \ge \E(Z_{i-1, \theta}) - c_{38}\, [\E Z_{i-1, \theta} ]^{\kappa-\varepsilon}$ (by (\[c38\])), we have, for all $j\in (j_0, n]$ (with a large but fixed integer $j_0$) and $1\le \ell \le j-j_0$, $$\begin{aligned} \E(Z_{j, \theta}) &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{38}\, [\E Z_{j-i, \theta} ]^{\kappa-1-\varepsilon}\right\} \\ &\ge&\E(Z_{j-\ell, \theta}) \prod_{i=1}^\ell \left\{ 1- c_{41}\, (j-i)^{-(\kappa-1- \varepsilon)/2}\right\} ,\end{aligned}$$ the last inequality being a consequence of (\[53\]). Thus, for $j\in (j_0, n]$ and $1\le \ell \le j^{(\kappa-1-\varepsilon)/2}$, $\E(Z_{j, \theta}) \ge c_{42}\, \E(Z_{j-\ell, \theta})$, which implies that for all $m\le j^{(\kappa-1-\varepsilon)/2}$, $\E(\Lambda_{j,m, \theta}) \le c_{43} \, m (\E Z_{j, \theta})^{\kappa-1-\varepsilon}$. 
By Chebyshev’s inequality, for $j\in (j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P\left\{ \Lambda_{j,m, \theta} > \varepsilon r\right\} \le {c_{43} \, m (\E Z_{j, \theta})^{\kappa -1-\varepsilon} \over \varepsilon r}. \label{toto4}$$ Let us go back to (\[Yn>\]), and study the behaviour of $\sum_{x\in \T_m} ( \prod_{y\in ]\! ] e, x ]\! ]} A(y) ) Y_{j-m, \theta}^{(x)}$. Let $M^{(x)}$ (for $x\in \T_m$) be independent copies of $M_\infty$ and independent of all other random variables. Since $\E(Y_{j-m, \theta}^{(x)})= \E(M^{(x)})=1$, we have, by Fact \[f:petrov\], for any $a\in (1, \, \kappa)$, $$\begin{aligned} &&\E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} \\ &\le&2 \E \left\{ \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right) \, \E\left( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a \right) \right\}.\end{aligned}$$ By Proposition \[p:concentration\] and the fact that $(M_n)$ is a martingale bounded in $L^a$, we have $\E ( | Y_{j-m, \theta}^{(x)} - M^{(x)}|^a ) \le c_{44}$. Thus, $$\begin{aligned} \E \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right|^a \right\} &\le& 2c_{44} \E \left\{ \sum_{x\in \T_m} \prod_{y\in ]\! ] e, x ]\! ]} A(y)^a \right\} \\ &=& 2c_{44} \, \deg^m \, [\E(A^a)]^m.\end{aligned}$$ By Chebyshev’s inequality, $$\P \left\{ \left| \sum_{x\in \T_m} \left( \prod_{y\in ]\! ] e, x ]\! ]} A(y) \right) (Y_{j- m, \theta}^{(x)} - M^{(x)}) \right| > \varepsilon r\right\} \le {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a}. \label{toto6}$$ Clearly, $\sum_{x\in \T_m} (\prod_{y\in ]\! ] e, x ]\! ]} A(y) ) M^{(x)}$ is distributed as $M_\infty$.
We can thus plug (\[toto6\]) and (\[toto4\]) into (\[Yn>\]), to see that for $j\in [j_0, n]$, $m\le j^{(\kappa-1-\varepsilon)/2}$ and $r>0$, $$\P \left\{ Y_{j, \theta} > (1-2\varepsilon) r\right\} \ge \P \left\{ M_\infty > r\right\} - {c_{43}\, m (\E Z_{j, \theta})^{\kappa-1- \varepsilon} \over \varepsilon r} - {2c_{44} \, \deg^m [\E(A^a)]^m \over \varepsilon^a r^a} . \label{Yn-lb}$$ We choose $m:= \lfloor j^\varepsilon \rfloor$. Since $a\in (1, \, \kappa)$, we have $\deg \E(A^a) <1$, so that $\deg^m [\E(A^a)]^m \le \exp( - j^{\varepsilon/2})$ for all large $j$. We choose $r= {1\over (\E Z_{j, \theta})^{1- \delta}}$, with $\delta := {4\kappa \varepsilon \over \kappa -1}$. In view of (\[M-tail\]), we obtain: for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{23} \, (\E Z_{j, \theta})^{(1- \delta) \kappa} - {c_{43}\over \varepsilon} \, j^\varepsilon\, (\E Z_{j, \theta})^{\kappa-\varepsilon-\delta} - {2c_{44} \, (\E Z_{j, \theta})^{(1- \delta)a} \over \varepsilon^a \exp(j^{\varepsilon/2})} .$$ Since $c_{14}/j^{1/(\kappa-1- \varepsilon)} \le \E(Z_{j, \theta}) \le c_{35}/j^{1/2}$ (see (\[ell\]) and (\[53\]), respectively), we can pick a sufficiently small $\varepsilon$, so that for $j\in [j_0, n]$, $$\P \left\{ Y_{j, \theta} > {1-2\varepsilon\over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge {c_{23} \over 2} \, (\E Z_{j, \theta})^{(1-\delta) \kappa}.$$ Recall that by definition, $Y_{j, \theta} = {Z_{j, \theta} \over \E(Z_{j, \theta})}$. Therefore, for $j\in [j_0, n]$, $$\E[(Z_{j, \theta})^2] \ge [\E Z_{j, \theta}]^2 \, {(1-2\varepsilon)^2\over (\E Z_{j, \theta})^{2(1- \delta)}} \P \left\{ Y_{j, \theta} > {1-2\varepsilon \over (\E Z_{j, \theta})^{1- \delta}} \right\} \ge c_{45} \, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta}.$$ Of course, the inequality holds trivially for $0\le j < j_0$ (with possibly a different value of the constant $c_{45}$).
Plugging this into (\[E(Z2)\]), we see that for $1\le j\le n-1$, $$\E(Z_{j+1, \theta}) \le \theta + \E(Z_{j, \theta}) - c_{46}\, (\E Z_{j, \theta})^{\kappa+ (2- \kappa)\delta} .$$ By Lemma \[l:abc\], this yields $\E(Z_{n, \theta}) \le c_{47} \, \{ \theta^{1/[\kappa+ (2- \kappa)\delta]} + n^{- 1/ [\kappa -1 + (2- \kappa)\delta]}\}$. An application of (\[Z=beta\]) implies the desired upper bound in (\[E(beta):kappa<2\]).$\Box$ [**Remark.**]{} A close inspection of our argument shows that under the assumptions $p= {1\over \deg}$ and $\psi'(1)<0$, we have, for any $1\le i \le \deg$ and uniformly in $\lambda \in [0, \, {1\over n}]$, $$\left( {\alpha_{n, \lambda}(e_i) \over \E[\alpha_{n, \lambda}(e_i)]} ,\; {\beta_{n, \lambda}(e_i) \over \E[\beta_{n, \lambda}(e_i)]} , \; {\gamma_n(e_i) \over \E[\gamma_n (e_i)]} \right) \; {\buildrel law \over \longrightarrow} \; (M_\infty, \, M_\infty, \, M_\infty),$$ where “${\buildrel law \over \longrightarrow}$” stands for convergence in distribution, and $M_\infty$ is the random variable defined in $(\ref{cvg-M})$.$\Box$ [**Acknowledgements**]{} We are grateful to Philippe Carmona and Marc Yor for helpful discussions. [99]{} Chernoff, H. (1952). A measure of asymptotic efficiency for tests of a hypothesis based on the sum of observations. [*Ann. Math. Statist.*]{} [**23**]{}, 493–507. Duquesne, T. and Le Gall, J.-F. (2002). [*Random Trees, Lévy Processes and Spatial Branching Processes.*]{} Astérisque [**281**]{}. Société Mathématique de France, Paris. Griffeath, D. and Liggett, T.M. (1982). Critical phenomena for Spitzer’s reversible nearest particle systems. [*Ann. Probab.*]{} [**10**]{}, 881–895. Harris, T.E. (1963). [*The Theory of Branching Processes.*]{} Springer, Berlin. Hoel, P., Port, S. and Stone, C. (1972). [*Introduction to Stochastic Processes.*]{} Houghton Mifflin, Boston. Kesten, H., Kozlov, M.V. and Spitzer, F. (1975). A limit law for random walk in a random environment. [*Compositio Math.*]{} [**30**]{}, 145–168.
Le Gall, J.-F. (2005). Random trees and applications. [*Probab. Surveys*]{} [**2**]{}, 245–311. Liggett, T.M. (1985). [*Interacting Particle Systems.*]{} Springer, New York. Liu, Q.S. (2000). On generalized multiplicative cascades. [*Stoch. Proc. Appl.*]{} [**86**]{}, 263–286. Liu, Q.S. (2001). Asymptotic properties and absolute continuity of laws stable by random weighted mean. [*Stoch. Proc. Appl.*]{} [**95**]{}, 83–107. Lyons, R. and Pemantle, R. (1992). Random walk in a random environment and first-passage percolation on trees. [*Ann. Probab.*]{} [**20**]{}, 125–136. Lyons, R. and Peres, Y. (2005+). [*Probability on Trees and Networks.*]{} (Forthcoming book) [http://mypage.iu.edu/\~rdlyons/prbtree/prbtree.html]{} Mandelbrot, B. (1974). Multiplications aléatoires itérées et distributions invariantes par moyenne pondérée aléatoire. [*C. R. Acad. Sci. Paris*]{} [**278**]{}, 289–292. Menshikov, M.V. and Petritis, D. (2002). On random walks in random environment on trees and their relationship with multiplicative chaos. In: [*Mathematics and Computer Science II (Versailles, 2002)*]{}, pp. 415–422. Birkhäuser, Basel. Pemantle, R. (1995). Tree-indexed processes. [*Statist. Sci.*]{} [**10**]{}, 200–213. Pemantle, R. and Peres, Y. (1995). Critical random walk in random environment on trees. [*Ann. Probab.*]{} [**23**]{}, 105–140. Pemantle, R. and Peres, Y. (2005+). The critical Ising model on trees, concave recursions and nonlinear capacity. [ArXiv:math.PR/0503137.]{} Peres, Y. (1999). Probability on trees: an introductory climb. In: [*École d’Été St-Flour 1997*]{}, Lecture Notes in Mathematics [**1717**]{}, pp. 193–280. Springer, Berlin. Petrov, V.V. (1995). [*Limit Theorems of Probability Theory.*]{} Clarendon Press, Oxford. Rozikov, U.A. (2001). Random walks in random environments on the Cayley tree. [*Ukrainian Math. J.*]{} [**53**]{}, 1688–1702. Sinai, Ya.G. (1982). The limit behavior of a one-dimensional random walk in a random environment. [*Theory Probab. 
Appl.*]{} [**27**]{}, 247–258. Sznitman, A.-S. (2005+). Random motions in random media. (Lecture notes of minicourse at Les Houches summer school.) [http://www.math.ethz.ch/u/sznitman/]{} Zeitouni, O. (2004). Random walks in random environment. In: [*École d’Été St-Flour 2001*]{}, Lecture Notes in Mathematics [**1837**]{}, pp. 189–312. Springer, Berlin. -- ------------------------------ --------------------------------------------------- Yueyun Hu Zhan Shi Département de Mathématiques Laboratoire de Probabilités et Modèles Aléatoires Université Paris XIII Université Paris VI 99 avenue J-B Clément 4 place Jussieu F-93430 Villetaneuse F-75252 Paris Cedex 05 France France -- ------------------------------ ---------------------------------------------------
723 P.2d 394 (1986) L. Lynn ALLEN and Merle Allen, Plaintiffs and Respondents, v. Thomas M. KINGDON and Joan O. Kingdon, Defendants and Appellants. No. 18290. Supreme Court of Utah. July 29, 1986. H. James Clegg, Scott Daniels, Salt Lake City, for defendants and appellants. Boyd M. Fullmer, Salt Lake City, for plaintiffs and respondents. HOWE, Justice: The plaintiffs Allen (buyers) brought this action for the return of all money they had paid on an earnest money agreement to purchase residential real estate. The defendants Kingdon (sellers) appeal the trial court's judgment that the agreement had been rescinded by the parties and that the buyers were entitled to a full refund. *395 On February 12, 1978, the buyers entered into an earnest money agreement to purchase the sellers' home for $87,500. The agreement provided for an immediate deposit of $1,000, which the buyers paid, to be followed by an additional down payment of $10,000 by March 15, 1978. The buyers were to pay the remainder of the purchase price at the closing which was set on or before April 15, 1978. The agreement provided for the forfeiture of all amounts paid by the buyers as liquidated and agreed damages in the event they failed to complete the purchase. The buyers did not pay the additional $10,000, but paid $9,800 because the parties later agreed on a $200 deduction for a light fixture the sellers were allowed to take from the home. An inscription on the $9,800 check stated all monies paid were "subject to closing." There were several additional exchanges between the parties after the earnest money agreement was signed. The buyers requested that the sellers fix the patio, which the sellers refused to do. The buyers asked that the sellers paint the front of the home, which Mr. Kingdon agreed to do, but did not accomplish. The parties eventually met to close the sale. The buyers insisted on a $500 deduction from the purchase price because of the sellers' failure to paint. 
The sellers refused to convey title unless the buyers paid the full purchase price. Because of this impasse, the parties did not close the transaction. Mrs. Allen and Mrs. Kingdon left the meeting, after which Mr. Kingdon orally agreed to refund the $10,800, paid by the buyers. However, three days later, the sellers' attorney sent a letter to the buyers advising them that the sellers would retain enough of the earnest money to cover any damages they would incur in reselling the home. The letter also stated that the buyers could avoid these damages by closing within ten days. The buyers did not offer to close the sale. The home was eventually sold for $89,100, less a commission of $5,346. Claiming damages in excess of $15,000, the sellers retained the entire $10,800 and refused to make any refund to the buyers. The trial court found that the parties had orally rescinded their agreement and ordered the sellers to return the buyers' payments, less $1,000 on a counterclaim of the sellers, which award is not challenged on this appeal. The sellers first contend that the trial court erred in holding that our statute of frauds permits oral rescission of a written executory contract for the sale of real property. U.C.A., 1953, § 25-5-1 provides: No estate or interest in real property, other than leases for a term not exceeding one year, nor any trust or power over or concerning real property or in any manner relating thereto, shall be created, granted, assigned, surrendered or declared otherwise than by operation of law, or by deed or conveyance in writing subscribed by the party creating, granting, assigning, surrendering or declaring the same, or by his lawful agent thereunto authorized by writing. (Emphasis added.) In Cutwright v. Union Savings & Investment Co., 33 Utah 486, 491-92, 94 P. 
984, 985 (1908), this Court interpreted section 25-5-1 as follows: No doubt the transfer of any interest in real property, whether equitable or legal, is within the statute of frauds; and no such interest can either be created, transferred, or surrendered by parol merely.... No doubt, if a parol agreement to surrender or rescind a contract for the sale of lands is wholly executory, and nothing has been done under it, it is within the statute of frauds, and cannot be enforced any more than any other agreement concerning an interest in real property may be. (Emphasis added.) In that case, the buyer purchased a home under an installment contract providing for the forfeiture of all amounts paid in the event the buyer defaulted. The buyer moved into the home but soon discontinued payments. He informed the seller that he would make no more payments on the contract, surrendered the key to the house, and vacated the premises. Soon thereafter, an assignee of the buyer's interest informed the seller that he intended to make the payments *396 under the contract and demanded possession. The seller refused to accept the payments, claiming that the contract had been mutually rescinded on the buyer's surrender of possession. We held that the statute of frauds generally requires the surrender of legal and equitable interests in land to be in writing. Where, however, an oral rescission has been executed, the statute of frauds may not apply. In Cutwright, surrender of possession by the buyer constituted sufficient part performance of the rescission agreement to remove it from the statute of frauds. This exception is one of several recognized by our cases. We have also upheld oral rescission of a contract for the sale of land when the seller, in reliance on the rescission, enters into a new contract to resell the land. Budge v. Barron, 51 Utah 234, 244-45, 169 P. 745, 748 (1917). 
In addition, an oral rescission by the buyer may be enforceable where the seller has breached the written contract. Thackeray v. Knight, 57 Utah 21, 27-28, 192 P. 263, 266 (1920). In the present case, the oral rescission involved the surrender of the buyers' equitable interest in the home under the earnest money agreement. Further, the rescission was wholly executory. There is no evidence of any part performance of the rescission or that the buyers substantially changed their position in reliance on the promise to discharge the contract. On the contrary, three days after the attempted closing, the sellers informed the buyers that they intended to hold them to the contract. It was only after the buyers continued in their refusal to close that the sellers placed the home on the market. The buyers argue that the weight of authority in the United States is to the effect that an executory contract for the sale of land within the statute of frauds may be orally rescinded. This may indeed be the case when there are acts of performance of the oral agreement sufficient to take it out of the statute of frauds. See Annot., 42 A.L.R.3d 242, 251 (1972). In support of their contention that an oral rescission of an earnest money agreement for the purchase of land is valid absent any acts of performance, the buyers rely on Niernberg v. Feld, 131 Colo. 508, 283 P.2d 640 (1955). In that case, the Colorado Supreme Court upheld the oral rescission of an executory contract for the sale of land under a statute of frauds which, like Utah's, applies specifically to the surrender of interests in land. The Colorado court concluded that the statute of frauds concerns the making of contracts only and does not apply to their revocation. However, the court did not attempt to reconcile its holding with the contradictory language of the controlling statute. For a contrary result under a similar statute and fact situation, see Waller v. Lieberman, 214 Mich. 428, 183 N.W. 235 (1921). 
In light of the specific language of Utah's statute of frauds and our decision in Cutwright v. Union Savings & Investment Co., supra, we decline to follow the Colorado case. We note that the annotator at 42 A.L.R.3d 257 points out that in Niernberg the rescission was acted upon in various ways. We hold in the instant case that the wholly executory oral rescission of the earnest money agreement was unenforceable under our statute of frauds. Nor were the buyers entitled to rescind the earnest money agreement because of the sellers' failure to paint the front of the home as promised. Cf. Thackeray v. Knight, 57 Utah at 27-28, 192 P. at 266 (buyer's oral rescission of contract for sale of land was valid when seller breached contract). The rule is well settled in Utah that if the original agreement is within the statute of frauds, a subsequent agreement that modifies any of the material parts of the original must also satisfy the statute. Golden Key Realty, Inc. v. Mantas, 699 P.2d 730, 732 (Utah 1985). An exception to this general rule has been recognized where a party has changed position by performing an oral modification so that it would be inequitable to permit the other party to found a claim or defense on the original agreement as unmodified. White v. Fox, 665 P.2d 1297, 1301 (Utah 1983) *397 (citing Bamberger Co. v. Certified Productions, Inc., 88 Utah 194, 201, 48 P.2d 489, 492 (1935), aff'd on rehearing, 88 Utah 213, 53 P.2d 1153 (1936)). There is no indication that the buyers changed their position in reliance on the sellers' promise to paint the front of the house. Thus, equitable considerations would not preclude the sellers from raising the unmodified contract as a defense to the claim of breach. The fact that the parties executed several other oral modifications of the written contract does not permit the buyers to rescind the contract for breach of an oral promise on which they did not rely to their detriment. 
We therefore hold that the buyers were not entitled to rescind the earnest money agreement because of the sellers' failure to perform an oral modification required to be in writing under the statute of frauds. The buyers also contend that they are entitled to the return of the $10,800 because the inscription on the $9,800 check stated that all monies were paid "subject to closing." The buyers argue that by conditioning the check in this manner they may, in effect, rewrite the earnest money agreement and relieve themselves of any liability for their own failure to close the sale. We cannot accept this argument. The buyers were under an obligation to pay the monies unconditionally. The sellers' acceptance of the inscribed check cannot be construed as a waiver of their right to retain the $10,800 when the buyers failed to perform the agreement. Having concluded that the buyers breached their obligation under the earnest money agreement, we must next consider whether the liquidated damages provision of the agreement is enforceable. That provision provided that the sellers could retain all amounts paid by the buyers as liquidated and agreed damages in the event the buyers failed to complete the purchase. The general rules in Utah regarding enforcement of liquidated damages for breach of contract have been summarized as follows: Under the basic principles of freedom of contract, a stipulation to liquidated damages for breach of contract is generally enforceable. Where, however, the amount of liquidated damages bears no reasonable relationship to the actual damage or is so grossly excessive as to be entirely disproportionate to any possible loss that might have been contemplated that it shocks the conscience, the stipulation will not be enforced. Warner v. Rasmussen, 704 P.2d 559, 561 (Utah 1985) (citations omitted). 
In support of their contention that the liquidated damages are not excessive compared to actual damages, the sellers assert that they offered evidence of actual damages in excess of $15,000. However, the trial court disagreed and found the amount of liquidated damages excessive. The record indicates that the only recoverable damages sustained by the sellers resulted from the resale of the home at a lower net price amounting to $3,746 (the difference between the contract price of $87,500 and the eventual selling price, less commission, of $83,754). We agree that $10,800 is excessive and disproportionate when compared to the $3,746 loss of bargain suffered by the sellers. Since the buyers did not ever have possession of the property, the other items of damage claimed by the sellers (interest on mortgage, taxes, and utilities) are not recoverable by them. Perkins v. Spencer, 121 Utah 468, 243 P.2d 446 (1952). Therefore, the sellers are not entitled to retain the full amount paid, but may offset their actual damages of $3,746 against the buyers' total payments. See Soffe v. Ridd, 659 P.2d 1082 (Utah 1983) (seller was entitled to actual damages where liquidated damages provision was held unenforceable). We reverse the trial court's judgment that the earnest money agreement was rescinded and conclude that the buyers breached their obligation to close the transaction. However, we affirm the judgment below that the liquidated damages provided for were excessive and therefore not recoverable. *398 The case is remanded to the trial court to amend the judgment to award the buyers $7,054, less $1,000 awarded by the trial court to the sellers on their counterclaim which is not challenged on this appeal. No interest or attorney fees are awarded to either party inasmuch as the trial court awarded none and neither party has raised the issue on appeal. HALL, C.J., and STEWART and DURHAM, JJ., concur. 
ZIMMERMAN, Justice (concurring): I join the majority in its disposition of the various issues. However, the majority quotes from Warner v. Rasmussen, 704 P.2d 559 (Utah 1985), to the effect that contractual provisions for liquidated damages will be enforced unless "the amount of liquidated damages bears no reasonable relationship to the actual damage or is so grossly excessive as to be entirely disproportionate to any loss that might have been contemplated that it shocks the conscience." The Court then finds that the amount of the liquidated damages provided for in the agreement is "excessive and disproportionate" when compared to the actual loss suffered by the sellers, thus implying that in the absence of a disparity as great as that which exists here (actual loss is approximately one-third of the penalty), the standard of Warner v. Rasmussen will not be satisfied. I think an examination of our cases should suggest to any thoughtful reader that, in application, the test stated in Warner is not nearly as accepting of liquidated damage provisions as the quoted language would suggest. In fact, I believe this Court routinely applies the alternative test of Warner—that the liquidated damages must bear some reasonable relationship to the actual damages—and that we carefully scrutinize liquidated damage awards. I think it necessary to say this lest the bar be misled by the rather loose language of Warner and its predecessors.
154 F.3d 417
U.S. v. Chukwuma*
No. 97-11093
United States Court of Appeals, Fifth Circuit.
July 29, 1998
Appeal From: N.D. Tex., No. 397CR104D
Affirmed.
* Fed. R. App. P. 34(a); 5th Cir. R. 34-2
--- abstract: 'Feature selection (FS) is a key preprocessing step in data mining. CFS (Correlation-Based Feature Selection) is an FS algorithm that has been successfully applied to classification problems in many domains. We describe Distributed CFS (DiCFS) as a completely redesigned, scalable, parallel and distributed version of the CFS algorithm, capable of dealing with the large volumes of data typical of big data applications. Two versions of the algorithm were implemented and compared using the Apache Spark cluster computing model, currently gaining popularity due to its much faster processing times than Hadoop’s MapReduce model. We tested our algorithms on four publicly available datasets, each consisting of a large number of instances and two also consisting of a large number of features. The results show that our algorithms were superior in terms of both time-efficiency and scalability. In leveraging a computer cluster, they were able to handle larger datasets than the non-distributed WEKA version while maintaining the quality of the results, i.e., exactly the same features were returned by our algorithms when compared to the original algorithm available in WEKA.' address: - | Systems Engineering Department,\ National Autonomous University of Honduras. Blvd. Suyapa, Tegucigalpa, Honduras - | Department of Computer Science, University of Alcalá\ Alcalá de Henares, 28871 Madrid, Spain - | Department of Computer Science, University of A Coruña\ Campus de Elviña s/n 15071 - A Coruña, Spain author: - 'Raul-Jose Palma-Mendoza' - 'Luis de-Marcos' - Daniel Rodriguez - 'Amparo Alonso-Betanzos' title: 'Distributed Correlation-Based Feature Selection in Spark' --- feature selection, scalability, big data, apache spark, cfs, correlation Introduction {#sec:intro} ============ In recent years, the advent of big data has raised unprecedented challenges for all types of organizations and researchers in many fields. Wu et al. 
[@XindongWu2014], however, state that the big data revolution has come to us not only with many challenges but also with plenty of opportunities for those organizations and researchers willing to embrace them. Data mining is one field where the opportunities offered by big data can be embraced, and, as indicated by Leskovec et al.  [@Leskovec2014mining], the main challenge is to extract useful information or knowledge from these huge data volumes that enable us to predict or better understand the phenomena involved in the generation of the data. Feature selection (FS) is a dimensionality reduction technique that has emerged as an important step in data mining. According to Guyon and Elisseeff [@Guyon2003] its purpose is twofold: to select relevant attributes and simultaneously to discard redundant attributes. This purpose has become even more important nowadays, as vast quantities of data need to be processed in all kinds of disciplines. Practitioners also face the challenge of not having enough computational resources. In a review of the most widely used FS methods, Bolón-Canedo et al. [@Bolon-Canedo2015b] conclude that there is a growing need for scalable and efficient FS methods, given that the existing methods are likely to prove inadequate for handling the increasing number of features encountered in big data. Depending on their relationship with the classification process, FS methods are commonly classified into one of three main categories: (i) filter methods, (ii) wrapper methods, or (iii) embedded methods. *Filters* rely solely on the characteristics of the data and, since they are independent of any learning scheme, they require less computational effort. They have been shown to be important preprocessing techniques, with many applications such as churn prediction [@Idris2012; @Idris2013] and microarray data classification. 
In microarray data classification, filters obtain accuracy results that are better than, or at least comparable to, those of wrappers [@Bolon-Canedo2015a]. In *wrapper* methods, the final subset selection is based on a learning algorithm that is repeatedly trained with the data. Although wrappers tend to increase the final accuracy of the learning scheme, they are usually more computationally expensive than the other two approaches. Finally, in *embedded* methods, FS is part of the classification process, e.g., as happens with decision trees. FS methods can also be classified, according to their results, as (i) ranker algorithms or (ii) subset selector algorithms. With *rankers*, the result is a sorted set of the original features. The order of this returned set is defined according to the quality that the FS method determines for each feature. Some rankers also assign a weight to each feature that provides more information about its quality. *Subset selectors* return a non-ordered subset of features from the original set so that together they yield the highest possible quality according to some given measure. Subset selectors, therefore, consist of a search procedure and an evaluation measure. This can be considered an advantage in many cases, as rankers usually evaluate features individually and leave it to the user to select the number of top features in a ranking. One filter-based subset selector method is the Correlation-Based Feature Selection (CFS) algorithm [@Hall2000], traditionally considered useful due to its ability not only to reduce dimensionality but also to improve classification algorithm performance. However, the CFS algorithm, like many other multivariate FS algorithms, has a time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances. This quadratic complexity in the number of features makes CFS very sensitive to *the curse of dimensionality* [@bellman1957dynamic]. 
Therefore, a scalable adaptation of the original algorithm is required to be able to apply the CFS algorithm to datasets that are large both in number of instances and dimensions. As a response to the big data phenomenon, many technologies and programming frameworks have appeared with the aim of helping data mining practitioners design new strategies and algorithms that can tackle the challenge of distributing work over clusters of computers. One such tool that has recently received much attention is Apache Spark [@Zaharia2010], which represents a new programming model that is a superset of the MapReduce model introduced by Google [@Dean2004a; @Dean2008]. One of Spark’s strongest advantages over the traditional MapReduce model is its ability to efficiently handle the iterative algorithms that frequently appear in the data mining and machine learning fields. We describe two distributed and parallel versions of the original CFS algorithm for classification problems using the Apache Spark programming model. The main difference between them is how the data is distributed across the cluster, i.e., using a horizontal partitioning scheme (hp) or using a vertical partitioning scheme (vp). We compare the two versions – DiCFS-hp and DiCFS-vp, respectively – and also compare them with a baseline, represented by the classical non-distributed implementation of CFS in WEKA [@Hall2009a]. Finally, their benefits in terms of reduced execution time are compared with those of the CFS version developed by Eiras-Franco et al. [@Eiras-Franco2016] for regression problems. The results show that the time-efficiency and scalability of our two versions are an improvement on those of the original version of the CFS; furthermore, similar or improved execution times are obtained with respect to the Eiras-Franco et al. [@Eiras-Franco2016] regression version. 
In the interest of reproducibility, our software and sources are available as a Spark package[^1] called DiCFS, with a corresponding mirror on GitHub.[^2] The rest of this paper is organized as follows. Section \[sec:stateofart\] summarizes the most important contributions in the area of distributed and parallel FS and proposes a classification according to how parallelization is carried out. Section \[sec:cFS\] describes the original CFS algorithm, including its theoretical foundations. Section \[sec:spark\] presents the main aspects of the Apache Spark computing framework, focusing on those relevant to the design and implementation of our proposed algorithms. Section \[sec:diCFS\] describes and discusses our DiCFS-hp and DiCFS-vp versions of the CFS algorithm. Section \[sec:experiments\] describes our experiments to compare results for DiCFS-hp and DiCFS-vp, the WEKA approach and the Eiras-Franco et al. [@Eiras-Franco2016] approach. Finally, conclusions and future work are outlined in Section \[sec:conclusions\]. Background and Related Work {#sec:stateofart} =========================== As might be expected, filter-based FS algorithms have asymptotic complexities that depend on the number of features and/or instances in a dataset. Many algorithms, such as the CFS, have quadratic complexities, while the most frequently used algorithms have at least linear complexities [@Bolon-Canedo2015b]. This is why, in recent years, many attempts have been made to achieve more scalable FS methods. In what follows, we analyse recent work on the design of new scalable FS methods according to parallelization approaches: (i) search-oriented, (ii) dataset-split-oriented, or (iii) filter-oriented. *Search-oriented* parallelizations account for most approaches, in that the main aspects to be parallelized are (i) the search guided by a classifier and (ii) the corresponding evaluation of the resulting models. We classify the following studies in this category: - Kubica et al. 
[@Kubica2011] developed parallel versions of three forward-search-based FS algorithms, where a wrapper with a logistic regression classifier is used to guide a search parallelized using the MapReduce model. - García et al. [@Garcia_aparallel] presented a simple approach for parallel FS, based on selecting random feature subsets and evaluating them in parallel using a classifier. In their experiments they used a support vector machine (SVM) classifier and, in comparing their results with those for a traditional wrapper approach, found lower accuracies but also much shorter computation times. - Wang et al. [@Wang2016] used the Spark computing model to implement an FS strategy for classifying network traffic. They first implemented an initial FS using the Fisher score filter [@duda2012pattern] and then performed, using a wrapper approach, a distributed forward search over the best $m$ features selected. Since the Fisher filter was used, however, only numerical features could be handled. - Silva et al. [@Silva2017] addressed the FS scaling problem using an asynchronous search approach, given that synchronous search, as commonly performed, can lead to efficiency losses due to the inactivity of some processors waiting for other processors to end their tasks. In their tests, they first obtained an initial reduction using a mutual information (MI) [@Peng2005] filter and then evaluated subsets using a random forest (RF) [@Ho1995] classifier. However, as stated by those authors, any other approach could be used for subset evaluation. *Dataset-split-oriented* approaches have the main characteristic that parallelization is performed by splitting the dataset vertically or horizontally, then applying existing algorithms to the parts and finally merging the results following certain criteria. We classify the following studies in this category: - Peralta et al. [@Peralta2015] used the MapReduce model to implement a wrapper-based evolutionary search FS method. 
The dataset was split by instances and the FS method was applied to each resulting subset. Simple majority voting was used as a reduction step for the selected features and the final subset of features was selected according to a user-defined threshold. All tests were carried out using the EPSILON dataset, which we also use here (see Section \[sec:experiments\]). - Bolón-Canedo et al. [@Bolon-Canedo2015a] proposed a framework to deal with high-dimensionality data by first optionally ranking features using an FS filter, then partitioning vertically by dividing the data according to features (columns) rather than, as commonly done, according to instances (rows). After partitioning, another FS filter is applied to each partition, and finally, a merging procedure guided by a classifier obtains a single set of features. The authors experiment with five commonly used FS filters for the partitions, namely, CFS [@Hall2000], Consistency [@Dash2003], INTERACT [@Zhao2007], Information Gain [@Quinlan1986] and ReliefF [@Kononenko1994], and with four classifiers for the final merging, namely, C4.5 [@Quinlan1992], Naive Bayes [@rish2001empirical], $k$-Nearest Neighbors [@Aha1991] and SVM [@vapnik1995nature], and show that their approach significantly reduces execution times while maintaining and, in some cases, even improving accuracy. Finally, *filter-oriented* methods include redesigned or new filter methods that are, or become, inherently parallel. Unlike the methods in the other categories, parallelization in this category of methods can be viewed as an internal, rather than external, element of the algorithm. We classify the following studies in this category: - Zhao et al. [@Zhao2013a] described a distributed parallel FS method based on a variance preservation criterion using the proprietary software SAS High-Performance Analytics. 
[^3] One remarkable characteristic of the method is its support not only for supervised FS, but also for unsupervised FS where no label information is available. Their experiments were carried out with datasets with both high dimensionality and a high number of instances. - Ramírez-Gallego et al. [@Ramirez-Gallego2017] described scalable versions of the popular mRMR [@Peng2005] FS filter that included a distributed version using Spark. The authors showed that, by leveraging the power of a cluster of computers, their version could perform much faster than the original and process much larger datasets. - In previous work [@Palma-Mendoza2018], we used the Spark computing model to design a distributed version of the ReliefF [@Kononenko1994] filter, called DiReliefF. In testing using datasets with large numbers of features and instances, it was much more efficient and scalable than the original filter. - Finally, Eiras-Franco et al. [@Eiras-Franco2016], using four distributed FS algorithms, three of them filters, namely, InfoGain [@Quinlan1986], ReliefF [@Kononenko1994] and the CFS [@Hall2000], reduce execution times with respect to the original versions. However, in the CFS case, the version of those authors focuses on regression problems where all the features, including the class label, are numerical, with correlations calculated using the Pearson coefficient. A completely different approach is required to design a parallel version for classification problems where correlations are based on information theory. The approach described here can be categorized as a *filter-oriented* approach that builds on works described elsewhere [@Ramirez-Gallego2017; @Palma-Mendoza2018; @Eiras-Franco2016]. 
The fact that their focus was not only on designing an efficient and scalable FS algorithm, but also on preserving the original behaviour (and obtaining the same final results) of traditional filters, means that research focused on those filters is also valid for adapted versions. Another important issue in relation to filters is that, since they are generally more efficient than wrappers, they are often the only feasible option due to the abundance of data. It is worth mentioning that scalable filters could feasibly be included in any of the methods mentioned in the *search-oriented* and *dataset-split-oriented* categories, where an initial filtering step is implemented to improve performance. Correlation-Based Feature Selection (CFS) {#sec:cFS} ========================================= The CFS method, originally developed by Hall [@Hall2000], is categorized as a subset selector because it evaluates subsets rather than individual features. For this reason, the CFS needs to perform a search over candidate subsets, but since performing a full search over all possible subsets is prohibitive (due to the exponential complexity of the problem), a heuristic has to be used to guide a partial search. This heuristic is the main concept behind the CFS algorithm, and, as a filter method, the CFS does not rely on a classification-derived measure, but rather applies a principle derived from Ghiselli’s test theory [@ghiselli1964theory], i.e., *good feature subsets contain features highly correlated with the class, yet uncorrelated with each other*. This principle is formalized in Equation (\[eq:heuristic\]) where $M_s$ represents the merit assigned by the heuristic to a subset $s$ that contains $k$ features, $\overline{r_{cf}}$ represents the average of the correlations between each feature in $s$ and the class attribute, and $\overline{r_{ff}}$ is the average correlation between each of the $\begin{psmallmatrix}k\\2\end{psmallmatrix}$ possible feature pairs in $s$. 
The numerator can be interpreted as an indicator of how predictive the feature set is and the denominator can be interpreted as an indicator of how redundant features in $s$ are. $$\label{eq:heuristic} M_s = \frac { k\cdot \overline { r_{cf} } }{ \sqrt { k + k (k - 1) \cdot \overline{ r_{ff}} } }$$ Equation (\[eq:heuristic\]) also posits the second important concept underlying the CFS, which is the computation of correlations to obtain the required averages. In classification problems, the CFS uses the symmetrical uncertainty (SU) measure [@press1982numerical] shown in Equation (\[eq:su\]), where $H$ represents the entropy function of a single or conditioned random variable, as shown in Equation (\[eq:entropy\]). This calculation adds a requirement for the dataset before processing, which is that all non-discrete features must be discretized. By default, this process is performed using the discretization algorithm proposed by Fayyad and Irani [@Fayyad1993]. $$\label{eq:su} SU = 2 \cdot \left[ \frac { H(X) - H(X|Y) }{ H(Y) + H(X) } \right]$$ $$\begin{aligned} \label{eq:entropy} H(X) &=-\sum _{ x\in X }{ p(x)\log _{2}{p(x)} } \nonumber \\ H(X | Y) &=-\sum _{ y\in Y }{ p(y) } \sum_{x \in X}{p(x |y) \log _{ 2 }{ p(x | y) } } \end{aligned}$$ The third core CFS concept is its search strategy. By default, the CFS algorithm uses a best-first search to explore the search space. The algorithm starts with an empty set of features and at each step of the search all possible single feature expansions are generated. The new subsets are evaluated using Equation (\[eq:heuristic\]) and are then added to a priority queue according to merit. In the subsequent iteration, the best subset from the queue is selected for expansion in the same way as was done for the first empty subset. If expanding the best subset fails to produce an improvement in the overall merit, this counts as a *fail* and the next best subset from the queue is selected. 
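To make the heuristic and the search concrete, the following minimal Python sketch (our own illustration, not the DiCFS or WEKA code; all names are ours) computes the entropies of Equation (\[eq:entropy\]), the symmetrical uncertainty of Equation (\[eq:su\]) and the merit of Equation (\[eq:heuristic\]) over already-discretized features, and runs the best-first search just described with the default five-fail stopping criterion:

```python
from collections import Counter
from itertools import combinations
from math import log2
import heapq

def entropy(xs):
    """H(X) of a discrete sequence, Equation (eq:entropy)."""
    n = len(xs)
    return -sum(c / n * log2(c / n) for c in Counter(xs).values())

def cond_entropy(xs, ys):
    """H(X|Y): weighted entropy of X within each group induced by Y."""
    n = len(ys)
    groups = {}
    for x, y in zip(xs, ys):
        groups.setdefault(y, []).append(x)
    return sum(len(g) / n * entropy(g) for g in groups.values())

def su(xs, ys):
    """Symmetrical uncertainty, Equation (eq:su)."""
    hx, hy = entropy(xs), entropy(ys)
    return 0.0 if hx + hy == 0 else 2 * (hx - cond_entropy(xs, ys)) / (hx + hy)

def merit(subset, data, cls):
    """Merit of a feature subset, Equation (eq:heuristic)."""
    k = len(subset)
    if k == 0:
        return 0.0
    r_cf = sum(su(data[f], cls) for f in subset) / k
    r_ff = (sum(su(data[a], data[b]) for a, b in combinations(subset, 2))
            / (k * (k - 1) / 2)) if k > 1 else 0.0
    return k * r_cf / (k + k * (k - 1) * r_ff) ** 0.5

def cfs(data, cls, max_fails=5, queue_cap=5):
    """Best-first forward search with a bounded priority queue."""
    best, best_merit = frozenset(), 0.0
    queue = [(0.0, [])]              # max-heap via negated merit
    n_fails = 0
    while queue and n_fails < max_fails:
        _, head = heapq.heappop(queue)
        for f in set(data) - set(head):          # expand the head subset
            cand = sorted(set(head) | {f})
            heapq.heappush(queue, (-merit(cand, data, cls), cand))
        queue = sorted(queue)[:queue_cap]        # trim to queue capacity
        if queue and -queue[0][0] > best_merit:  # improvement found
            best_merit, best = -queue[0][0], frozenset(queue[0][1])
            n_fails = 0
        else:
            n_fails += 1
    return best
```

For brevity the sketch recomputes merits and may re-expand previously visited subsets; a full implementation would cache correlations and track visited states.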
By default, the CFS uses five consecutive fails as a stopping criterion and as a limit on queue length. The final CFS element is an optional post-processing step. As stated before, the CFS tends to select feature subsets with low redundancy and high correlation with the class. However, in some cases, extra features that are *locally predictive* in a small area of the instance space may exist that can be leveraged by certain classifiers [@Hall1999]. To include these features in the subset after the search, the CFS can optionally use a heuristic that enables inclusion of all features whose correlation with the class is higher than the correlation between the features themselves and with features already selected. Algorithm \[alg:cFS\] summarizes the main aspects of the CFS.

    Corrs := correlations between all features and the class    \[lin:allCorrs\]
    BestSubset := ∅
    Queue.setCapacity(5)
    Queue.add(BestSubset)
    NFails := 0
    while NFails < 5 do
        if Queue is empty then return BestSubset
        HeadState := Queue.dequeue
        NewSubsets := evaluate(expand(HeadState), Corrs)    \[lin:expand\]
        Queue.add(NewSubsets)
        LocalBest := Queue.head
        if merit(LocalBest) > merit(BestSubset) then
            BestSubset := LocalBest
            NFails := 0
        else
            NFails := NFails + 1
    return BestSubset

The Spark Cluster Computing Model {#sec:spark} ================================= The following short description of the main concepts behind the Spark computing model focuses exclusively on aspects that complete the conceptual basis for our DiCFS proposal in Section \[sec:diCFS\]. The main concept behind the Spark model is what is known as the resilient distributed dataset (RDD). Zaharia et al. [@Zaharia2010; @Zaharia2012] defined an RDD as a read-only collection of objects, i.e., a dataset partitioned and distributed across the nodes of a cluster. The RDD has the ability to automatically recover lost partitions through a lineage record that tracks the origin of the data and the operations applied to it. 
Even more relevant for our purposes is the fact that operations run on an RDD are automatically parallelized by the Spark engine; this abstraction frees the programmer from having to deal with threads, locks and all other complexities of traditional parallel programming. With respect to the cluster architecture, Spark follows the master-slave model. Through a cluster manager (master), a driver program can access the cluster and coordinate the execution of a user application by assigning tasks to the executors, i.e., programs that run in worker nodes (slaves). By default, only one executor is run per worker. Regarding the data, RDD partitions are distributed across the worker nodes, and the number of tasks launched by the driver for each executor is set according to the number of RDD partitions residing in the worker. Two types of operations can be executed on an RDD, namely, actions and transformations. Of the *actions*, which allow results to be obtained from a Spark cluster, perhaps the most important is $collect$, which returns an array with all the elements in the RDD. This operation has to be done with care, to avoid exceeding the maximum memory available to the driver. Other important actions include $reduce$, $sum$, $aggregate$ and $sample$, but since we do not use them here, we will not explain them. *Transformations* are mechanisms for creating an RDD from another RDD. Since RDDs are read-only, a transformation creating a new RDD does not affect the original RDD. A basic transformation is $mapPartitions$, which receives, as a parameter, a function that can handle all the elements of a partition and return another collection of elements to form a new partition. The $mapPartitions$ transformation is applied to all partitions in the RDD to obtain a new transformed RDD. Since received and returned partitions do not need to match in size, $mapPartitions$ can thus reduce or increase the overall size of an RDD. 
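To make these semantics concrete, the following plain-Python sketch (no Spark required; the helper name is ours) mimics how $mapPartitions$ applies a function to whole partitions of a horizontally split dataset:

```python
def map_partitions(partitions, fn):
    """Mimic Spark's mapPartitions: fn consumes an iterator over one
    partition and may yield any number of output elements."""
    return [list(fn(iter(part))) for part in partitions]

# Example: per-partition sums. Each partition collapses to a single
# element, so the transformed "RDD" is smaller than the original.
parts = [[1, 2, 3], [4, 5]]
sums = map_partitions(parts, lambda it: [sum(it)])
# sums == [[6], [9]]
```

In real Spark the partitions would live on different workers and `fn` would be shipped to them, but the contract (iterator in, collection out, sizes free to differ) is the same.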
Another interesting transformation is $reduceByKey$; this can only be applied to what is known as a $PairRDD$, which is an RDD whose elements are key-value pairs, where the keys do not have to be unique. The $reduceByKey$ transformation is used to aggregate the elements of an RDD, which it does by applying a commutative and associative function that receives two values of the PairRDD as arguments and returns one element of the same type. This reduction is applied by key, i.e., elements with the same key are reduced such that the final result is a PairRDD with unique keys, whose corresponding values are the result of the reduction. Other important transformations (which we do not explain here) are $map$, $flatMap$ and $filter$. Another key concept in Spark is *shuffling*, which refers to the data communication required for certain types of transformations, such as the above-mentioned $reduceByKey$. Shuffling is a costly operation because it requires redistribution of the data in the partitions, and therefore, data reads and writes across all nodes in the cluster. For this reason, shuffling operations are minimized as much as possible. The final concept underpinning our proposal is *broadcasting*, which is a useful mechanism for efficiently sharing read-only data between all worker nodes in a cluster. Broadcast data is dispatched from the driver throughout the network and is thus made available to all workers in a deserialized fast-to-access form. Distributed Correlation-Based Feature Selection (DiCFS) {#sec:diCFS} ======================================================= We now describe the two algorithms that make up our proposal. They represent alternative distributed versions that use different partitioning strategies to process the data. We start with some considerations common to both approaches. As stated previously, CFS has a time complexity of $\mathcal{O}(m^2 \cdot n)$, where $m$ is the number of features and $n$ is the number of instances. 
This complexity derives from the first step shown in Algorithm \[alg:cFS\], the calculation of $\binom{m+1}{2}$ correlations between all pairs of features including the class, and the fact that for each pair, $\mathcal{O}(n)$ operations are needed in order to calculate the entropies. Thus, to develop a scalable version, our main focus in parallelization design must be on the calculation of correlations. Another important issue is that, although the original study by Hall [@Hall2000] stated that all correlations had to be calculated before the search, this is a true requisite only when a backward best-first search is performed. In the case of the search shown in Algorithm \[alg:cFS\], correlations can be calculated on demand, i.e., each time a not-yet-evaluated pair of features appears during the search. In fact, trying to calculate all correlations in any dataset with a high number of features and instances is prohibitive; the tests performed on the datasets described in Section \[sec:experiments\] show that a very low percentage of correlations is actually used during the search and also that on-demand correlation calculation is around $100$ times faster when the default number of five maximum fails is used. Below we describe our two alternative methods for calculating these correlations in a distributed manner depending on the type of partitioning used.

Horizontal Partitioning {#subsec:horizontalPart}
-----------------------

Horizontal partitioning of the data may be the most natural way to distribute work between the nodes of a cluster. If we consider the default layout where the data is represented as a matrix $D$ in which the columns represent the different features and the rows represent the instances, then it is natural to distribute the matrix by assigning different groups of rows to nodes in the cluster. If we represent this matrix as an RDD, this is exactly what Spark will automatically do. 
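The on-demand scheme can be sketched as a memoizing cache around the correlation function; the class and names below are illustrative, not taken from the actual implementation:

```python
# Hypothetical sketch of on-demand correlation computation: a correlation
# is computed lazily the first time a feature pair is requested during the
# best-first search, instead of precomputing all (m+1 choose 2) pairs.
class CorrelationCache:
    def __init__(self, corr_fn):
        self.corr_fn = corr_fn   # e.g., symmetrical uncertainty
        self.cache = {}

    def get(self, i, j):
        key = (min(i, j), max(i, j))     # correlation is symmetric
        if key not in self.cache:
            self.cache[key] = self.corr_fn(*key)
        return self.cache[key]

calls = []
# The toy correlation function records each actual computation.
cache = CorrelationCache(lambda i, j: calls.append((i, j)) or abs(i - j))
cache.get(2, 1); cache.get(1, 2); cache.get(0, 3)
print(len(calls))  # 2: the (1, 2) pair was computed only once
```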
Once the data is partitioned, Algorithm \[alg:cFS\] (omitting line \[lin:allCorrs\]) can be started on the driver. The distributed work will be performed on line \[lin:expand\], where the best subset in the queue is expanded and, depending on this subset and the state of the search, correlations for a number $nc$ of new feature pairs will be required to evaluate the resulting subsets. Thus, the most complex step is the calculation of the corresponding $nc$ contingency tables that will allow us to obtain the entropies and conditional entropies that constitute the symmetrical uncertainty correlation (see Equation (\[eq:su\])). These $nc$ contingency tables are partially calculated locally by the workers following Algorithm \[alg:localCTables\]. As can be observed, the algorithm loops through all the local rows, counting the values of the features contained in *pairs* (declared in line \[lin:pairs\]) and storing the results in a map holding the feature pairs as keys and the contingency tables as their matching values. The next step is to merge the contingency tables from all the workers to obtain global results. Since these tables hold simple value counts, they can easily be aggregated by performing an element-wise sum of the corresponding tables. These steps are summarized in Equation (\[eq:cTables\]), where $CTables$ is an RDD of keys and values, and where each key corresponds to a feature pair and each value to a contingency table. 
$pairs \leftarrow$ $nc$ pairs of features \[lin:pairs\]
$rows \leftarrow$ local rows of $partition$
$m \leftarrow$ number of columns (features in $D$)
$ctables \leftarrow$ a map from each pair to an empty contingency table
**for** each row $r$ in $rows$ **do**
    **for** each pair $(x,y)$ in $pairs$ **do**
        $ctables(x,y)(r(x),r(y))$ += $1$
**return** $ctables$

$$\begin{aligned}
\label{eq:cTables}
pairs &= \left \{ (feat_a, feat_b), \cdots, (feat_x, feat_y) \right \} \nonumber \\
nc &= \left | pairs \right | \nonumber \\
CTables &= D.mapPartitions(localCTables(pairs)).reduceByKey(sum) \nonumber \\
CTables &= \begin{bmatrix}
((feat_a, feat_b), ctable_{a,b})\\
\vdots \\
((feat_x, feat_y), ctable_{x,y})\\
\end{bmatrix}_{nc \times 1} \nonumber \\\end{aligned}$$

Once the contingency tables have been obtained, the calculation of the entropies and conditional entropies is straightforward since all the information necessary for each calculation is contained in a single row of the $CTables$ RDD. This calculation can therefore be performed in parallel by processing the local rows of this RDD. Once the distributed calculation of the correlations is complete, control returns to the driver, which continues execution of line \[lin:expand\] in Algorithm \[alg:cFS\]. As can be observed, the distributed work only happens when new correlations are needed, and this occurs in only two cases: (i) when new pairs of features need to be evaluated during the search, and (ii) at the end of the execution if the user requests the addition of locally predictive features. To sum up, every iteration in Algorithm \[alg:cFS\] expands the current best subset and obtains a group of subsets for evaluation. This evaluation requires a merit, and the merit for each subset is obtained according to Figure \[fig:horizontalPartResume\], which illustrates the most important steps in the horizontal partitioning scheme using a case where correlations between features f2 and f1 and between f2 and f3 are calculated in order to evaluate a subset. 
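A minimal single-machine sketch of this pipeline (hypothetical code, with plain Python lists standing in for RDD partitions and direct function calls standing in for $mapPartitions$ and $reduceByKey$) computes the local contingency tables, merges them by element-wise sum, and derives the symmetrical uncertainty, assuming the standard definition $SU = 2 \cdot IG / (H(X) + H(Y))$ with $IG = H(X) + H(Y) - H(X,Y)$:

```python
# Hypothetical sketch of the horizontal scheme: local contingency tables
# per partition (as in Algorithm localCTables), element-wise merge (the
# reduceByKey step), then symmetrical uncertainty from the merged table.
from collections import Counter
from math import log2

def local_ctables(rows, pairs):
    tables = {p: Counter() for p in pairs}
    for r in rows:
        for (x, y) in pairs:
            tables[(x, y)][(r[x], r[y])] += 1   # count value co-occurrences
    return tables

def merge(t1, t2):
    # Element-wise sum of the counts, one table per feature pair.
    return {p: t1[p] + t2[p] for p in t1}

def symmetrical_uncertainty(ctable):
    n = sum(ctable.values())
    px, py = Counter(), Counter()
    for (x, y), c in ctable.items():            # marginal counts
        px[x] += c
        py[y] += c
    h = lambda cnt: -sum(c / n * log2(c / n) for c in cnt.values())
    hx, hy = h(px), h(py)
    hxy = h(ctable)                             # joint entropy H(X, Y)
    return 2.0 * (hx + hy - hxy) / (hx + hy)    # 2 * IG / (H(X) + H(Y))

# Two "partitions" of (feature_0, feature_1) rows.
partitions = [[(0, 0), (0, 1), (1, 1)], [(1, 0), (1, 1), (0, 0)]]
locals_ = [local_ctables(p, [(0, 1)]) for p in partitions]
ctable = merge(*locals_)[(0, 1)]
print(round(symmetrical_uncertainty(ctable), 3))  # 0.082
```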
![Horizontal partitioning steps for a small dataset D to obtain the correlations needed to evaluate a features subset[]{data-label="fig:horizontalPartResume"}](fig01.eps){width="100.00000%"}

Vertical Partitioning {#subsec:vecticalPart}
---------------------

Vertical partitioning has already been proposed in Spark by Ramírez-Gallego et al. [@Ramirez-Gallego2017], using another important FS filter, mRMR. Although mRMR is a ranking algorithm (it does not select subsets), it also requires the calculation of information theory measures such as entropies and conditional entropies between features. Since data is distributed horizontally by Spark, those authors propose two main operations to perform the vertical distribution:

- *Columnar transformation*. Rather than use the traditional format whereby the dataset is viewed as a matrix whose columns represent features and rows represent instances, a transposed version is used in which the data represented as an RDD is distributed by features and not by instances, in such a way that the data for a specific feature will in most cases be stored and processed by the same node. Figure \[fig:columnarTrans\], based on Ramírez-Gallego et al. [@Ramirez-Gallego2017], explains the process using an example based on a dataset with two partitions, seven instances and four features.

- *Feature broadcasting*. Because features must be processed in pairs to calculate conditional entropies and because different features can be stored in different nodes, some features are broadcast over the cluster so all nodes can access and evaluate them along with the other stored features. 
![Example of a columnar transformation of a small dataset with two partitions, seven instances and four features (from [@Ramirez-Gallego2017])[]{data-label="fig:columnarTrans"}](fig02.eps){width="100.00000%"} In the case of the adapted mRMR [@Ramirez-Gallego2017], since every step in the search requires the comparison of a single feature with a group of remaining features, it proves efficient, at each step, to broadcast this single feature (rather than multiple features). In the case of the CFS, the core issue is that, at any point in the search when expansion is performed, if the size of subset being expanded is $k$, then the correlations between the $m-k$ remaining features and $k-1$ features in the subset being expanded have already been calculated in previous steps; consequently, only the correlations between the most recently added feature and the $m-k$ remaining features are missing. Therefore, the proposed operations can be applied efficiently in the CFS just by broadcasting the most recently added feature. The disadvantages of vertical partitioning are that (i) it requires an extra processing step to change the original layout of the data and this requires shuffling, (ii) it needs data transmission to broadcast a single feature in each search step, and (iii) the fact that, by default, the dataset is divided into a number of partitions equal to the number of features $m$ in the dataset may not be optimal for all cases (while this parameter can be tuned, it can never exceed $m$). The main advantage of vertical partitioning is that the data layout and the broadcasting of the compared feature move all the information needed to calculate the contingency table to the same node, which means that this information can be more efficiently processed locally. Another advantage is that the whole dataset does not need to be read every time a new set of features has to be compared, since the dataset can be filtered by rows to process only the required features. 
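The layout change itself can be sketched without Spark; the following hypothetical code flattens horizontally partitioned rows and rebuilds them as one column vector per feature (in Spark, this redistribution is what requires the shuffle):

```python
# Hypothetical sketch of the columnar transformation: instance-partitioned
# rows are transposed into a feature-indexed layout, so each entry holds
# the complete column for one feature.
partitions = [
    [[1, 2, 3, 4],      # partition 1: two instances, four features
     [5, 6, 7, 8]],
    [[9, 10, 11, 12]],  # partition 2: one instance
]

def to_columnar(parts):
    rows = [r for p in parts for r in p]
    m = len(rows[0])
    # feature index -> column vector (all instance values for that feature)
    return {f: [r[f] for r in rows] for f in range(m)}

columns = to_columnar(partitions)
print(columns[0])  # [1, 5, 9] -- the full column for feature 0
```

With this layout, comparing one broadcast feature against a stored feature only needs the two column vectors held by the local node.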
Due to the nature of the search strategy (best-first) used in the CFS, the first search step will always involve all features, so no filtering can be performed. For each subsequent step, only one more feature can be filtered out. This is especially important with high-dimensionality datasets: because the number of features is much higher than the number of search steps, the percentage of features that can be filtered out is small. We performed a number of experiments to quantify the effects of the advantages and disadvantages of each approach and to check the conditions in which one approach was better than the other.

Experiments {#sec:experiments}
===========

The experiments tested and compared time-efficiency and scalability for the horizontal and vertical DiCFS approaches so as to check whether they improved on the original non-distributed version of the CFS. We also compared execution times with those reported in the recently published research by Eiras-Franco et al. [@Eiras-Franco2016] into a distributed version of the CFS for regression problems. Note that no experiments were needed to compare the quality of the results for the distributed and non-distributed CFS versions, as the distributed versions were designed to return the same results as the original algorithm. For our experiments, we used a single master node and up to ten slave nodes from the big data platform of the Galician Supercomputing Technological Centre (CESGA).[^4] The nodes have the following configuration:

- CPU: 2 X Intel Xeon E5-2620 v3 @ 2.40GHz
- CPU Cores: 12 (2X6)
- Total Memory: 64 GB
- Network: 10GbE
- Master Node Disks: 8 X 480GB SSD SATA 2.5" MLC G3HS
- Slave Node Disks: 12 X 2TB NL SATA 6Gbps 3.5" G2HS
- Java version: OpenJDK 1.8
- Spark version: 1.6
- Hadoop (HDFS) version: 2.7.1
- WEKA version: 3.8.1

The experiments were run on four large-scale publicly available datasets. 
The ECBDL14 [@Bacardit2012] dataset, from the protein structure prediction field, was used in the ECBDL14 Big Data Competition included in the GECCO’2014 international conference. This dataset has approximately 33.6 million instances, 631 attributes and 2 classes, is made up of about 98% negative examples, and occupies about 56GB of disk space. HIGGS [@Sadowski2014], from the UCI Machine Learning Repository [@Lichman2013], is a recent dataset representing a classification problem that distinguishes between a signal process which produces Higgs bosons and a background process which does not. KDDCUP99 [@Ma2009] represents data from network connections and classifies them as normal connections or different types of attacks (a multi-class problem). Finally, EPSILON is an artificial dataset built for the Pascal Large Scale Learning Challenge in 2008.[^5] Table \[tbl:datasets\] summarizes the main characteristics of the datasets.

[P[1in]{}P[0.7in]{}P[0.7in]{}P[0.7in]{}P[0.7in]{}]{}
Dataset & No. of Samples ($\times 10^{6}$) & No. of Features & Feature Types & Problem Type\
ECBDL14 [@Bacardit2012] & $\sim$33.6 & 632 & Numerical, Categorical & Binary\
HIGGS [@Sadowski2014] & 11 & 28 & Numerical & Binary\
KDDCUP99 [@Ma2009] & $\sim$5 & 42 & Numerical, Categorical & Multiclass\
EPSILON & 0.5 & 2,000 & Numerical & Binary\

With respect to algorithm parameter configuration, two defaults were used in all the experiments: the inclusion of locally predictive features and the use of five consecutive fails as a stopping criterion. These defaults apply to both distributed and non-distributed versions. Moreover, for the vertical partitioning version, the number of partitions was equal to the number of features, as set by default in Ramírez-Gallego et al. [@Ramirez-Gallego2017]. The horizontally and vertically distributed versions of the CFS are labelled DiCFS-hp and DiCFS-vp, respectively. 
We first compared execution times for the four algorithms in the datasets using ten slave nodes with all their cores available. For the case of the non-distributed version of the CFS, we used the implementation provided in the WEKA platform [@Hall2009a]. The results are shown in Figure \[fig:execTimeVsNInsta\].

![Execution time with respect to percentages of instances in four datasets, for DiCFS-hp and DiCFS-vp using ten nodes and for a non-distributed implementation in WEKA using a single node[]{data-label="fig:execTimeVsNInsta"}](fig03.eps){width="100.00000%"}

Note that, with the aim of offering a comprehensive view of execution time behaviour, Figure \[fig:execTimeVsNInsta\] shows results for sizes larger than 100% of the datasets. To achieve these sizes, the instances in each dataset were duplicated as many times as necessary. Note also that, since ECBDL14 is a very large dataset, its temporal scale is different from that of the other datasets. Regarding the non-distributed version of the CFS, Figure \[fig:execTimeVsNInsta\] does not show results for WEKA in the experiments on the ECBDL14 dataset, because it was impossible to execute that version on the CESGA platform: its memory requirements exceeded the available limits. This also occurred with the larger samples from the EPSILON dataset for both algorithms: DiCFS-vp and DiCFS-hp. Although it was possible to execute the WEKA version on the two smallest samples of the EPSILON dataset, these results are not shown because the execution times were too high (19 and 69 minutes, respectively). Figure \[fig:execTimeVsNInsta\] shows successful results for the smaller HIGGS and KDDCUP99 datasets, which could still be processed in a single node of the cluster, as required by the non-distributed version. However, even in the case of these smaller datasets, the execution times of the WEKA version were worse than those of the distributed versions. 
Regarding the distributed versions, DiCFS-vp was unable to process the oversized versions of the ECBDL14 dataset, due to the large amounts of memory required to perform shuffling. In the case of the HIGGS and KDDCUP99 datasets, however, the difference increasingly favored DiCFS-hp, because these datasets have far fewer features than ECBDL14 and EPSILON. As mentioned earlier, DiCFS-vp ties parallelization to the number of features in the dataset, so datasets with small numbers of features were not able to fully leverage the cluster nodes. Another view of the same issue is given by the results for the EPSILON dataset; in this case, DiCFS-vp obtained the best execution times for the 300% sized and larger datasets. This was because there were too many partitions (2,000) for the number of instances available in the datasets smaller than the 300% size; further experiments showed that adjusting the number of partitions to 100 reduced the execution time of DiCFS-vp for the 100% EPSILON dataset from about 2 minutes to 1.4 minutes (faster than DiCFS-hp). Reducing the number of partitions further, however, caused the execution time to start increasing again. Figure \[fig:execTimeVsNFeats\] shows the results for similar experiments, except that this time the percentage of features in the datasets was varied and the features were copied to obtain oversized versions of the datasets. It can be observed that the number of features had a greater impact on the memory requirements of DiCFS-vp. This caused problems not only in processing the ECBDL14 dataset but also the EPSILON dataset. We can also see quadratic time complexity in the number of features and how the temporal scale in the EPSILON dataset (with the highest number of dimensions) matches that of the ECBDL14 dataset. 
As for the KDDCUP99 dataset, the results show that increasing the number of features yielded a better level of parallelization and a slightly better execution time for DiCFS-vp compared to DiCFS-hp for the 400% dataset version and above.

![Execution times with respect to different percentages of features in four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:execTimeVsNFeats"}](fig04.eps){width="100.00000%"}

An important measure of the scalability of an algorithm is *speed-up*, which indicates how well an algorithm can leverage a growing number of nodes to reduce execution times. We used the speed-up definition shown in Equation (\[eq:speedup\]) and used all the available cores for each node (i.e., 12). The experimental results are shown in Figure \[fig:speedup\], where it can be observed that, for all four datasets, DiCFS-hp scales better than DiCFS-vp. It can also be observed that the HIGGS and KDDCUP datasets are too small to take advantage of more than two nodes, and that practically no speed-up improvement is obtained from adding further nodes. To summarize, our experiments show that even when vertical partitioning results in shorter execution times (the case in certain circumstances, e.g., when the dataset has an adequate number of features and instances for optimal parallelization according to the cluster resources), the benefits are not significant and may even be eclipsed by the effort invested in determining whether this approach is indeed the most efficient approach for a particular dataset or a particular hardware configuration or in fine-tuning the number of partitions. Horizontal partitioning should therefore be considered as the best option in the general case. 
$$\label{eq:speedup}
speedup(m)=\left[ \frac { execution\quad time\quad on\quad 2\quad nodes }{execution\quad time\quad on\quad m\quad nodes } \right]$$

![Speed-up for four datasets for DiCFS-hp and DiCFS-vp[]{data-label="fig:speedup"}](fig05.eps){width="100.00000%"}

We also compared the DiCFS-hp approach with that of Eiras-Franco et al. [@Eiras-Franco2016], who described a Spark-based distributed version of the CFS for regression problems. The comparison was based on their experiments with the HIGGS and EPSILON datasets but using our current hardware. Those datasets were selected because they contain only numerical features and so can naturally be treated as regression problems. Table \[tbl:speedUp\] shows execution time and speed-up values obtained for different sizes of both datasets for both distributed and non-distributed versions and considering them to be classification and regression problems. Regression-oriented versions for the Spark and WEKA implementations are labelled RegCFS and RegWEKA, respectively; the number after the dataset name represents the sample-size percentage, and the letter indicates whether instances (*i*) or features (*f*) were removed or added. In the case of oversized samples, the method used was the same as described above, i.e., features or instances were copied as necessary. The experiments were performed using ten cluster nodes for the distributed versions and a single node for the WEKA version. The resulting speed-up was calculated as the WEKA execution time divided by the corresponding Spark execution time. The original experiments in [@Eiras-Franco2016] were performed only using EPSILON\_50i and HIGGS\_100i. It can be observed that much better speed-up was obtained by the DiCFS-hp version for EPSILON\_50i, but in the case of HIGGS\_100i, the resulting speed-up in the classification version was lower than in the regression version. 
However, in order to have a better comparison, two more versions of each dataset were considered. Table \[tbl:speedUp\] shows that the DiCFS-hp version achieves better speed-up in all cases except the HIGGS\_100i dataset mentioned before.

  Dataset         WEKA (s)   RegWEKA (s)   DiCFS-hp (s)   RegCFS (s)   Speed-up RegCFS   Speed-up DiCFS-hp
  -------------- ---------- ------------- -------------- ------------ ----------------- -------------------
  EPSILON\_25i    1011.42    655.56        58.85          63.61        10.31             17.19
  EPSILON\_25f    393.91     703.95        25.83          55.08        12.78             15.25
  EPSILON\_50i    4103.35    2228.64       76.98          110.13       20.24             53.30
  HIGGS\_100i     182.86     327.61        21.34          23.70        13.82             8.57
  HIGGS\_200i     2079.58    475.98        28.89          26.77        17.78             71.99
  HIGGS\_200f     934.07     720.32        21.42          34.35        20.97             43.61

  : Execution time and speed-up values for different CFS versions for regression and classification[]{data-label="tbl:speedUp"}

Conclusions and Future Work {#sec:conclusions}
===========================

We describe two parallel and distributed versions of the CFS filter-based FS algorithm using the Apache Spark programming model: DiCFS-vp and DiCFS-hp. These two versions essentially differ in how the dataset is distributed across the nodes of the cluster. The first version distributes the data by splitting rows (instances) and the second version, following Ramírez-Gallego et al. [@Ramirez-Gallego2017], distributes the data by splitting columns (features). As the outcome of a four-way comparison of DiCFS-vp and DiCFS-hp, a non-distributed implementation in WEKA and a distributed regression version in Spark, we can conclude as follows:

- As was expected, both DiCFS-vp and DiCFS-hp were able to handle larger datasets in a much more time-efficient manner than the classical WEKA implementation. Moreover, in many cases they were the only feasible way to process certain types of datasets because of prohibitive WEKA memory requirements. 
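As a quick consistency check (hypothetical code; it assumes the first and third numeric columns of Table \[tbl:speedUp\] are the WEKA and DiCFS-hp execution times in seconds), both speed-up definitions used in this section can be reproduced directly:

```python
# Hypothetical sketch of the two speed-up definitions used in the paper.
def speedup_vs_two_nodes(time_on_2_nodes, time_on_m_nodes):
    # Scalability speed-up from Eq. (speedup): the baseline is the 2-node time.
    return time_on_2_nodes / time_on_m_nodes

def speedup_vs_weka(weka_time, spark_time):
    # Table speed-up: WEKA time divided by the corresponding Spark time.
    return weka_time / spark_time

# Assumed HIGGS_200i times: WEKA 2079.58 s, DiCFS-hp 28.89 s.
print(round(speedup_vs_weka(2079.58, 28.89), 2))  # 71.98, close to the 71.99 in the table
```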
- Of the horizontal and vertical partitioning schemes, the horizontal version (DiCFS-hp) proved to be the better option in the general case due to its better scalability and its natural partitioning mode that enables the Spark framework to make better use of cluster resources.

- For classification problems, the benefits obtained from distribution compared to the non-distributed version can be considered equal to or even better than the benefits already demonstrated for the regression domain [@Eiras-Franco2016].

Regarding future research, an especially interesting line is whether it is necessary for this kind of algorithm to process all the data available or whether it would be possible to design automatic sampling procedures that could guarantee that, under certain circumstances, equivalent results could be obtained. In the case of the CFS, this question becomes more pertinent in view of the study of symmetrical uncertainty in datasets with up to 20,000 samples by Hall [@Hall1999], where tests showed that symmetrical uncertainty decreased exponentially with the number of instances and then stabilized at a certain number. Another line of future work could be research into different data partitioning schemes that could, for instance, improve the locality of data while overcoming the disadvantages of vertical partitioning.

Acknowledgements {#acknowledgements .unnumbered}
================

The authors thank CESGA for use of their supercomputing resources. This research has been partially supported by the Spanish Ministerio de Economía y Competitividad (research projects TIN 2015-65069-C2-1R, TIN2016-76956-C3-3-R), the Xunta de Galicia (Grants GRC2014/035 and ED431G/01) and the European Union Regional Development Funds. R. Palma-Mendoza holds a scholarship from the Spanish Fundación Carolina and the National Autonomous University of Honduras.

[10]{} D. W. Aha, D. Kibler, M. K. 
Albert, [Instance-Based Learning Algorithms]{}, Machine Learning 6 (1) (1991) 37–66. [](http://dx.doi.org/10.1023/A:1022689900470). J. Bacardit, P. Widera, A. M[á]{}rquez-chamorro, F. Divina, J. S. Aguilar-Ruiz, N. Krasnogor, [Contact map prediction using a large-scale ensemble of rule sets and the fusion of multiple predicted structural features]{}, Bioinformatics 28 (19) (2012) 2441–2448. [](http://dx.doi.org/10.1093/bioinformatics/bts472). R. Bellman, [[Dynamic Programming]{}]{}, Rand Corporation research study, Princeton University Press, 1957. V. Bol[ó]{}n-Canedo, N. S[á]{}nchez-Maro[ñ]{}o, A. Alonso-Betanzos, [[Distributed feature selection: An application to microarray data classification]{}]{}, Applied Soft Computing 30 (2015) 136–150. [](http://dx.doi.org/10.1016/j.asoc.2015.01.035). V. Bol[ó]{}n-Canedo, N. S[á]{}nchez-Maro[ñ]{}o, A. Alonso-Betanzos, [Recent advances and emerging challenges of feature selection in the context of big data]{}, Knowledge-Based Systems 86 (2015) 33–45. [](http://dx.doi.org/10.1016/j.knosys.2015.05.014). M. Dash, H. Liu, [[Consistency-based search in feature selection]{}]{}, Artificial Intelligence 151 (1-2) (2003) 155–176. [](http://dx.doi.org/10.1016/S0004-3702(03)00079-1). <http://linkinghub.elsevier.com/retrieve/pii/S0004370203000791> J. Dean, S. Ghemawat, [MapReduce: Simplied Data Processing on Large Clusters]{}, Proceedings of 6th Symposium on Operating Systems Design and Implementation (2004) 137–149[](http://arxiv.org/abs/10.1.1.163.5292), [](http://dx.doi.org/10.1145/1327452.1327492). J. Dean, S. Ghemawat, [[MapReduce: Simplified Data Processing on Large Clusters]{}]{}, Communications of the ACM 51 (1) (2008) 107. <http://dl.acm.org/citation.cfm?id=1327452.1327492> R. O. Duda, P. E. Hart, D. G. Stork, [[Pattern Classification]{}]{}, John Wiley [&]{} Sons, 2001. C. Eiras-Franco, V. Bol[ó]{}n-Canedo, S. Ramos, J. Gonz[á]{}lez-Dom[í]{}nguez, A. Alonso-Betanzos, J. 
Touri[ñ]{}o, [[Multithreaded and Spark parallelization of feature selection filters]{}]{}, Journal of Computational Science 17 (2016) 609–619. [](http://dx.doi.org/10.1016/j.jocs.2016.07.002). U. M. Fayyad, K. B. Irani, [[Multi-Interval Discretization of Continuos-Valued Attributes for Classification Learning]{}]{} (1993). <http://trs-new.jpl.nasa.gov/dspace/handle/2014/35171> D. J. Garcia, L. O. Hall, D. B. Goldgof, K. Kramer, [A Parallel Feature Selection Algorithm from Random Subsets]{} (2004). E. E. Ghiselli, [[Theory of Psychological Measurement]{}]{}, McGraw-Hill series in psychology, McGraw-Hill, 1964. <https://books.google.es/books?id=mmh9AAAAMAAJ> I. Guyon, A. Elisseeff, [An Introduction to Variable and Feature Selection]{}, Journal of Machine Learning Research (JMLR) 3 (3) (2003) 1157–1182. [](http://arxiv.org/abs/1111.6189v1), [](http://dx.doi.org/10.1016/j.aca.2011.07.027). M. A. Hall, [Correlation-based feature selection for machine learning]{}, PhD Thesis., Department of Computer Science, Waikato University, New Zealand (1999). [](http://dx.doi.org/10.1.1.37.4643). M. A. Hall, [[Correlation-based Feature Selection for Discrete and Numeric Class Machine Learning]{}]{} (2000) 359–366. <http://dl.acm.org/citation.cfm?id=645529.657793> M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, I. Witten, [The WEKA data mining software: An update]{}, SIGKDD Explorations 11 (1) (2009) 10–18. [](http://dx.doi.org/10.1145/1656274.1656278). T. K. Ho, [[Random Decision Forests]{}]{}, in: Proceedings of the Third International Conference on Document Analysis and Recognition (Volume 1) - Volume 1, ICDAR ’95, IEEE Computer Society, Washington, DC, USA, 1995, pp. 278—-. <http://dl.acm.org/citation.cfm?id=844379.844681> A. Idris, A. Khan, Y. S. Lee, [Intelligent churn prediction in telecom: Employing mRMR feature selection and RotBoost based ensemble classification]{}, Applied Intelligence 39 (3) (2013) 659–672. [](http://dx.doi.org/10.1007/s10489-013-0440-x). A. 
Idris, M. Rizwan, A. Khan, [Churn prediction in telecom using Random Forest and PSO based data balancing in combination with various feature selection strategies]{}, Computers and Electrical Engineering 38 (6) (2012) 1808–1819. [](http://dx.doi.org/10.1016/j.compeleceng.2012.09.001). I. Kononenko, [[Estimating attributes: Analysis and extensions of RELIEF]{}]{}, Machine Learning: ECML-94 784 (1994) 171–182. [](http://dx.doi.org/10.1007/3-540-57868-4). <http://www.springerlink.com/index/10.1007/3-540-57868-4> J. Kubica, S. Singh, D. Sorokina, [[Parallel Large-Scale Feature Selection]{}]{}, in: Scaling Up Machine Learning, no. February, 2011, pp. 352–370. [](http://dx.doi.org/10.1017/CBO9781139042918.018). <http://ebooks.cambridge.org/ref/id/CBO9781139042918A143> J. Leskovec, A. Rajaraman, J. D. Ullman, [[Mining of Massive Datasets]{}]{}, 2014. [](http://dx.doi.org/10.1017/CBO9781139924801). <http://ebooks.cambridge.org/ref/id/CBO9781139924801> M. Lichman, [[UCI Machine Learning Repository]{}](http://archive.ics.uci.edu/ml) (2013). <http://archive.ics.uci.edu/ml> J. Ma, L. K. Saul, S. Savage, G. M. Voelker, [Identifying Suspicious URLs : An Application of Large-Scale Online Learning]{}, in: Proceedings of the International Conference on Machine Learning (ICML), Montreal, Quebec, 2009. R. J. Palma-Mendoza, D. Rodriguez, L. De-Marcos, [[Distributed ReliefF-based feature selection in Spark]{}](http://link.springer.com/10.1007/s10115-017-1145-y), Knowledge and Information Systems (2018) 1–20[](http://dx.doi.org/10.1007/s10115-017-1145-y). <http://link.springer.com/10.1007/s10115-017-1145-y> H. Peng, F. Long, C. Ding, [[Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.]{}]{}, IEEE transactions on pattern analysis and machine intelligence 27 (8) (2005) 1226–38. [](http://dx.doi.org/10.1109/TPAMI.2005.159). <http://www.ncbi.nlm.nih.gov/pubmed/16119262> D. Peralta, S. del R[í]{}o, S. Ram[í]{}rez-Gallego, I. 
Riguero, J. M. Benitez, F. Herrera, [[Evolutionary Feature Selection for Big Data Classification: A MapReduce Approach ]{}]{}, Mathematical Problems in Engineering 2015 (JANUARY). [](http://dx.doi.org/10.1155/2015/246139). W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, [Numerical recipes in C]{}, Vol. 2, Cambridge Univ Press, 1982. J. R. Quinlan, [[Induction of Decision Trees]{}](http://dx.doi.org/10.1023/A:1022643204877), Mach. Learn. 1 (1) (1986) 81–106. [](http://dx.doi.org/10.1023/A:1022643204877). <http://dx.doi.org/10.1023/A:1022643204877> J. R. Quinlan, [[C4.5: Programs for Machine Learning]{}](http://portal.acm.org/citation.cfm?id=152181), Vol. 1, 1992. [](http://dx.doi.org/10.1016/S0019-9958(62)90649-6). <http://portal.acm.org/citation.cfm?id=152181> S. Ram[í]{}rez-Gallego, I. Lastra, D. Mart[í]{}nez-Rego, V. Bol[ó]{}n-Canedo, J. M. Ben[í]{}tez, F. Herrera, A. Alonso-Betanzos, [[Fast-mRMR: Fast Minimum Redundancy Maximum Relevance Algorithm for High-Dimensional Big Data]{}]{}, International Journal of Intelligent Systems 32 (2) (2017) 134–152. [](http://dx.doi.org/10.1002/int.21833). <http://doi.wiley.com/10.1002/int.21833> I. Rish, [An empirical study of the naive Bayes classifier]{}, in: IJCAI 2001 workshop on empirical methods in artificial intelligence, Vol. 3, IBM, 2001, pp. 41–46. P. Sadowski, P. Baldi, D. Whiteson, [Searching for Higgs Boson Decay Modes with Deep Learning]{}, Advances in Neural Information Processing Systems 27 (Proceedings of NIPS) (2014) 1–9. J. Silva, A. Aguiar, F. Silva, [[Parallel Asynchronous Strategies for the Execution of Feature Selection Algorithms]{}]{}, International Journal of Parallel Programming (2017) 1–32[](http://dx.doi.org/10.1007/s10766-017-0493-2). <http://link.springer.com/10.1007/s10766-017-0493-2> V. Vapnik, [The Nature of Statistical Learning Theory]{} (1995). Y. Wang, W. Ke, X. 
Tao, [[A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark]{}]{}, Information 7 (1) (2016) 6. [](http://dx.doi.org/10.3390/info7010006). <http://www.mdpi.com/2078-2489/7/1/6> , [Xingquan Zhu]{}, [Gong-Qing Wu]{}, [Wei Ding]{}, [[Data mining with big data]{}](http://ieeexplore.ieee.org/document/6547630/), IEEE Transactions on Knowledge and Data Engineering 26 (1) (2014) 97–107. [](http://dx.doi.org/10.1109/TKDE.2013.109). <http://ieeexplore.ieee.org/document/6547630/> M. Zaharia, M. Chowdhury, T. Das, A. Dave, [[Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing]{}]{}, NSDI’12 Proceedings of the 9th USENIX conference on Networked Systems Design and Implementation (2012) 2[](http://arxiv.org/abs/EECS-2011-82), [](http://dx.doi.org/10.1111/j.1095-8649.2005.00662.x). M. Zaharia, M. Chowdhury, M. J. Franklin, S. Shenker, I. Stoica, [Spark : Cluster Computing with Working Sets]{}, HotCloud’10 Proceedings of the 2nd USENIX conference on Hot topics in cloud computing (2010) 10[](http://dx.doi.org/10.1007/s00256-009-0861-0). Z. Zhao, H. Liu, [Searching for interacting features]{}, IJCAI International Joint Conference on Artificial Intelligence (2007) 1156–1161[](http://dx.doi.org/10.3233/IDA-2009-0364). Z. Zhao, R. Zhang, J. Cox, D. Duling, W. Sarle, [[Massively parallel feature selection: an approach based on variance preservation]{}]{}, Machine Learning 92 (1) (2013) 195–220. [](http://dx.doi.org/10.1007/s10994-013-5373-4). <http://link.springer.com/10.1007/s10994-013-5373-4> [^1]: <https://spark-packages.org> [^2]: <https://github.com/rauljosepalma/DiCFS> [^3]: <http://www.sas.com/en_us/software/high-performance-analytics.html> [^4]: <http://bigdata.cesga.es/> [^5]: <http://largescale.ml.tu-berlin.de/about/>
{ "pile_set_name": "ArXiv" }
Effects of alcohol on prolonged cognitive performance measured with Stroop's Color Word Test. 24 men and 24 women were randomly assigned in equal numbers to an Alcohol group, a Placebo group, or a Control group. The alcohol dose was 1.0 ml of 100% alcohol/kg of body weight. Subjects were tested three consecutive times using Stroop's Color Word Test. The dependent measures were total time needed to complete the test, number of errors made and number of hesitations. Data were grouped into three blocks of 100 words. Results indicated that number of hesitations was too insensitive a measure to yield any significant effects. On the first two measures alcohol had a detrimental effect in that the Alcohol group needed more time to complete the test and made more errors than the Placebo group. There was also a significant interaction of alcohol dose by sex by blocks on both these measures, indicating that the detrimental effect of alcohol over time was restricted to women. Different implications of the results were discussed.
{ "pile_set_name": "PubMed Abstracts" }
Fighting for a climate change treaty

A NASA study finds the amount of ozone destroyed in the Arctic in 2011, shown in this image, was comparable to the ozone 'hole' that formed each spring since the mid-1980s [EPA]

In 1974, chemists Mario Molina and Frank Sherwood Rowland published a landmark article that demonstrated the ability of chlorofluorocarbons (CFCs) to break down the ozone layer, the atmospheric region that plays a vital role in shielding humans and other life from harmful ultraviolet (UV) radiation. It marked the opening salvo of a decade-long fight to phase out and ban the use of these widespread industrial compounds. The period between Molina and Rowland's article and the establishment of an international agreement to regulate CFCs was remarkably similar to current climate change politics. It included calls for scientific consensus before moving on the issue, industry pushback, fears over economic chaos, claims of inadequate chemical substitutes, difficulty in getting industrialised nations to the table, and debates and diplomacy over how to get developing nations to agree to regulate a problem predominantly caused by the industrialised world. Together, these issues created a political climate that was anything but conducive to an agreement for avoiding environmental catastrophe. And yet an agreement was reached. CFC production was greatly curtailed and disaster was averted. The Montreal Protocol - initially signed by 24 nations in 1987 and now ratified by 196 countries - bound nations to a set of policies that would rapidly reduce the use of CFCs. It became the first global environmental treaty to implement the precautionary approach, mandating strong actions now to avert future damage. The protocol has since become, in the words of former UN secretary-general Kofi Annan, "perhaps the single most successful international environmental agreement."
It can also be called the first climate change treaty, since ozone-depleting substances are potent greenhouse gases. Lessons from the fight against and eventual ban of CFCs can illuminate our current struggles to regulate greenhouse gases and provide guidance toward creating a strong treaty necessary to stave off another environmental disaster.

An $8bn industry

For more than 40 years, the generally non-toxic and non-flammable compounds known as CFCs were widely produced and used in refrigerants, propellants, and solvents. They were first manufactured as a safe alternative to ammonia and sulphur dioxide in refrigeration in the early 1930s. Their widespread success, due to their unique and seemingly miraculous chemical properties, propelled an $8bn industry that employed 600,000 people directly and was reaching new heights of manufacturing at the time of Molina and Rowland's discovery. As CFC production swelled to meet the global demand for aerosols and refrigeration, so too did the release of these ozone-depleting compounds into the atmosphere. Unlike carbon dioxide, CFCs are a foreign element in the atmosphere. When released, CFC molecules rise and reach the ozone layer where they encounter UV radiation. The strong radiation breaks down these molecules into their simpler parts, most notably chlorine atoms. Molina and Rowland realised these now-free chlorine atoms could react with and deplete the ozone layer. The US Environmental Protection Agency estimates that one chlorine atom can destroy 100,000 ozone molecules. Continuing to produce CFCs at such high levels would inevitably have depleted more of the ozone layer and would have led to greater harm to humans from UV rays. Further studies concurred with Molina and Rowland's findings and predicted losses of ozone that would have greatly increased cases of skin cancer and eye damage. Other detrimental impacts included reduced productivity in plants and crops and harm to marine life and air quality.
The findings provoked wide-ranging reactions. Emboldened by the passage of the Clean Air and Clean Water Acts in the United States, the science and environmental communities wanted the US government to ban production and use of CFCs. They saw the depletion of the ozone layer as a grave, imminent threat that needed to be met with decisive action. The CFC industry, led by DuPont, which accounted for nearly 50 per cent of the market, attacked the theory as unfounded, arguing that no stratospheric ozone loss had been observed. DuPont and other CFC manufacturers lobbied extensively to prevent states from passing bills banning CFC use.

The 'ban-now-find-out-later' approach

DuPont also embarked on an advertising campaign to undermine the idea that CFCs damaged the ozone layer, while simultaneously arguing that any hasty restrictions would have a disastrous impact on businesses, jobs and the economy. DuPont's chairman, Irving Shapiro, announced to several major newspapers that "the 'ban-now-find-out-later' approach thrust upon an $8bn segment of industry, both in the headlines and in many legislative proposals, is a disturbing trend. Businesses can be destroyed before scientific facts are assembled and evaluated … The nation cannot afford to act on this and other issues before the full facts are known." Public health concerns, however, trumped industry arguments and consumers began boycotting aerosol sprays. Pressure from environmentalists and consumer groups resulted in a ban on aerosol sprays in 1978. In the end, though, the ban turned out to be only a partial victory for both sides. Nearly all sprays were banned, but numerous putatively "essential" uses of CFCs in air conditioners and refrigerators remained unregulated. The United States was the only major CFC-producing nation to voluntarily eliminate CFCs in aerosols, although relatively minor producers such as Canada, Denmark and Sweden soon followed suit.
And while European nations today are at the forefront of promoting climate change legislation, in the 1970s and 1980s, CFC-producing giants like England and France were reluctant to impose restrictions. After these initial efforts by individual nations, progress toward an international CFC agreement ground to a halt in the early 1980s. This was largely because protecting the ozone layer produced an unprecedented problem for human society. The public and governments were being told that the impacts of a thinning ozone layer would not be seen for decades. Yet in order to prevent much higher risks of skin cancer and cataracts, it was essential to act now and begin phasing out CFCs. Manufacturers continued to resist, arguing that in the absence of suitable substitutes, curtailing CFC production would result in significant job losses and a large reduction in the supply of air conditioners and refrigerators. They argued that action on CFCs would harm both the developed and developing world. On top of this, almost all nations would have to agree on a coordinated phase-out and eventual ban of the industrial compounds, since the release of CFCs by any one nation would have a global impact.

Delayed implementations

Producers of CFCs continued to wage a public battle against further regulation. Sceptics stepped up their public relations campaigns disputing the evidence, finding scientists to argue persuasively against the threat, and predicting dire economic consequences. The doubt did nothing to change the scientific consensus around CFCs and ozone depletion, but it helped to delay implementation of limits on CFCs for many years. While special interests were fighting it out in the public square, diplomacy was taking place behind the scenes.
Domestic and international workshops were assessing the CFC-ozone connection while proposing various regulations, compromises, and deals to get major CFC-producing nations and developing nations to the table to begin talks toward an international agreement. The United States and the UN Environment Programme played leading roles. The fruit of this diplomatic labour was the Vienna Convention of March 1985, which produced a framework agreement in which states agreed to cooperate in research and assessments of the ozone problem, to exchange information and to adopt measures to prevent harm to the ozone layer. But the accord fell far short of mandating actions to limit CFC production or of establishing a timetable to phase it out. Much like the current climate change debate, it looked as if action on the issue was about to be stymied by a lengthy political struggle. Two months later, scientists discovered the Antarctic ozone hole. From a climate change perspective, this would be comparable to a large ice sheet breaking off from an ice shelf, melting overnight and causing a small rise in sea level, thereby warning the world of the potential consequences of unchecked climate change. Scientists discovered that ozone levels over the Antarctic had dropped by 10 per cent during the winter and an ozone hole had begun to form. The ozone hole is an area with extremely low amounts of ozone, not an actual hole. But the discovery, the first startling proof of the thinning ozone layer, was an alarming wake-up call that human activities can have dire consequences for the atmosphere and in turn major health implications. Intense media attention galvanised public opinion and sparked fears that ozone holes might form over populated cities around the world. The EPA estimated that if CFC production continued to grow at 2.5 per cent a year until 2050, 150 million Americans would develop skin cancer, leading to some 3 million deaths by 2075. 
After the momentous discovery of ozone depletion, the balance shifted toward regulation. Industry at first still lobbied in private, but eventually began to change its position as scientific evidence of ozone depletion continued to mount. In the summer of 1987, as preparations were under way for the Montreal Conference on Substances that Deplete the Ozone Layer, the Reagan administration publicly came out in support of international limits on CFC production. This effectively put a stop to industry opposition and propelled an agreement among industrialised nations to reduce CFC production by 50 per cent by 2000. The resulting Montreal Protocol included a 10-year grace period and a fund for developing nations in order to get them to agree to regulate a problem largely generated by the industrialised world. The Multilateral Fund has since provided $2.7bn to developing nations for transitioning to better technology and CFC substitutes and for meeting phase-out obligations. The fund was the first financial instrument of its kind and is the model for the UN-REDD (Reducing Emissions from Deforestation and Forest Degradation) programme, in which industrial nations use carbon offsets to provide developing nations with an incentive for conserving their forests.

The Montreal Protocol

Since 1987, the Montreal Protocol has been strengthened with the addition of more ozone-damaging substances to the list and the compliance of nearly 200 countries. Ozone-depleting substances in the atmosphere hit their peak in 1997–98 and have been falling ever since. Action to protect the ozone layer has greatly improved air quality while reducing the future risk of skin cancer, cataracts, and blindness. Furthermore, the treaty has done more than any other to reduce climate change by stopping 135bn metric tonnes of CO2-equivalent emissions from escaping to the atmosphere in the last two decades. Due to the nature of CFCs, however, the ozone layer is still thinning in certain places.
This may well continue until the middle of the 21st century, at which point the ozone layer should begin to recover. The true significance of the international agreement is best illustrated by a NASA simulation of what would have occurred had CFC production continued at its pre-Montreal rate. By 2020, 17 per cent of global ozone would be destroyed. By 2040, the ozone thinning would affect the entire planet. And by 2065, atmospheric ozone would drop to 70 per cent below 1970s levels. As a result, there would have been a threefold increase in the amount of harmful UV radiation reaching the planet's surface, resulting in tens of millions of skin cancer and cataract cases and trillions in health care costs. Luckily, it is a fate we managed to avoid. The first and foremost lesson to take from the fight to ban CFCs is that it was successful. The discovery that human activity was harming the atmosphere influenced public opinion and consumer buying power enough to change national policy and provide momentum toward an international agreement that enacted regulations to prevent a future catastrophe. Nations agreed to take precautions that would cause some short-term difficulties in order to head off a long-term disaster. Secondly, health concerns were the driving motivator behind public and government action. Peter Morrisette argues that the passage of a meaningful ozone treaty relied on four key factors: Ozone depletion was viewed as a global problem; there was strong scientific understanding of the causes and effects of ozone depletion; there were public-health concerns about skin cancer, which were amplified by the ozone hole discovery; and substitutes for CFCs were available. Climate change is also viewed as a global problem and there is a nearly universal consensus among climate scientists over the causes.
Some argue that the major difference between obtaining a treaty back then and what hinders today's agreement is a lack of readily available substitutes in the form of alternative energy - wind, solar, electric - to take the place of fossil fuels.

International agreement

Yet the claim that no cost-effective, efficient substitutes were available was also made during the CFC debates. It was not until after the ozone hole discovery, at which point an international agreement seemed likely, that industry announced that substitutes could be made available under the right market conditions and policy incentives. CFC producers used the ensuing protocol as a mechanism to develop and market substitutes. Might not a similar situation unfold today if governments enforced greenhouse gas reductions, and policy and market conditions fostered alternative energies? It seems the major difference between a successful ozone treaty and an out-of-reach climate agreement is the weak connection made between climate change and human health. Where ozone depletion was primarily thought of as a human health issue, climate change is an environmental issue. Until that narrative is altered, an agreement on climate change could be elusive. Encouraging signs toward that end are emerging, none more so than the US EPA declaration that greenhouse gases jeopardise public health. The declaration paves the way for the EPA to regulate greenhouse gas emissions from coal plants and other facilities. The regulatory route seems the most feasible way to reduce greenhouse emissions in the United States, as any climate change legislation has been killed in Congress. The Supreme Court ruling in favour of the EPA gave the agency judicial approval to use its authority to regulate such gases under the Clean Air Act. Just as measures to protect the ozone layer have benefited the climate, so too will EPA action on regulating greenhouse gases provide important health benefits by cleaning up the air.
Added benefits of climate mitigation

It is important to communicate that climate change mitigation will have the added benefit of reducing air pollution and improving respiratory health. It will also reduce the use of fossil fuels like oil and coal whose extraction processes - from mountaintop removal, which clogs streams and pollutes water supplies, to offshore drilling spills, which can contaminate seafood - have direct human health implications. While regulation at the national level is a good start, an international agreement - perhaps a stronger version of the Kyoto Protocol - will be necessary to achieve global cooperation on climate change. For this to happen, the public will need to voice greater concern and take more action, as it did during the CFC threat. Ozone depletion was framed as an international human health issue, which amplified the public's demand for accelerated government action. A similar approach may work for climate change. The question that remains is whether a catastrophic discovery similar to the ozone hole will be necessary to spur global concerns over climate change and push governments to act. If so, the consequences may prove to be far more disruptive - economically and ecologically - than the ozone problem of the previous century.

Matthew Cimitile is a writer for the US Geological Survey Coastal and Marine Science Center in St. Petersburg, Florida.
{ "pile_set_name": "Pile-CC" }
I want to make roasted artichokes for a party tomorrow. Can I hold prepped artichokes (lemon water and oil) in the baking dish overnight? 2 Comments Well, I thought the better of that strategy and roasted them today. Half of them are vacuum sealed for future use and the other half will be served either at room temp or gently warmed! I was concerned about excessive oxidation.
{ "pile_set_name": "Pile-CC" }
Perceptual-motor skill learning in Gilles de la Tourette syndrome. Evidence for multiple procedural learning and memory systems. Procedural learning and memory systems likely comprise several skills that are differentially affected by various illnesses of the central nervous system, suggesting their relative functional independence and reliance on differing neural circuits. Gilles de la Tourette syndrome (GTS) is a movement disorder that involves disturbances in the structure and function of the striatum and related circuitry. Recent studies suggest that patients with GTS are impaired in performance of a probabilistic classification task that putatively involves the acquisition of stimulus-response (S-R)-based habits. Assessing the learning of perceptual-motor skills and probabilistic classification in the same samples of GTS and healthy control subjects may help to determine whether these various forms of procedural (habit) learning rely on the same or differing neuroanatomical substrates and whether those substrates are differentially affected in persons with GTS. Therefore, we assessed perceptual-motor skill learning using the pursuit-rotor and mirror tracing tasks in 50 patients with GTS and 55 control subjects who had previously been compared at learning a task of probabilistic classifications. The GTS subjects did not differ from the control subjects in performance of either the pursuit rotor or mirror-tracing tasks, although they were significantly impaired in the acquisition of a probabilistic classification task. In addition, learning on the perceptual-motor tasks was not correlated with habit learning on the classification task in either the GTS or healthy control subjects. These findings suggest that the differing forms of procedural learning are dissociable both functionally and neuroanatomically. 
The specific deficits in the probabilistic classification form of habit learning in persons with GTS are likely to be a consequence of disturbances in specific corticostriatal circuits, but not the same circuits that subserve the perceptual-motor form of habit learning.
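The probabilistic classification ("weather prediction") paradigm discussed above pairs cues with outcomes only probabilistically, so no single trial is fully informative and performance improves gradually rather than in one insightful step. A minimal simulation sketch makes this concrete (Python; the 75% cue validity and trial counts are illustrative assumptions, not the parameters of the study):

```python
import random

def run_probabilistic_classification(trials=100, cue_validity=0.75, seed=0):
    """Simulate ideal cue-following performance on a probabilistic
    classification task: on each trial a binary cue is shown, and the
    correct outcome matches the cue only with probability `cue_validity`.
    Because feedback is noisy, accuracy cannot exceed the cue validity --
    learning on such tasks is necessarily gradual and probabilistic.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        cue = rng.choice([0, 1])
        # The outcome agrees with the cue only `cue_validity` of the time.
        outcome = cue if rng.random() < cue_validity else 1 - cue
        response = cue  # strategy of an ideal learner: always follow the cue
        correct += (response == outcome)
    return correct / trials
```

Even the ideal strategy converges on roughly the cue validity (here ~75% accuracy), which is why deficits on this kind of task are taken to index gradual, habit-based learning rather than explicit memory.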
{ "pile_set_name": "PubMed Abstracts" }
1982–83 Georgia Tech Yellow Jackets men's basketball team

The 1982–83 Georgia Tech Yellow Jackets men's basketball team represented the Georgia Institute of Technology. Led by head coach Bobby Cremins, the team finished the season with an overall record of 13–15 (4–10 ACC).

Roster

Schedule and results

References

Category:Georgia Tech Yellow Jackets men's basketball seasons
Georgia Tech
Category:1982 in sports in Georgia (U.S. state)
Category:1983 in sports in Georgia (U.S. state)
{ "pile_set_name": "Wikipedia (en)" }
= -371 - 685 for s. 8 Solve 1149*b - 10 = 1139*b for b. 1 Solve -8*o - 11 = 5 for o. -2 Solve -5*d + 0*d - 3*d = 0 for d. 0 Solve -34*f + 17*f = -17 for f. 1 Solve 5*p = 11 - 21 for p. -2 Solve -7 = -3*q + 8 for q. 5 Solve -109*z = -129*z - 60 for z. -3 Solve -5 + 1 = -2*t for t. 2 Solve 0 = -27*o + 34*o for o. 0 Solve -7*v - 300 + 286 = 0 for v. -2 Solve 70*k = 102*k + 288 for k. -9 Solve 6*p = -29 + 35 for p. 1 Solve -13*l - 59 + 7 = 0 for l. -4 Solve 23*z + 72 = 5*z for z. -4 Solve -3*r + 8*r = 15 for r. 3 Solve 2567*x = 2589*x - 154 for x. 7 Solve -19*h + 27 = -11 for h. 2 Solve -4*a = 4*a + 40 for a. -5 Solve -32*c + 19*c - 65 = 0 for c. -5 Solve -2*p + p - 1 = 0 for p. -1 Solve 14*l + 250 = 166 for l. -6 Solve 56 = 15*o + 11 for o. 3 Solve -10 = 2*v - 8 for v. -1 Solve 10*c + 2946 = 2956 for c. 1 Solve 47*v - 196 = 19*v for v. 7 Solve 2*w - 84 + 88 = 0 for w. -2 Solve -43*f = -47*f + 16 for f. 4 Solve -10*g + 76 = 46 for g. 3 Solve -43 = 4*q - 55 for q. 3 Solve 0 = 9*o + 14 + 22 for o. -4 Solve -14*z + 10*z = 8 for z. -2 Solve 2*f + 1 = 3 for f. 1 Solve -17*r = 76 + 43 for r. -7 Solve 20 = -8*m + 3*m for m. -4 Solve -m - 6 = -3 for m. -3 Solve 0 = 25*x - 23*x + 8 for x. -4 Solve -46*x + 72 = -20 for x. 2 Solve 25 = -4*f + 9 for f. -4 Solve -54*r + 59*r - 5 = 0 for r. 1 Solve 10 + 17 = -9*v for v. -3 Solve -51 + 48 = -c for c. 3 Solve 25*q = -9*q + 204 for q. 6 Solve 14*b = 10*b - 16 for b. -4 Solve 2*v + 10*v = 24 for v. 2 Solve -23*d = -24*d + 4 for d. 4 Solve 2*l - 10 = 4*l for l. -5 Solve -3*c - 1 = -2*c for c. -1 Solve 12 + 0 = 4*d for d. 3 Solve -325*s + 317*s - 40 = 0 for s. -5 Solve -54*n - 79 - 191 = 0 for n. -5 Solve -18*i + 46 = 100 for i. -3 Solve 0 = -4*r - 4 + 4 for r. 0 Solve 9*s = 21*s + 24 for s. -2 Solve 149*n - 528 = 17*n for n. 4 Solve 0 = -6*f - 25 + 31 for f. 1 Solve 0 = -4*x - 3*x - 7 for x. -1 Solve -57 = -16*p - 41 for p. 1 Solve 0 = -7*b + 8*b - 2 for b. 2 Solve 1412*c = 1430*c - 162 for c. 9 Solve 0 = 2*s + 7 + 3 for s. 
-5 Solve -11*g - 160 = 9*g for g. -8 Solve -89 = -4*z - 69 for z. 5 Solve 0 = -20*x + 16 - 116 for x. -5 Solve -1 = -4*v + 3 for v. 1 Solve 2*n - 12 = -4 for n. 4 Solve 187 - 147 = -5*q for q. -8 Solve -1301*c + 40 = -1291*c for c. 4 Solve 13*v - 2*v = -11 for v. -1 Solve -87*g = -65*g + 88 for g. -4 Solve 2*n = -3*n - 20 for n. -4 Solve -53*s - 30 + 83 = 0 for s. 1 Solve 4*t - 418 + 446 = 0 for t. -7 Solve 201 - 185 = -4*b for b. -4 Solve -20*r - 24 = -28*r for r. 3 Solve 32 = 3*n + 17 for n. 5 Solve -15*j - 30 + 0 = 0 for j. -2 Solve 16*t = 10*t + 12 for t. 2 Solve 28*z = 57*z - 145 for z. 5 Solve 180 = 38*w - 2*w for w. 5 Solve 44*q = -4 + 4 for q. 0 Solve -12 = -123*t + 127*t for t. -3 Solve 12 = a + 16 for a. -4 Solve 2388 = 3*s + 2394 for s. -2 Solve 8*x - 51 = -35 for x. 2 Solve 0 = 13*v - v - 36 for v. 3 Solve -y + 4 = 3 for y. 1 Solve -4*w - 4*w = 0 for w. 0 Solve 4*v = -18*v + 44 for v. 2 Solve 3*w = 20 - 26 for w. -2 Solve 22*s = -53 + 251 for s. 9 Solve 0 = 34*c - 6*c + 28 for c. -1 Solve 0 = -374*n + 376*n for n. 0 Solve 0 = d - 3*d - 2 for d. -1 Solve -36*h = -40*h + 12 for h. 3 Solve -69*c + 36*c = -33 for c. 1 Solve 0 = 21*s - 0 for s. 0 Solve -22 = 9*y - 4 for y. -2 Solve 16*a + 33 = -31 for a. -4 Solve 0 = 10*m - 13*m + 15 for m. 5 Solve -7*y + 2022 = 2064 for y. -6 Solve -22*r + 17*r + 15 = 0 for r. 3 Solve -68 = 8*r - 28 for r. -5 Solve 2 = 2*a - 6 for a. 4 Solve -19*i + 65 = -6*i for i. 5 Solve -363*x + 356*x + 70 = 0 for x. 10 Solve -63 = -10*k + 17 for k. 8 Solve -20 = 10*s - 60 for s. 4 Solve -5*v + 188 - 163 = 0 for v. 5 Solve 0 = 186*q - 184*q - 6 for q. 3 Solve -66*u = 120 + 144 for u. -4 Solve -9486*b + 9489*b = 9 for b. 3 Solve 0 = -42*v + 16*v + 208 for v. 8 Solve 66*z - 27 = 75*z for z. -3 Solve 4*t = 38*t - 34 for t. 1 Solve 0 = 4*w - 7*w - 15 for w. -5 Solve 0 = -9*w + 3*w + 30 for w. 5 Solve 55*g = 65*g + 40 for g. -4 Solve -2*m - 51 = 15*m for m. -3 Solve 11*q = -20 - 13 for q. -3 Solve -3*k + 4 = -4*k for k. 
-4 Solve 0 = -8*n - 8*n - 16 for n. -1 Solve 4*a - 15 = 1 for a. 4 Solve 71*f + 3*f - 17*f = 0 for f. 0 Solve 12*y - 25 - 23 = 0 for y. 4 Solve -11*c - 21 = -4*c for c. -3 Solve 13*o = 7*o + 48 for o. 8 Solve -40*k - 498 = -338 for k. -4 Solve 675 = -4*y + 703 for y. 7 Solve -366*f + 379*f - 26 = 0 for f. 2 Solve -11*z + 22 = -0*z for z. 2 Solve 30 = 148*i - 153*i for i. -6 Solve 91*d - 96 = 107*d for d. -6 Solve 4*d + 80 = 84 for d. 1 Solve -6 = -2*j + 4 for j. 5 Solve 0 = 17*m - 603 + 467 for m. 8 Solve 0 = -63*l + 67*l + 4 for l. -1 Solve -61 + 52 = -3*c for c. 3 Solve t = 11 - 9 for t. 2 Solve -12*d + 9*d = -3 for d. 1 Solve 543*d - 536*d - 49 = 0 for d. 7 Solve 0 = 90*t - 88*t for t. 0 Solve 116*q - 138*q = 220 for q. -10 Solve -47*k + 366 = -10 for k. 8 Solve 33*w - 26*w + 42 = 0 for w. -6 Solve 7*a - 3*a - 16 = 0 for a. 4 Solve 0 = -18*d + 105 - 15 for d. 5 Solve -21 = 340*o - 333*o for o. -3 Solve v + 17 = 12 for v. -5 Solve 0 = -22*a + 19*a - 3 for a. -1 Solve 43*p - 45*p - 6 = 0 for p. -3 Solve 149*a = 162*a for a. 0 Solve -1317*f - 88 = -1328*f for f. 8 Solve 14*t - 10*t - 12 = 0 for t. 3 Solve 28*w = 26*w + 6 for w. 3 Solve -11*n = n - 60 for n. 5 Solve -14*f - 4*f - 18 = 0 for f. -1 Solve 97*y - 69*y - 28 = 0 for y. 1 Solve -15 = 36*o - 33*o for o. -5 Solve 144 = 5*l + 119 for l. 5 Solve 108 = g - 19*g for g. -6 Solve 84*d - 86*d + 4 = 0 for d. 2 Solve 3 = -2*h + 1 for h. -1 Solve -39*q - 26*q - 65 = 0 for q. -1 Solve 2 = -o - 0 for o. -2 Solve 8*y - 129 + 153 = 0 for y. -3 Solve -31 = -7*f - 3 for f. 4 Solve 7*z = -57 + 71 for z. 2 Solve -50*g + 88*g - 114 = 0 for g. 3 Solve -5 = -17*r + 63 for r. 4 Solve 6*q + 9 = -3 for q. -2 Solve -7*m + 0 = 14 for m. -2 Solve -53*t - 38 - 174 = 0 for t. -4 Solve 27*s - 32*s = -30 for s. 6 Solve 0 = 3*j - 6*j - 15 for j. -5 Solve -5*n - 20 = -5 for n. -3 Solve 0 = -177*l - 359 + 1952 for l. 9 Solve 5*y + 12 = y for y. -3 Solve -20*p + 14*p = -12 for p. 2 Solve 221 = -5*r + 246 for r. 5 Solve 1 + 35 = 18*y for y. 
2 Solve 99*v = 65*v - 34 for v. -1 Solve 5*q - 156 = -176 for q. -4 Solve -170 = -22*n - 16 for n. 7 Solve -6*i + 0 = -24 for i. 4 Solve 607*n - 615*n + 8 = 0 for n. 1 Solve 0 = 108*n - 36*n + 144 for n. -2 Solve 88 + 7 = 19*x for x. 5 Solve 54*v - 10 = 44*v for v. 1 Solve 31*l - 36 = 49*l for l. -2 Solve 65 = -13*n - 0*n for n. -5 Solve -7 = 3*m + 8 for m. -5 Solve -65*k - 21 = -72*k for k. 3 Solve 60*g + 40 = 68*g for g. 5 Solve -20*x = -24*x + 28 for x. 7 Solve -2*c + 11 - 1 = 0 for c. 5 Solve -43*t - 159 = -30 for t. -3 Solve -8*r = -21*r - 39 for r. -3 Solve -5*a + 3*a = -8 for a. 4 Solve 2*a - 68 = 36*a for a. -2 Solve -551*f = -545*f + 12 for f. -2 Solve 98 = 56*m - 182 for m. 5 Solve 2*n - 25 = 7*n for n. -5 Solve -92*w = -77*w + 90 for w. -6 Solve 106*l - 26*l = 0 for l. 0 Solve 3*p + 8 = -p for p. -2 Solve -247 + 121 = -14*w for w. 9 Solve 0 = 33*b - 16 - 17 for b. 1 Solve -16*f - 35 = -67 for f. 2 Solve 64*c = 66*c for c. 0 Solve 52*w = 51*w - 2 for w. -2 Solve 5 = -233*m + 228*m for m. -1 Solve 2 = 2*d + 6 for d. -2 Solve 3*q = -50 + 41 for q. -3 Solve -406*c - 10 = -411*c for c. 2 Solve -72 = -16*x - 8 for x. 4 Solve 6*m = 8*m + 8 for m. -4 Solve 17*t = 13*t - 16 for t. -4 Solve 2605*y + 18 = 2614*y for y. 2 Solve -58 = 29*m + 29 for m. -3 Solve -30*q + 42*q + 48 = 0 for q. -4 Solve 10*p = 440 - 480 for p. -4 Solve 33*y = 26*y + 7 for y. 1 Solve -694 = 15*m - 634 for m. -4 Solve -20*i = -9*i - 33 for i. 3 Solve 10*g + 15 = -35 for g. -5 Solve 0 = -8*f + 4*f for f. 0 Solve 6*x - 34 = -4 for x. 5 Solve -10*w = -5*w + 10 for w. -2 Solve -9*i = -2*i - 7 for i. 1 Solve 28 = 378*u - 385*u for u. -4 Solve 75*j - 29*j - 368 = 0 for j. 8 Solve 1031*w - 1037*w = 36 for w. -6 Solve 2*m = 16 - 8 for m. 4 Solve 10*r + 8 = -2 for r. -1 Solve 148*y + 1245 = 209 for y. -7 Solve -226*u = -248*u + 22 for u. 1 Solve 192*j = 179*j + 78 for j. 6 Solve -20*q - 5 = -25*q for q. 1 Solve -42 = 379*z - 385*z for z. 7 Solve 68*c = 77*c - 45 for c. 5 Solve -k + 413 = 408 for k. 
5 Solve 0 = 47*o - 39*o - 24 for o. 3 Solve 21 = -8*r + 77 for r. 7 Solve -4 = -j + 1 for j. 5 Solve 5*s - 409 = -429 for s. -4 Solve 30 = 9*q - 24 for q. 6 Solve -621 + 663 = -14*o for o. -3 Solve 0 = -4*k - 33 + 37 for k. 1 Solve -20
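Every exercise in the block above is a linear equation in one unknown, which can always be rearranged to a*x + b = 0 and solved as x = -b/a. A minimal solver for this exact textual format can be sketched as follows (Python, standard library only; the parsing assumes the single-letter variables and integer coefficients seen above):

```python
from fractions import Fraction
import re

def solve_linear(equation, var):
    """Solve a linear equation such as '-8*o - 11 = 5' for `var`.

    Accumulates the coefficient of `var` and the constant term, treating
    right-hand-side terms as moved to the left (sign flipped), then
    returns -constant/coefficient as an exact Fraction.
    """
    left, right = equation.replace(" ", "").split("=")
    coeff, const = Fraction(0), Fraction(0)
    for side, sign in ((left, 1), (right, -1)):
        # Split the side into signed terms like '-8*o', '-11', '+41'.
        for term in re.findall(r"[+-]?[^+-]+", side):
            if var in term:
                c = term.replace("*" + var, "").replace(var, "")
                if c in ("", "+"):
                    c = "1"
                elif c == "-":
                    c = "-1"
                coeff += sign * Fraction(c)
            else:
                const += sign * Fraction(term)
    return -const / coeff
```

For example, `solve_linear("-8*o - 11 = 5", "o")` returns -2, matching the worked answer in the exercises above.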
{ "pile_set_name": "DM Mathematics" }
There is no denying the fact that night shift workers are fast losing out on their health. Long and hectic work schedules lead to irregular appetites, rapid changes in weight and a high risk of gastro-intestinal […] After 60 years, authorities in the United States have approved a pill that will treat malaria. According to a BBC report, the drug, tafenoquine, is being described as a "phenomenal achievement" and will treat the recurring […] Compounds in green tea and in red wine may help block the formation of toxic molecules that cause severe developmental and mental disorders, and may help treat certain congenital metabolic diseases, a study has […] Carbohydrates have become the 'culprits' for many healthy eaters recently. Despite their less-than-stellar stature in the nutrition department, carbs aren't actually enemies of your body. They are responsible for providing you with energy, […] Does the surgical removal of tonsils and adenoids in young children have long-term health implications? Researchers in a new study say the removal may increase the risk of certain ailments, but other experts aren't so […]
{ "pile_set_name": "Pile-CC" }
New Garden Website Design

Woodside Garden is a walled garden near Jedburgh in the Scottish Borders. It has a plant centre, an award-winning coffee shop and runs a series of events throughout the year. We were delighted when we secured the Woodside Garden website design contract. Stephen and Emma Emmerson inherited their website when they bought the business back in 2010. Over the years they had "made do" with it, but it no longer met their needs: it was not mobile-friendly and it was difficult to add events and highlight blogs on the front page. There had also been a recent re-brand, and the existing site could not be updated easily to incorporate the new logo and colour scheme. Emma was confident that Red Kite Services could deliver the website she wanted, as we have a good knowledge of plants and wildlife. We have also worked on updating the website for Stillingfleet Lodge Gardens for many years, showing further experience in the sector.

Website Review

We started the re-build process by reviewing the existing site and agreeing the content and images that would be retained. We streamlined the number of pages and reviewed the categories on the site. We set up a development site where we could design the outline using our favourite template. We used the new colours from the logo throughout the site and used categories to place appropriate blogs onto the static pages. Emma did not want us to use sliders, but did send us some stunning images to use. She wanted a nice clean site but with a flowery touch, which we achieved by adding curved frames to the images.

Events

She also wanted to be able to add events easily, so we used a simple plug-in. This makes it easy to add events, and the next few events are highlighted in the sidebar. We have also categorised events so that if you look on the Wee Woodsiders page you can see a full list of child-friendly events.
We are still in the free “snagging” stage which we offer on all our website builds, so we are still working with Emma to develop the site now it has gone live. About Red Kite Services: Red Kite Services is a family-run business owned by Peter and Samantha Lyth. We believe in supporting independent, local companies, and our aim with RKS is to provide cost-effective support to help local small businesses thrive. Sam set up the business in 2010 after seeing that many small business owners know what they need to do in terms of administration and marketing, but don't have enough time to do it. Peter joined the business in 2015, which has allowed us to offer a broader range of services. Between us we have experience in financial services, the health and science sector, and retail. Testimonials: “I contacted Red Kite because I have been unable to update my website for a few years now. The site was built for me over five years ago and I just wanted to be able to update the price list regularly and alter treatments. I had already been in touch with other companies about updating …”
{ "pile_set_name": "Pile-CC" }
File notes:
1. base_dic_full.dic: hash-indexed dictionary carrying word-frequency and part-of-speech tags.
2. words_addons.dic: entries are prefixed by a type flag:
   s: stop words
   u: suffix words (place-name suffixes, units of measurement, etc.)
   n: leading words (surnames, Chinese numeral characters, etc.)
   a: trailing words (regions, departments, etc.)
3. not-build/base_dic_full.txt: the uncompiled dictionary source.
4. To rebuild the dictionary:
<?php
header('Content-Type: text/html; charset=utf-8');
require_once('phpanalysis.class.php');
$pa = new PhpAnalysis('utf-8', 'utf-8', false);
// The dictionary source listed in item 3 above:
$sourcefile = 'not-build/base_dic_full.txt';
$pa->MakeDict($sourcefile, 16, 'dict/base_dic_full.dic');
echo "OK";
?>
{ "pile_set_name": "Github" }
Comparison of Rappaport-Vassiliadis Enrichment Medium and Tetrathionate Brilliant Green Broth for Isolation of Salmonellae from Meat Products. The effectiveness of Rappaport-Vassiliadis enrichment medium (RV medium) and Difco's tetrathionate brilliant green broth (TBG) for detection of Salmonella in 553 samples of meat products was compared. All samples were preenriched for 20 h in buffered peptone water. Then 0.1 ml of the preenrichment was inoculated into 10 ml of RV medium, 1 ml was added to 9 ml of TBG broth, and 1 ml was inoculated into 10 ml of Muller-Kauffman (MK) tetrathionate broth. All enrichments were incubated at 43°C for 24 h, except for MK broth which was incubated for 48 h, and all were subcultured onto brilliant green deoxycholate agar and bismuth sulfite agar. The Rappaport-Vassiliadis medium was superior to Difco's tetrathionate brilliant green broth, being considerably more sensitive and more specific. The superiority of RV medium concerned the number of positive samples (36% and 28%, respectively), and also the number of Salmonella serotypes and strains. The RV medium inhibited the lactose- and sucrose-negative competing organisms much more than the Difco's tetrathionate broth. The performance of Difco and Muller-Kauffman tetrathionate brilliant green broths was similar. Addition of the brilliant green solution after boiling the tetrathionate broth slightly increased its efficacy. The effectiveness of brilliant green deoxycholate agar and bismuth sulfite agar was similar, whether after enrichment in RV medium or in any of the studied tetrathionate brilliant green broths.
{ "pile_set_name": "PubMed Abstracts" }
HotKnot is a near-field communication technology, used in some smart terminal devices, that communicates through a capacitive touch screen. The communication involves two processes: a proximity detection process and a data transmission process. Proximity detection works as follows: the touch-screen terminal of one party sends a proximity detection sequence (for example, a sequence comprising six frequencies), and after receiving it, the touch-screen terminal of the other party successively scans the frequencies included in the sequence. If the signal strength at a frequency is greater than a preset signal strength threshold, a signal source is considered to exist at that frequency. After the scan is completed, if signal sources exist at all frequencies, the sequence is determined to be valid; otherwise, it is determined to be invalid. Once the sequence is determined to be valid, the receiving party feeds back a proximity response sequence to the sending party. After receiving the proximity response sequence, the sending party scans it in the same way and determines whether the response sequence is valid, using the determining manner described above. When both parties consider the sequence valid, one sequence identification is considered successful. After sequence identification succeeds multiple times according to an interaction rule, it is determined that a touch-screen terminal is approaching. Once proximity detection succeeds, the interference source is turned off, and the data transmission process starts to send or receive data.
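The per-frequency validity check described above is simple to state in code: a frequency counts as a signal source only when its measured strength exceeds the preset threshold, and the whole sequence is valid only when every scanned frequency passes. The following is a minimal Python sketch; the function name, sample readings, and threshold value are illustrative assumptions, not part of any HotKnot API:

```python
def is_valid_sequence(strengths, threshold):
    """A sequence is valid only if a signal source exists at every scanned
    frequency, i.e. each measured strength exceeds the preset threshold."""
    return all(s > threshold for s in strengths)

# Six measured strengths for a six-frequency proximity detection sequence;
# a single weak frequency invalidates the whole sequence.
readings = [12.0, 14.5, 11.8, 13.2, 9.9, 12.7]
print(is_valid_sequence(readings, threshold=10.0))
```

As the passage notes, both parties run this same check, once on the detection sequence and once on the response sequence, before a single sequence identification counts as successful.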
During proximity detection, interference sources such as the LCD are not turned off, so it is relatively difficult to correctly determine the frequencies of the sequence, and the setting of the signal strength threshold plays a particularly important role in signal determination. It is therefore particularly important to be able to set a proper signal strength threshold according to the noise situation. During proximity detection between two HotKnot devices, the drive signal of the LCD scan, or common-mode interference when a charger is connected, interferes with signal detection on the capacitive touch screen. This may cause errors when proximity detection is performed using the touch screen, including cases in which the two parties cannot establish a connection or one party enters a connection by mistake. Currently, to enable the capacitive touch screen to adapt to different LCD interference intensities, noise reduction processing is usually performed on the detected data. After noise reduction, a signal strength threshold policy is applied: if the signal strength is greater than the threshold, the signal is considered valid; otherwise, it is considered invalid. In addition, for the foregoing interference cases, an interference frequency can be measured with an instrument and then excluded as a determining basis, thereby avoiding that interference source.
However, the current processing manner has at least two problems. 1) Although a proper signal strength threshold can solve some problems, when interference occurs at certain frequencies the noise strength is sometimes greater than the signal strength, and those frequencies are very difficult to identify, which ultimately causes the entire sequence identification to fail. Moreover, interference intensity often changes, so detection reliability and sensitivity are difficult to ensure with only one fixed signal strength threshold. 2) Interference in a real environment often changes; excluding certain fixed frequencies from identification improves the situation of interference at those frequencies, but when the interference moves to other frequencies, the changed interference frequencies cannot be shielded; that is, a compatibility problem exists. Therefore, in the case of a weak signal or strong interference, the reliability and sensitivity of proximity detection are not high.
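One natural response to problem 1), suggested by the passage's complaint that a single fixed threshold cannot track changing interference, is to derive the threshold from the currently measured noise floor. The statistic below (noise mean plus a multiple of the standard deviation) is a common detection heuristic offered purely as an illustrative assumption, not the method claimed by this disclosure:

```python
import statistics

def adaptive_threshold(noise_samples, k=3.0):
    """Set the detection threshold relative to the measured noise floor:
    mean noise plus k standard deviations, so the threshold rises and
    falls with the observed interference instead of staying fixed."""
    mu = statistics.fmean(noise_samples)
    sigma = statistics.pstdev(noise_samples)
    return mu + k * sigma

# Noise readings taken while no peer terminal is present (illustrative values).
noise = [2.1, 1.8, 2.4, 2.0, 1.9, 2.2]
print(adaptive_threshold(noise))  # roughly 2.66 for this noise sample
```

With such a rule the threshold would be recomputed as interference conditions change, rather than tuned once; whether that is practical depends on how often a clean noise-only measurement window is available.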
{ "pile_set_name": "USPTO Backgrounds" }
DataverseUse test Set import-private-functions=true Query: Let Variable [ Name=$txt ] := LiteralExpr [STRING] [Hello World, I would like to inform you of the importance of Foo Bar. Yes, Foo Bar. Jürgen.] Let Variable [ Name=$tokens ] := FunctionCall asterix.hashed-word-tokens@1[ Variable [ Name=$txt ] ] SELECT ELEMENT [ Variable [ Name=$token ] ] FROM [ Variable [ Name=$tokens ] AS Variable [ Name=$token ] ]
{ "pile_set_name": "Github" }