Čilić in Doubt for Australian Open Usually in the first week of January you would find Croatian tennis star Marin Čilić in the sun down under in Australia preparing for the year’s first Grand Slam, but on Friday the US Open champ was in a very cold Zagreb… Čilić, who was in Zagreb at a press conference ahead of this month’s Zagreb Indoors tournament (31 January – 8 February), said that a niggling shoulder injury may force him out of the Australian Open in Melbourne, which starts in just over a fortnight. “My shoulder is bothering me. I will have to test it a few more times to see if I will be able to play at all in the Australian Open,” said the World Number 9, adding that he will not risk further injury to his shoulder that could rule him out for a longer period, and that he will have to be 100% fit to take part in Melbourne.
Q: Get the sum by file extensions in python

Newbie question here! I'm trying to get the sum of file sizes by file extension in a directory. So far I'm using a modified version of this (via Python - Acquiring a count of file extensions across all directories) to count them. Trying to use os.path.getsize() then use sum() to add them up, but I either get zero or errors. What am I missing? The code I copied is this:

    import os
    import collections

    extensions = collections.defaultdict(int)
    place = input('Type the directory path: ')
    for path, dirs, files in os.walk(place):
        for filename in files:
            extensions[os.path.splitext(filename)[1].lower()] += 1
    for key, value in extensions.items():
        print('Extension: ', key, ' ', value, ' items')

A: Did you try this? Keep a second defaultdict for the sizes, and build the full path of each file with os.path.join() before calling os.path.getsize():

    import os
    import collections

    extensions = collections.defaultdict(int)
    size = collections.defaultdict(int)
    place = input('Type the directory path: ')
    for path, dirs, files in os.walk(place):
        for filename in files:
            ext = os.path.splitext(filename)[1].lower()
            extensions[ext] += 1
            size[ext] += os.path.getsize(os.path.join(path, filename))
    for key, value in extensions.items():
        print('Extension: ', key, ' ', value, ' items')
    for key, value in size.items():
        print('Extension: ', key, ' ', value, ' bytes')

this is based on your link
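If you want the same walk packaged as a reusable function, a minimal sketch looks like this (the name sizes_by_extension and the dict return shape are my own choices, not from the thread):

```python
import collections
import os

def sizes_by_extension(root):
    """Walk root recursively and return {extension: total_bytes}.

    Extensions are lower-cased; files with no extension are grouped under ''.
    """
    totals = collections.defaultdict(int)
    for path, dirs, files in os.walk(root):
        for filename in files:
            ext = os.path.splitext(filename)[1].lower()
            # Join the directory being walked with the filename to get a usable path
            totals[ext] += os.path.getsize(os.path.join(path, filename))
    return dict(totals)
```

You can then sort the result by size before printing, e.g. `sorted(sizes_by_extension('.').items(), key=lambda kv: kv[1], reverse=True)`.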
using CIIP.Module.BusinessObjects.SYS.Logic;
using DevExpress.Data.Filtering;

namespace CIIP.Designer
{
    public static class BusinessObjectQuickAddExtendesion
    {
        #region quick add
        public static Property AddProperty(this BusinessObject self, string name, BusinessObjectBase type, int? size = null)
        {
            var property = new Property(self.Session);
            property.PropertyType = type;
            property.Name = name;
            if (size.HasValue)
                property.Size = size.Value;
            self.Properties.Add(property);
            return property;
        }

        public static Property AddProperty<T>(this BusinessObject self, string name, int? size = null)
        {
            return self.AddProperty(name,
                self.Session.FindObject<BusinessObjectBase>(new BinaryOperator("FullName", typeof(T).FullName)),
                size);
        }

        //public static CollectionProperty AddAssociation(this BusinessObject self, string name, BusinessObject bo, bool isAggregated, Property relation)
        //{
        //    var cp = new CollectionProperty(self.Session);
        //    self.Properties.Add(cp);
        //    cp.PropertyType = bo;
        //    cp.Name = name;
        //    cp.Aggregated = isAggregated;
        //    //cp.RelationProperty = relation;
        //    return cp;
        //}
        #endregion
    }
}
Having trouble starting tractor. When starting the tractor I turn on the switch, advance the throttle about 1/3, pull the starter and pull the choke. The starter does its job but the engine does not fire. After 2 or 3 tries, I pull the throttle back to minimum. I try again, but when I let go of the starter there is a single cylinder firing. If I can respond quickly enough and advance the throttle only one or two notches, it sometimes will fire on more cylinders. After it has been running (warmed up) it most often will start normally. About 2 years ago, I changed the electrical system from 6 volt to 12 volt, but I had to install an "Exciter Button" and everything has worked fine until about 6 months ago. Now I have to recharge the battery before starting. I can't find anything wrong. Any help would be appreciated. I hope I did this right. This is my first time posting anything. It's possible that you have some sticking valves (old gas). I had one do the same thing and found that the valves were sticking enough to cause these issues, and once it "did" start it would run OK. Then once it sat for a while and cooled off, the varnish would set up again and the valves would stick. I put some Seafoam in the engine oil and fuel tank, ran it for a while, and the problems ceased. Just a thought. Need more information on the "exciter" problem. Alternator used? How wired? Following is a link to a common Delco 10SI alternator conversion. Battery discharging is a common fault in that the wires are mixed up on terminals 1 and 2 - and/or a diode or resistor not used in the #1 terminal wire. Edit: Charging battery problem. Charge up the battery and have it checked at an auto parts store. If the battery is still good you will need to start checking for a short, the battery drain, when the engine is not running. Previously I thought the "exciter" button was not disconnecting after energizing the alternator, which still could be the problem. Question. 
Did you install a resistor in line with the coil, or a coil with an internal resistor, when you converted to 12 volts? Last edited by Eugene on Thu Jan 10, 2013 5:08 am, edited 1 time in total. hayseed235 wrote: when I let go of the starter there is a single cylinder firing. This suggests that there is a problem in the ignition system. It could be that there is not enough current getting through to run the starter and still give enough to the coil for a spark. Or there could be a problem in the ignition system itself (points, condenser, etc.). When the starter stops drawing, then you get a good enough spark, or enough current to overcome a weak ignition component. I had these symptoms more than once. One time it turned out to be a weak battery, another time it was old plugs. Michael Cummings. Eddie - a 1959 International Lo-Boy named after my father-in-law, who bought her new.
/*
 * Copyright (C) 2020 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package com.android.permissioncontroller.role.ui;

import android.app.Activity;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;

import androidx.annotation.NonNull;

/**
 * Dummy activity in place of
 * {@link com.android.permissioncontroller.role.ui.specialappaccess.SpecialAppAccessActivity}
 */
public class SpecialAppAccessActivity extends Activity {

    /**
     * Create an intent for starting this activity.
     */
    @NonNull
    public static Intent createIntent(@NonNull String roleName, @NonNull Context context) {
        return new Intent(context, SpecialAppAccessActivity.class);
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        finish();
    }
}
Girl, 16, wounded in South Side shooting in Heart of Chicago neighborhood A 16-year-old girl is in stable condition Tuesday after being shot in the upper chest. Witnesses say she was simply walking with a friend down a sidewalk when the gunfire began. March 18, 2014 9:07:46 PM PDT (CHICAGO) -- A teenager was shot in the chest while walking on the city's South Side in the Heart of Chicago neighborhood. On Tuesday night, bullet holes marked the spot where the shooting ended, at a food market that was full of customers. "It was like one after another. Like he was standing literally outside, and I was looking at him. And he was shooting inside," said Norma Yanez, store employee. It happened around 4 p.m. Tuesday at the corner of 18th and Wood. Witnesses say it began with a foot chase, the teenage gunman firing at another young man. As they ran east on 18th Street, a stray bullet instead hit a 16-year-old girl in the upper chest. "She looked stable. She didn't look like she was, like with her eyes closed and wasn't moving," said Federico Yanez, witness. "She was lying down on the stairway. So she was responsive, but yet in pain," said Sonia Yanez, owner, Meztisoy Food Market. But the shooting didn't stop. Store employee Norma Yanez was outside, a bullet whizzing by her head. "Some guy was coming towards me shooting, so I came into the store. And then the guy they were shooting at came in after me," said Norma Yanez. After the man who was the intended target ran into the store, the gunman stood outside firing a handful of shots through the window. The bullets hit a sign, a coffee machine and other items on a front table, but missed the 15 or so people in the store. "He was thinking about coming in, but he ran off," said Sonia Yanez. 
"I just didn't think it was happening. I didn't think it was happening. I was just like in shock," said Norma Yanez. Police have not offered a motive for the shooting, but some neighborhood residents believe it may have been gang-related, perhaps retaliation for another shooting that they say happened in the neighborhood Monday.
Alain Resnais, director of 'Hiroshima, Mon Amour', dies aged 91 Reuters Staff 2 Min Read PARIS (Reuters) - The French film director Alain Resnais, known for classics such as “Hiroshima, Mon Amour”, “Last Year at Marienbad” and the documentary “Night and Fog” about Nazi concentration camps, died on Sunday at the age of 91. Resnais, who was born in 1922 in northwestern France and started his career with mid-length films in the 1940s, rose to fame with “Night and Fog” and “Van Gogh”, a short that won an Oscar in its category in 1950. In 1959, with author Marguerite Duras as scriptwriter, he directed “Hiroshima, Mon Amour”, a feature about a love affair between a French woman and a Japanese architect that secured his reputation as a feature-film director. French President Francois Hollande joined a chorus of condolences for Resnais, described as a highly original and influential film-maker steeped in the pre-war cinema culture of the United States. Director Alain Resnais reacts after receiving the Lifetime Achievement award during the award ceremony at the 62nd Cannes Film Festival in this May 24, 2009 file picture. REUTERS/Eric Gaillard/Files “He constantly broke codes, rules and trends while appealing to a vast audience,” Hollande’s office said in a statement. The director won a lifetime achievement award at the Cannes Film Festival in 2009, while the Berlin Film Festival awarded him the Alfred Bauer prize for his last film, “Aimer, Boire et Chanter” (“Life of Riley”), to be released in France this month. One of his favorite actors, Pierre Arditi, hailed the director as extremely original. “There is nothing that less resembles a Resnais film than another Resnais film,” Arditi told BFM TV. “He always tried to avoid copying what he had done before, he didn’t want to use any formula.” Thierry Fremaux, director of the Cannes Film Festival, hailed a director whose films have influenced generations of film-makers. 
“He hit hard from the start with his short films in the 1950s and when the Nouvelle Vague arrived, he was sort of a big brother.”
Synthesis of scroll-type composite microtubes of Mo2C/MoCO by controlled pyrolysis of Mo(CO)6. Composite microtubes of Mo2C/MoCO have been synthesized for the first time under well-controlled conditions by thermal decomposition of Mo(CO)6 at about 600 °C. Here, thermal stability and phase transition of the products, as well as the influence of reaction temperature and argon flow rate, have been carefully investigated. All samples were characterized by X-ray powder diffraction (XRD), scanning electron microscopy (SEM), and X-ray photoelectron spectroscopy (XPS). The reaction model and rolling mechanism were proposed on the basis of the experimental facts.
Q: How long had Box been running amok when Logan discovered him? Originally, in Logan's Run, the killer robot "Box" was programmed to freeze and store seafood. How long had the abandoned "Box" been running amok when encountered by Logan 5 and Jessica 6 in the Logan's Run franchise? A: I'm not sure we have any way to arrive at a clear answer from the evidence of the movie. I'm not sure it's even clear whether Box predates the City or not -- it's been a while since I've seen it and I've already read two different interpretations while trying to write this answer (Wikipedia's entry seems to suggest that Box was processing food for the city, while The World of Logan's Run seems to believe Box predated the City and was more or less left to his own fate when the City was created). I'm also not sure that "running amok" is quite right. It implies an active rampage, whereas in the movie, people come to him, believing they're fleeing to safety! A: Box's original purpose was to freeze and preserve food for storage. "[The food] stopped, and they (the runners) started", it explains. This suggests that the delivery system for the food ended, followed by people escaping the City of Domes. It's possible that the process he was built for was to (pre)supply the City of Domes, which could explain why there seemed to be a direct route between them, one that seemingly every runner found. We don't know how long he sat inactive - presumably it was a long time; long enough to make the decision that humans were an acceptable replacement for food in his process.
The Advantage, by Patrick Lencioni Patrick Lencioni is a well-known management consultant and author, known for books like The Five Dysfunctions of a Team. I haven’t gotten around to reading his books, but somebody recently recommended The Advantage, subtitled Why organizational health trumps everything else in business, so I picked it up. Lencioni views this book as the synthesis of the work described in his other books – a simple guide to creating more effective organizations. His opening sentence reads: The single greatest advantage any company can achieve is organizational health. Yet it is ignored by most leaders even though it is simple, free, and available to anyone who wants it. Lencioni defines organizational health as integrity: “when it is whole, consistent, and complete, that is, when its management, operations, strategy, and culture fit together and make sense.” In other words, even if an organization hires all of the smartest people, it will not be able to effectively use them without making sure they are all aligned and pulling together in the same direction. The leaders of the organization also need to be able to receive the intelligence and wisdom from their people, and not waste the teams’ efforts by working at odds to each other. In other words, the opposite of a healthy organization is what we all deride as corporate politics, and none of us want to be part of such an organization. The deceptively simple aspect is that I read this book and said “Yes, that all makes sense”, but then realized I have never been part of an organization that consistently prioritizes and executes these health initiatives. And even when I have been part of the management team, I do not prioritize these, instead getting distracted by the fire drill du jour. So this book is a reminder to focus on these important basics, rather than the urgent distractions. Step 1 is to build a cohesive leadership team. 
Lencioni believes that an effective leadership team must be small enough (3-10 people) so that each member feels personally responsible for the success of the whole organization. Some behaviors he describes of such a leadership team are: Trust and vulnerability – being able to make mistakes and discuss them so everybody can improve. Constructive conflict – if difficult decisions need to be made, all sides of the issue must be discussed vigorously and without rancor, which can’t happen without the trust established above. Collective accountability – once decisions are made, all take responsibility for executing on the decision, even if they argued against the decision. Focusing on organizational results – success is defined as the whole organization succeeding, not just a particular department. There should be no situation where sales can say “My team did its job” if the organization did not hit its goals. Once the leadership team is established, the next step is to Create Clarity on what the organization is for, and what it is focused on. Lencioni outlines six questions which every organization should be able to answer: Why do we exist? (mission) How do we behave? (no more than 3 differentiating core values) What do we do? How will we succeed? (what is our strategy and differentiator?) What is most important, right now? (what is our thematic goal? What defining objectives are necessary to achieve that goal? What metrics will show progress towards those objectives?) Who must do what? What I particularly like about Lencioni’s formulation is that if the leadership team can reach clarity on the answers to these questions, they can write down the whole vision for the organization on a single page, which Lencioni calls the organizational playbook. He suggests having the playbook be available in every meeting to review how actions being considered in the meeting align with the overall mission, values and strategy. 
Once the leadership team is aligned, and the playbook is defined, it still remains to ensure that the entire organization is informed and aligned with the playbook. This is step 3, Overcommunicate Clarity. Lencioni notes that what feels like overkill repetition by the leaders is probably the minimum required communication for organization members to internalize the message. Each member of the leadership team should be able to articulate why it makes sense, and what their team has to do to execute on the vision. I once worked for a VP at Google who started each monthly all hands meeting with a slide of the same three metrics, and a plea for anybody on the team working on something that wasn’t focused on improving one of those metrics to contact her personally so she could sort out the misalignment. It was extremely powerful in driving agreement across the team on what mattered. Such misalignments across the organization detract from clarity, which brings us to step 4, Reinforce Clarity. Lencioni is making the point that every structure and process within the company must also reinforce the playbook. For instance, if the playbook says that one of the core values is teamwork, but performance review and bonuses are awarded individually, that’s a misalignment. If core values are not included as part of the interview process, that’s a misalignment. The organization must live and breathe its values and playbook to be effective and healthy, and part of what the leadership team must do is drive consistency across all aspects of the company. Any discordant element can destroy the synergistic effects of organizational health. One last great point that Lencioni made was that organizational health starts at the top. A cohesive leadership team will never develop trust, vulnerability, and accountability unless the leader/CEO goes first. And a CEO that doesn’t trust his or her people will never want to create and disseminate clarity in the way Lencioni describes. 
This reminds me of something that Patrick Pichette (former CFO of Google) once said: “Your CEO is your culture” – the person at the top sets the tone for the whole organization. His point was that an important consideration when deciding whether to take a job is how you align with the CEO’s values, because their values will set the culture. I liked this book for having a clear, simple plan for improving organizational health. When I’ve been on healthy teams, I’ve seen elements of what Lencioni describes, and it makes a huge difference in making each team member more productive and effective. And yet, I have rarely seen this sort of clarity from a leadership team – I suspect that it almost seems like a waste of time to focus on these basics when there are more urgent and complex matters to address. Lencioni suggests the leadership team should be doing an offsite once a quarter to review and update the playbook, as he feels creating clarity for the organization is one of the most important and valuable responsibilities of the leadership team. I suspect he might be right, and I will be thinking how I can incorporate these ideas into the work I do. As homework for my coaching certification program, I wrote up this book as a resource for coaching, and was intrigued by how easily these practices for organizational health map to individual health. I believe that we, as individuals, are often like organizations in the sense that we are not “whole, consistent, and complete”. We say we have one set of values, but act in a way that is contrary to those values sometimes, and are in relationships that pull us in different directions. Learning to act congruently such that we are whole and consistent and say what we mean is one of the reasons that I believe people need coaching. So I think there is an opportunity here to apply the principles of organizational health from this book to the individual to improve their ability to act congruently. 
Lencioni posits that a healthy organization has a leadership team that behaves with four characteristics, and I’ll explain how I think these can apply to the individual. — Trust and vulnerability – we must trust ourselves, and be willing to be vulnerable and admit our shortcomings before we can address and improve upon them — Constructive conflict – we must consider potential outcomes without giving in to our Inner Critic that tears us down without providing constructive input — Collective accountability – we must hold ourselves accountable for our decisions, and not let part of ourselves (even an unconscious part) undermine carrying out those decisions — Focusing on results – similar to the above The six questions to Create Clarity map cleanly for individuals – I just did the exercise for myself, and it was helpful to write down my mission, values, and current priorities. Overcommunicating Clarity can apply in two ways to individuals. One is journaling daily to see how we are doing relative to the plan we outlined and the answers to the six questions. The other is overcommunicating that plan to others – for instance, I’ve started to introduce myself as a coach at social gatherings rather than as a business strategy analyst, and that is helping to reinforce a new vision for myself. Reinforcing Clarity also applies to individuals, in creating new practices and processes for ourselves that help us achieve our vision. It can also apply in ending old relationships and starting new relationships that are more in line with our newfound Clarity. Anyway, I thought it was interesting to see that these principles can apply almost directly to individuals as well as organizations.
Q: Getting the last token of a macro argument

It is easy to get the first token of a macro argument. Ignoring some irrelevant complications, you do something like this:

    \def\firsttokof#1{\first(#1)}
    \def\first(#1#2){#1}

It's equally easy to get all the remaining tokens after the first one:

    \def\nonfirsttoksof#1{\rest(#1)}
    \def\rest(#1#2){#2}

As I understand it, this is because TeX does pattern matching from left to right, taking the shortest possible match. In \first and \rest, the shortest possible match for #1 will always be a single token. But how can you get the last token of a macro argument?

A: Assuming that you need only parameterless macros, here's a way that recognizes also a trailing space or closing brace.

    \documentclass{article}
    \usepackage{expl3,l3regex}
    \def\A{abc}
    \def\B{ab{c}}
    \def\C{ab{c} }
    \def\D{ab\linebreak}
    \ExplSyntaxOn
    \cs_new_protected:Npn \velleman_grab_last:N #1
      {
        \tl_set:Nx \l_velleman_testz_tl { \token_get_replacement_spec:N #1 }
        \tl_set_eq:NN \l_velleman_testy_tl \l_velleman_testz_tl
        \regex_replace_once:nnN { \A .* (.) \Z } { \1 } \l_velleman_testz_tl
        \regex_replace_once:nnN { \A .* (.) . \Z } { \1 } \l_velleman_testy_tl
        \prg_case_str:xxn { \l_velleman_testz_tl }
          {
            { \c_rbrace_str }
              { \tl_set:Nn \l_velleman_last_tl { \c_group_end_token } }
            { \c_space_tl }
              {
                \tl_set:Nn \l_velleman_last_tl { \c_space_token }
                \velleman_test_last:N #1
              }
          }
          { \tl_set:Nx \l_velleman_last_tl { \tl_item:Vn #1 { -1 } } }
        \tl_show:N \l_velleman_last_tl
      }
    \cs_new:Npn \velleman_test_last:N #1
      {
        \str_if_eq:xxF { \l_velleman_testy_tl } { \c_rbrace_str }
          {
            \tl_set:Nx \l_velleman_testz_tl { \tl_item:Vn #1 { -1 } }
            \token_if_cs:VT \l_velleman_testz_tl
              { \tl_set:NV \l_velleman_last_tl \l_velleman_testz_tl }
          }
      }
    \cs_generate_variant:Nn \tl_item:nn {V}
    \cs_generate_variant:Nn \token_if_cs:NT {V}
    \velleman_grab_last:N \A
    \velleman_grab_last:N \B
    \velleman_grab_last:N \C
    \velleman_grab_last:N \D
    \ExplSyntaxOff

With the help of \regex_replace_once:nnN we leave in \l_velleman_test the last item in the "meaning" of the control sequence. Then we sort out the cases. In case the last item is a space, another check has to be done; we keep also the last-but-one item: if it's a brace, then the last item is surely a space; otherwise we look at whether it's a control sequence. There is still some small problem, but this should be enough for a start. For example, the macros don't work for \space, and the last test is inaccurate if the trailing space follows a control symbol.

A: With all due respect to egreg's impressive answer, I think reversing the token list and grabbing the now-first argument might be slightly easier, albeit less general:

    \documentclass{article}
    \usepackage{expl3}
    \begin{document}
    \ExplSyntaxOn
    \cs_new:Npn \my_tl_last:N #1
      {
        \tl_reverse:N #1
        \tl_head:N #1
      }
    \tl_set:Nn \l_tmpa_tl {ab{cd}ef}
    \my_tl_last:N \l_tmpa_tl
    \ExplSyntaxOff
    \end{document}

Storing the result might need a slight alteration depending on what you're looking for…

A: This answer is based on egreg's, simplified. If one wants to get the last item (brace group or single non-space, non-[begin/end]-group token) in a list of tokens, simply use \tl_item:Nn \foo { -1 }. If one wants to get the last token, the easiest way is to use the (experimental) l3regex module, as egreg noted. Here I define \velleman_get_last:nN, which expects two arguments: some tokens, and a control sequence in which to store the result. In most cases, \regex_extract_once:nnN { . \Z } { <tokens> } \result will do the trick: the regular expression means "any token (.), followed by the end (\Z) of the input". The line just below that converts from the result of \regex_extract_once:nnN (currently a sequence) to a token list. The only case that needs to be treated specially is when the last token is an explicit end-group character. This cannot be put into a macro to give the result: we test for that with \regex_match:nnTF { \cE. \Z }, where the regex means "a catcode (\c) end-group (E) token with arbitrary character code (.), followed by the end (\Z) of the input", and in that case we put \c_group_end_token, an implicit end-group token, into the token list.

    \documentclass{article}
    \usepackage{expl3,l3regex,l3str}
    \def\A{abc}
    \def\B{ab{c}}
    \def\C{ab{c} }
    \def\D{ab\linebreak}
    \ExplSyntaxOn
    %
    \seq_new:N \l_velleman_last_seq
    \tl_new:N \l_velleman_last_tl
    \cs_new_protected:Npn \velleman_get_last:nN #1#2
      {
        \regex_match:nnTF { \cE. \Z } {#1}
          { \tl_set:Nn #2 { \c_group_end_token } }
          {
            \regex_extract_once:nnN { . \Z } {#1} \l_velleman_last_seq
            \tl_set:Nx #2 { \seq_item:Nn \l_velleman_last_seq { 1 } }
          }
      }
    \cs_generate_variant:Nn \velleman_get_last:nN { V }
    %
    \cs_new_protected:Npn \test:N #1
      {
        \velleman_get_last:VN #1 \l_tmpa_tl
        \msg_term:n
          {
            Last ~ item: ~ ' \tl_item:Nn #1 { -1 } ' \\
            Last ~ token: ~ ' \tl_to_str:N \l_tmpa_tl '
          }
      }
    \test:N \A
    \test:N \B
    \test:N \C
    \test:N \D
    \ExplSyntaxOff
    \stop
In the October-December 2010 issue of this journal, Abdulrahman *et al*.\[[@ref1]\] summarized their clinical experience with some of the available ureteral metal stents. I gladly accepted to write this comment in order to clarify certain points which are confusing many, if not all, of our colleagues: In today's nomenclature, "stenting" is the use of a hollow device to create a pathway, support a structure, or open hollow organs that are partially or completely obstructed due to benign or malignant obstructive diseases. Under this description, externally communicating urethral catheters or the double-Js, which create a pathway, should also be called "stents", but they are not. The word "stent" cannot be found in a dictionary. It derives from the name of a British dentist, Charles Thomas Stent, who lived in the 19^th^ century and used metallic scaffolds for immobilizing tissues. Scaffolding tubular devices to support occluded blood vessels were introduced in the early 1980s and were named "stents", a term which became accepted in the medical vocabulary. What we call ureteral double-J stents today are in reality "intraureteral catheters" made of various polymers. The newcomer to this list is the Resonance, which is a bare-metal, non-expandable double-J that does not have a lumen. None of the double-Js are scaffolding devices, because of their small caliber. They merely create a pathway but do not create a scaffold in the ureter. For this, they would need to be large in caliber and have a large lumen. Studies have demonstrated that the lumen of most double-J stents occludes within a few weeks and urine drains around the stent. The way the Resonance drains the kidney is quite speculative. Its caliber is 6 Fr. To give it its double-J shape, it has a centrally positioned metal wire filling almost all of its luminal space. Drainage is obtained through capillary drainage around the spiral outer wall. 
Then there is the myth of conventional polyurethane stents crushing under the pressure of a tumor. Although there are many reports of conventional polyurethane stents occluding in malignancies, to the best of my knowledge no one has shown a case where such a stent was crushed under the pressure of a tumor. External malignant compression on a ureter can occlude urine drainage, but it cannot crush a polyurethane stent. The reason for the failure of polyurethane ureteral stents in malignant cases is that their lumen occludes early with debris and the peri-stent space is occluded by the compressing/strangling tumor. In two separate studies, metal-coil and metal-coil-reinforced ureteral stents were compared with conventional ureteral stents for their ability to withstand compression.\[[@ref2][@ref3]\] Unfortunately, these comparisons were done under unrealistic conditions. In these studies, the stents compared were placed and compressed between two metal surfaces. This is far from what happens in real life. No tumor is metal-hard, and the way a tumor develops by cell division cannot be simulated by approximating two metal surfaces to crush a stent. Another point of confusion is the term "chronic obstruction" describing an obstruction necessitating long-term stenting. There should be a separation between benign and malignant obstructions. There are clear differences in the occlusion mechanisms between an intrinsic pathology causing a benign obstruction, a primary or infiltrating ureteral malignancy, and the compression of an extraureteral tumor. These differences are the cause of the differences in success rates when double-J stents are used for benign and for malignant obstructions. Ureteral stenoses necessitating long-term stenting are caused by intrinsic malignant disease of the ureter, by compression or infiltration by malignancies of the abdominal organs, or by iatrogenic causes such as trauma during ureteroscopy or gynecological accidents. 
Ureteral anastomoses, ureteral reimplantation into the bladder or into bowel-made reservoirs or conduits, and ureteral ischemia during renal transplantation are additional reasons for the development of ureteral stenoses. For lack of a better alternative, and because of their affordable price, small-caliber double-J ureteral stents, developed more than 30 years ago, are currently used to restore obstructed urinary flow. They have to be changed every 3-6 months. Patients with chronic obstructions need stenting for many months, or even for years. Such patients need long-term stenting with nonoccluding, large-caliber devices. During recent years, new approaches for ureteral stenting have been tried.\[[@ref4]\] This brought the era of metal stents into urological practice. Theoretically, as in the vascular system, noncovered, large-caliber metal mesh stents (24-30 F) were expected to provide relief also in ureteral obstructions. Several attempts to use large-caliber, bare metal mesh wire vascular and biliary stents in the ureter failed. Tissue proliferation through their interstices, causing restenosis, limited their use. To prevent restenosis, drug-coated or covered vascular stents have been tried as ureteric stents. Some of these stents had large migration rates (81.2% with the externally covered Passager compared to 22.2% with the internally covered Hemobahn endoprosthesis). The implanted Passager caused a "trumpet-like" ureteral narrowing above the proximal end of the stent, indicating reactive tissue proliferation. With the Hemobahn stent, hyperplastic tissue development at the ends of the stent was reported in 27.7% of the cases.\[[@ref5][@ref6]\] The function of the ureter completely differs from that of blood vessels. Blood vessels are almost inactive tubes allowing blood to flow forward. In contrast, the ureter has variable calibers all along its length and the flow of urine is obtained by its peristaltic function.
This makes it difficult to stent the ureter the way blood vessels are stented with vascular stents. Additionally, vascular stents are permanently implanted, whereas in the ureter most stents are placed for short- or long-term use, to be removed after a period of time. Currently, three different metal ureteral devices approved for deobstructing the ureter are available: the Memokath 051, the Resonance, and the Allium URS. Only the Memokath 051 and the most recent Allium URS can be called stents, because of their large lumen. The Memokath has a nitinol-made, bare-metal, closed-coil body and a thermo-expandable, bell-shaped end for anchoring. Its caliber is 10.5 F.\[[@ref7]\] The Allium URS has a nitinol-made skeleton fully covered with a thin membrane of a strong proprietary polymer that gives it its tubular shape. It comes in two calibers, 24 and 30 F. These stents have a high-radial-force main body and softer end segments to reduce the development of obstructing reactive proliferative tissue. The stent also has a feature that allows its easy endoscopic removal. It can be inserted either antegradely or endoscopically. The Resonance coiled metal double-J has a 6 F caliber without a distinct lumen. Like any device in medicine, stents are not devoid of problems. The Memokath 051 and the Resonance have their inherent problems of reactive tissue proliferation at the ends of the stent, encrustation, and stone formation.\[[@ref8]\] The nonsuitability of the Memokath 051 in benign ureteral obstructions was reported in the past.\[[@ref9]\] There are very large outcome differences between the limited number of papers published on the Resonance stents. Modi *et al*. reported a 38% failure rate. The authors recommended "vigilant monitoring" of patients with a Resonance stent.
Comparing these results with those of Liatsikos *et al*. (100% success in malignancy patients and 56% failure in benign ureteral stricture patients at 8.5 months) is somewhat confusing.\[[@ref8][@ref10]\] The problem with these reports is that the failures are reported, but the reasons for failure are not analyzed in depth. Such reports should include more details on the pathology, the length and location of the stricture, infection, the need for pre-dilation, etc. This additional information can give us more clues to understand the reasons for the different outcomes. During the 24^th^ Engineering and Urology Meeting in Chicago (2009), Clayman's group, mentioning failures they had with the Resonance, checked the function of this stent under laboratory conditions. On the basis of this study, they reported that the Resonance may cause a clinically significant functional obstruction.\[[@ref11]\] Reports on the Allium URS are very few. It seems that its geometry allows apposition of the stent wall to the ureteral wall, allowing intraluminal flow. However, when it is positioned in the ureteral orifice, migration has been reported, in a recent presentation during the 28^th^ World Congress on Endourology held in Chicago (2010). Hopefully, the new organ-specific large-caliber ureteral stents will solve most, if not all, of the deficiencies of the current double-Js. However, their long-term efficacy will have to be proven in large clinical studies. **Editor Note:** This commentary was received as expert comments on "Clinical experience with ureteral metal stents," which was published in the October-December 2010 issue by Abdul Rahman Al Aown (Aown AA, Iason K, Panagiotis K, Liatsikos EN. Clinical experience with ureteral metal stents. Indian J Urol 2010;26:472-7.) **Source of Support:** Nil **Conflict of Interest:** None declared.
Q: Separable differential equation, answer correct? I've tried to solve the following equation the way I've been taught: $\frac{dy}{dx} = \frac{y}{2x}$ $y'\cdot\frac{1}{y} = \frac{1}{2x}$ From the form $y'g(y) = f(x)$ we assign $g(y) = \frac{1}{y}$ and $f(x) = \frac{1}{2x}$. Proceeding by rewriting the left-hand side (where $G(y)$ is a primitive function of $g(y)$): $\frac{d}{dx}G(y) = \frac{1}{2x}$ $G(y) = \ln(y)$ $\frac{d}{dx}\ln(y) = \frac{1}{2x}$ $\ln(y) = \int\frac{1}{2x}dx$ $\ln(y) = \frac{1}{2}\int\frac{1}{x}dx$ $\ln(y) = \frac{\ln(x)}{2} + C$ $\ln(y) = \ln(e^{\ln(x)/2 + C})$ $y = e^{\ln(x)/2 + C}$ Where the last line would be my answer. However, the textbook says the answer is as simple as $y^2 = Cx$. Am I doing it the wrong way? A: Hint: $y=e^{0.5\ln(x)+C}=e^{\ln(x^{0.5})+C}=x^{0.5}e^C$. Rename $e^C=\sqrt{C_1}$ and you get $y=\sqrt{C_1}\sqrt{x}$. Square this expression and you get $y^2=C_1x$. A: Your answer is correct. Squaring both sides gives: $$y^2=e^{\ln(x)+2C}\implies y^2=e^{\ln(x)}\cdot e^{2C}\implies y^2=x\cdot e^{2C}$$ Since $C$ is an arbitrary constant, you can substitute $k=e^{2C}$ to obtain your general solution $y^2=kx$.
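As a quick numerical sanity check of the closed form $y = C\sqrt{x}$ (equivalently $y^2 = C^2 x$), the sketch below compares a central-difference derivative against the right-hand side of the ODE. The helper names `y` and `residual` are invented here for illustration:

```python
import math

# Sketch: numerically confirm that y = C*sqrt(x), i.e. y**2 = C**2 * x,
# satisfies dy/dx = y/(2x).  Helper names are invented for illustration.
def y(x, C=3.0):
    return C * math.sqrt(x)

def residual(x, h=1e-6, C=3.0):
    lhs = (y(x + h, C) - y(x - h, C)) / (2 * h)  # central-difference dy/dx
    rhs = y(x, C) / (2 * x)                      # right-hand side y/(2x)
    return abs(lhs - rhs)

print(max(residual(x) for x in (0.5, 1.0, 2.0, 10.0)))  # tiny, ~rounding error
```

The residual stays near machine precision for any choice of the constant, which is exactly what the one-parameter family $y^2 = Cx$ predicts.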
Identification and analysis of crotonylation sites in histones using support vector machines. Lysine crotonylation (Kcr) is a newly discovered histone posttranslational modification, which is specifically enriched at active gene promoters and potential enhancers in mammalian cell genomes. Although lysine crotonylation sites can be correctly identified with high-resolution mass spectrometry, the experimental methods are time-consuming and expensive. Therefore, it is necessary to develop computational methods to deal with this problem. We proposed a new encoding scheme, named position weight amino acid composition, to extract sequence information around crotonylation sites in histones. We obtained protein data from the UniProt database. A series of steps was used to construct a strict and objective benchmark dataset for training and testing the proposed method. All samples were characterized by a significant number of features derived from position weight amino acid composition. A support vector machine was used to perform classification. In a series of experiments, the sensitivity (Sn), specificity (Sp), accuracy (Acc), and Matthews correlation coefficient (MCC) were 71.69%, 98.7%, 94.43%, and 0.778, respectively, in jackknife cross-validation. Comparison results demonstrated that our proposed model outperformed a random forest algorithm. We also performed feature analysis on the samples. Identification of Kcr sites in histones is an indispensable step in decoding protein function. Therefore, the method can promote a deeper understanding of the physiological roles of crotonylation and provide useful information for developing drugs to treat various diseases associated with crotonylation.
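The abstract does not spell out the position weight amino acid composition (PWAA) formula, so the sketch below is an illustration only, not the paper's method: a hypothetical position-weighted composition in the same spirit, where each residue in a window centered on the candidate lysine contributes to its amino acid's feature with a weight that decays with distance from the center. The window size, weighting scheme, and function name are all assumptions.

```python
# Hypothetical sketch, NOT the paper's exact PWAA formula: a position-weighted
# amino acid composition for a sequence window centered on a candidate lysine.
# Residues near the center weigh more, so the 20-dim vector keeps coarse
# positional information that plain composition would discard.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def pwaa_features(window):
    center = len(window) // 2
    feats = {aa: 0.0 for aa in AMINO_ACIDS}
    for i, aa in enumerate(window):
        if aa in feats:                       # skip gaps/unknown residues
            feats[aa] += 1.0 / (1 + abs(i - center))
    total = sum(feats.values()) or 1.0        # guard against empty windows
    return [feats[aa] / total for aa in AMINO_ACIDS]

vec = pwaa_features("ARNDCKEQGHI")  # toy 11-residue window, K at the center
print(len(vec))  # 20
```

Vectors of this kind would then be fed to an SVM classifier (for example scikit-learn's `sklearn.svm.SVC`) and evaluated with jackknife cross-validation, as the abstract describes.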
Multilayer optical films, i.e., films that provide desirable light transmission and/or reflection properties at least partially by an arrangement of microlayers of differing refractive index, are known and used in an ever increasing variety of applications. Multilayer optical films have been demonstrated by coextrusion of alternating polymer layers. For example, U.S. Pat. No. 3,610,724 (Rogers), U.S. Pat. No. 4,446,305 (Rogers et al.), U.S. Pat. No. 4,540,623 (Im et al.), U.S. Pat. No. 5,448,404 (Schrenk et al.), and U.S. Pat. No. 5,882,774 (Jonza et al.) each disclose multilayer optical films. In these polymeric multilayer optical films, polymer materials are used predominantly or exclusively in the makeup of the individual layers. Such films are compatible with high volume manufacturing processes, and can be made in large sheet and roll formats. An illustrative embodiment is shown in FIG. 1. In typical constructions, the film bodies comprise one or more layers of such multilayer optical films, sometimes referred to as an “optical stack”, and further protective layers on one or both sides thereof. Illustrative protective layers include, e.g., so-called “skin layers” on one or both sides comprising more robust materials, e.g., polycarbonate or polycarbonate blends, which impart desired additional mechanical, optical, or chemical properties to the construction. U.S. Pat. No. 6,368,699 (Gilbert et al.) and U.S. Pat. No. 6,737,154 (Jonza et al.) disclose illustrative examples thereof. It is also common to further include additional outer layers for protection, e.g., removable buffer layers sometimes referred to as “premask layers” which protect the film body during early handling and processing and are then removed during later manufacturing steps. Illustrative examples include polyethylene-based films and polyurethane-based films. An illustrative embodiment is shown in FIG. 2. 
Many product applications, however, require relatively small and sometimes numerous pieces of optical film. For these applications, small pieces of multilayer optical film can be obtained from a larger sheet of such film by subdividing the sheet by mechanical means, such as by cutting the sheet with a shearing device (e.g., a scissors), slitting the sheet with a blade, or cutting with other mechanical apparatus (e.g., die stamping and guillotines). However, the forces exerted on the film by the cutting mechanism can cause layer delamination in a region along the cut line or edge of the film. This is particularly true for many multilayer optical films. The resultant delamination region is often discernable by a discoloration or other optical degradation relative to intact areas of the film. Because the multilayer optical film relies on intimate contact of the individual layers to produce the desired reflection/transmission characteristics, degradation in the delamination region prevents it from providing those characteristics. In some product applications, the delamination may not be problematic or even noticeable. In other applications, particularly where it is important for substantially the entire piece of film from edge-to-edge to exhibit the desired reflection or transmission characteristics, or where the film may be subjected to mechanical stresses and/or wide temperature variations that could cause the delamination to propagate in the film over time, the delamination can be highly detrimental. U.S. Pat. No. 6,991,695 (Tait et al.) discloses a method for using laser radiation to cut or subdivide optical films using, inter alia, removable liners to support the film and cut pieces. Though laser converting of polymeric materials has been known for some time, see, e.g., U.S. Pat. No. 5,010,231 (Huizinga) and U.S. Pat. No. 6,833,528 (De Steur et al.), laser conversion of optical film bodies has not provided desired results.
In the region of the optical body near the cutting zone, i.e., the edge, heat generated during the laser converting process often results in degradation of one or more components of the optical film body that impairs desired optical performance. The heat is often observed to disrupt the desired crystalline character of some layers in the optical film, making the component layers in such regions relatively amorphous in character such that desired birefringence is not achieved. As a result, the apparent color of the body in that region is not uniform with that of other portions of the body located farther from the cut zone. Further, the polycarbonate materials commonly used as skin layers tend to yellow upon exposure to the heat encountered during laser conversion, further impairing desired optical performance of the film. There exists, therefore, a need for an improved method for subdividing multilayer optical film bodies and articles comprising such film. Preferably, the method would not produce delamination, color shifting, or yellowing at the cut lines or film edges, would cut the film cleanly without substantial debris accumulation on the film, and would be compatible with automated and/or continuous manufacturing processes.
If you talk to me more than once, chances are you will learn about my obsession with a children’s toy product, Bionicle. For a point of reference, Bionicle was a children’s toy line created by Lego at the start of the 2000’s. It featured buildable action figures and had a set of short novels to create a story to go along with the toy line – basically becoming Power Rangers, Pokemon, and/or Transformers, but with a different spin. During the first year of its run (2001), Bionicle generated over $161 Million (£100 Million) [1] in revenue for LEGO, and roughly the same amount in each of the first years of its run. This success is a major reason why Bionicle was largely responsible for the continuation of the LEGO Company and its success today. During the 90’s LEGO had been suffering losses and was running out of time and money, and Bionicle came in to save the day. Now, if Bionicle is nothing more than a children’s toy line, that raises the question: what made it so “popular,” or at least lovable, to a child who was born on the wrong side of the 90’s? I will answer that question for myself, and hopefully will instill a degree of understanding in the general audience as to why I remain loyal to a dead franchise. My love of these toys stems from my early childhood. In fact, this shares its timing with another one of my hobbies, pipecleaners. Now, like most children, I too had those few things that I really enjoyed and that were permitted by my parents. In my case, a major part of my childhood was Bionicle. I remember I used to ask my parents all the time for them, but I was never able to get ahold of many of the figures; luckily, this is where pipecleaners came in. With the fuzz-covered wires I could create my own action figures and dole out justice upon the evildoers. So, like any loyal fan, some of my first creations were pipecleaner versions of the actual Bionicle characters; later I fused the pieces and pipecleaners and also created my own characters.
You could say that Bionicle was my first fandom. One could even make the argument that because of my love for these plastic representations of bio-mechanical beings I became a writer. With Bionicle kick-starting the process, I quickly amassed several legions of pipecleaner creations, many of whom were reminiscent of the heroes themselves (the Toa). I quickly began to create stories in my mind where the Toa would fight alongside the heroes of another franchise to defeat a great evil. Those in turn evolved into stories featuring my own characters and my own worlds – simple ones, as I was merely a child then. I credit a great deal of my creativity and storytelling skill to these great, wonderful, and deep Biological Chronicle-based beings. I have many thanks to give to my first fandom. The great Bionicles of days past are no more, and we are left with nothing but memories. I give thanks to the Bionicle series for giving me a great childhood where I believed in heroes. I thank you for all the good times and stories I have had with you and the plastic figures I still own. I thank you for inspiring me in that early stage of my life to create and tell stories; without that inspiration I might not have written this piece this day. I also have a practical use for my love of Bionicle. Even my online handle, gamer tag, and partial pseudonym are a throwback to Bionicle. The first few stories in the Bionicle Universe feature a character known as the Chronicler, and I have taken that title upon myself in my work to honor my roots and give tribute to the greatness of my childhood fandom. On my mission, when I was assigned to the Northern Lights Samoan ward, I was also instructed to begin learning the Samoan language. I found that as I learned, the pronunciation of the words and the rhythm and air of the language closely mirrored those of my childhood fandom. (The names of the characters and places in Bionicle are based on the Polynesian languages.)
Who knew that after so many years, my love for the toys of my childhood would help me in the real world to partially learn and speak a new language? More than that, it helped me stay sane and keep a grasp on reality. As I had previously explained, the stress of serving a mission is intense and sometimes hard to deal with, and Bionicle helped me get through that. Even though it had originally been canceled in 2010, it was brought back in 2015 for a two-year run. 2015 and 2016 would see the release of the Second Generation Bionicles. Almost as if it was brought back for me and my mission, it saw the light of day once more. It was glorious, and I was able to enjoy building these figures to destress on my mission. In conclusion, to this day, I still love Bionicle. I read the stories, watch the movies, play the games, and build the figures. I still create stories featuring characters that I have created and characters that are already part of the series. Truly, I would say that Bionicle is my first fandom, and it will always have a special place in my heart, even if I have no one to share it with. I will always remember my childhood because of Bionicle. I will always remember the tale of the Bionicle: find your unity, perform your duty, and claim your destiny. I would love to see them revived for another go someday, but I understand that all things must end in time. The Mask of Time, the first bit of lore created in the Bionicle Universe. End in time? More like begin in time. Image credit - Biosector01 - Free Use References - 1. Telegraph - LEGO - 2010
WASHINGTON (Reuters) - At a hearing last fall, U.S. Treasury Secretary Timothy Geithner told lawmakers that he and his team were working to put the $700 billion financial bailout fund “out of its misery.” But some in Washington now see a second, backdoor bailout in its place. On December 24, the Obama administration announced it was extending an unlimited credit line to mortgage finance agencies Fannie Mae FNM.N and Freddie Mac FRE.N, which would keep them afloat no matter how high their losses. Representative Dennis Kucinich, an Ohio Democrat who was an early opponent of Obama in the 2008 presidential race, thinks the move is a backdoor way to help banks, and a congressional subcommittee he leads is investigating the Treasury’s decision to cover unlimited losses at the housing finance companies. “This new authority must be used responsibly and for the benefit of American families,” Kucinich said. It “cannot be used simply to purchase toxic assets at inflated prices, thus transferring the losses to the U.S. taxpayers and acting as a backdoor TARP.” That’s exactly what Treasury is doing, says Dean Baker, co-director of the Center for Economic Policy Research in Washington. “This looks like the original TARP,” Baker said, referring to the $700 billion financial rescue fund, known officially as the Troubled Asset Relief Program. The original bailout program, devised by former Treasury Secretary Henry Paulson, “was a plan to help the banks restore their capital position by buying bad assets at and above market price, and that looks like what Fannie and Freddie will be doing if they are incurring losses of this magnitude.” The Treasury’s announcement said the unlimited credit line for the two government-controlled companies would be in place through the end of 2012, weeks after Obama would face voters if he seeks a second term. The Treasury also said it was scrapping plans for the two agencies, which play a role in funding three-fourths of all U.S.
residential mortgages, to reduce the size of their investment portfolios. The 2010 limits on their portfolios, in fact, would allow their investment holdings to grow. CALL TO ARMS Kucinich is not the only one on Capitol Hill up in arms. House Energy and Commerce Committee Chairman Henry Waxman, a California Democrat, said he doesn’t like the idea of a “blank check” for Fannie and Freddie. And Darrell Issa, the top Republican on the House Oversight and Government Reform Committee, called it “a continuation of the bailout policies that have mortgaged away the future solvency of our country.” As the financial crisis unfolded in 2008, Paulson announced that the mortgage agencies would get an explicit guarantee from the federal government: $100 billion each. The Obama administration doubled that in early 2009 to $200 billion each. Combined, Fannie and Freddie have so far tapped about $111 billion. Until the 2008 announcement, investors had seen their congressional charter and existing line of credit with Treasury as an implicit guarantee of support from the federal government. The Obama administration now hopes its new, unlimited and even more explicit guarantee will bolster investor confidence and bring private sector buyers back into the market to help hold down mortgage costs. But mortgage rates are expected to rise in the coming months as the Federal Reserve ends its $1.25 trillion program to purchase mortgage-backed securities at the end of this quarter. Freddie Mac sees the average rate on a 30-year, fixed-rate mortgage rising to 6 percent by the end of the year. Higher mortgage rates could smother any emerging rebound in the still-fragile U.S. housing market, and a further decline in home prices could likely create more foreclosures. That would leave many banks with even weaker portfolios. Banks could dump their toxic mortgage assets onto Fannie Mae and Freddie Mac, since their portfolio loan limits are now capped at $810 billion by the end of this year. 
They had earlier been set to be 10 percent lower than 2009 year-end levels. Fannie said on December 28 its mortgage investments ended November at $752.2 billion, down 4.9 percent from the end of 2008. Freddie Mac’s mortgage investment portfolio shrank in November to $761.8 billion, a 5.8 percent decline for the first 11 months of the year, the company said December 23. December figures are not yet available. Treasury officials have said their move to allow unlimited losses for three years is merely precautionary. But the Center for Economic Policy Research’s Baker said, “you only take precaution against conceivable events in the world.” He adds that it is hard to imagine that Fannie and Freddie would have losses of more than $400 billion from mortgages originated before September 2008, suggesting the agencies are incurring losses on what they have bought since 2008, “which should raise a lot of eyebrows.” The administration said it plans to lay out a vision for the future of the two agencies in the president’s fiscal 2011 budget proposal in February. But officials caution against expecting too much detail.
Hong Kong. Indonesian diversified conglomerate Lippo Group has invested Rp 628 billion ($45 million), or 350 million Hong Kong dollars, in Chinese internet giant Tencent, the world's eighth most valuable listed company, owner of leading internet services including WeChat and an investor in technology companies such as Snap, Spotify and Tesla. The investment highlights Lippo's continued digital transformation and investment in the fourth industrial revolution, the group said in a statement on Monday (25/06). Lippo, a pan-Asian group with strategic investments and operations across eight markets globally, is the largest integrated services group in Indonesia, serving more than 60 million unique customers across its real estate, malls, department stores, hospitals, telecommunications, media and financial services businesses. Lippo’s 350 million Hong Kong dollar investment in Tencent consisted of new Tencent shares and equity-linked notes (ELNs). The investment was made by Lippo's Hong Kong investment subsidiary. Tencent's market capitalization passed $500 billion last November, making it the first listed Chinese firm to do so, and it briefly overtook Facebook as the world’s fifth biggest firm. Co-founder Ma Huateng, nicknamed Pony Ma, is the 17th richest person in the world, with a fortune of $45.3 billion — four places behind Google co-founder Sergey Brin, according to the latest Forbes rich list. Tencent's core business is built on the messaging app WeChat, some of the world’s largest mobile gaming franchises, and an ecosystem of services (for its 1 billion users) usually offered by Silicon Valley firms that have no foothold in China. Tencent Video, a streaming service much like Netflix, is the biggest of its kind in China and carries exclusive content including the HBO series “Game of Thrones”. The service more than doubled in size in 2017, drawing more than 40 million paying subscribers. In Indonesia, Lippo’s nine unique sectors are all pushing to digitalize.
In addition, it has established a digital investment group to lay strong foundations for the fourth industrial revolution. This includes Venturra Capital, which has invested in 24 start-ups in the last two years; mataharimall.com; and OVO, Indonesia’s leading payment and marketing platform.
; Script generated by the Inno Setup Script Wizard.
; SEE THE DOCUMENTATION FOR DETAILS ON CREATING INNO SETUP SCRIPT FILES!

[Setup]
; compiler-related directives
OutputBaseFilename=audacity-win-1.2.6
SolidCompression=yes

; installer-related directives
AppName=Audacity
AppVerName=Audacity 1.2.6
AppPublisherURL=http://audacity.sourceforge.net
AppSupportURL=http://audacity.sourceforge.net
AppUpdatesURL=http://audacity.sourceforge.net
ChangesAssociations=yes
DefaultDirName={pf}\Audacity
; Always warn if dir exists, because we'll overwrite previous Audacity.
DirExistsWarning=yes
DisableProgramGroupPage=yes
UninstallDisplayIcon="{app}\audacity.exe"
LicenseFile=..\LICENSE.txt
InfoBeforeFile=..\README.txt
; min versions: Win95, NT 4.0
MinVersion=4.0,4.0

[Tasks]
Name: desktopicon; Description: "Create a &desktop icon"; GroupDescription: "Additional icons:"; MinVersion: 4,4
Name: associate_aup; Description: "&Associate Audacity project files"; GroupDescription: "Other tasks:"; Flags: checkedonce; MinVersion: 4,4

[Files]
Source: "..\win\Release\audacity.exe"; DestDir: "{app}"; Flags: ignoreversion
Source: "..\win\Release\audacity-1.2-help.htb"; DestDir: "{app}"; Flags: ignoreversion
Source: "..\win\Release\Languages\ar\*.*"; DestDir: "{app}\Languages\ar"; Flags: ignoreversion
Source: "..\win\Release\Languages\bg\*.*"; DestDir: "{app}\Languages\bg"; Flags: ignoreversion
Source: "..\win\Release\Languages\ca\*.*"; DestDir: "{app}\Languages\ca"; Flags: ignoreversion
Source: "..\win\Release\Languages\cs\*.*"; DestDir: "{app}\Languages\cs"; Flags: ignoreversion
Source: "..\win\Release\Languages\da\*.*"; DestDir: "{app}\Languages\da"; Flags: ignoreversion
Source: "..\win\Release\Languages\de\*.*"; DestDir: "{app}\Languages\de"; Flags: ignoreversion
Source: "..\win\Release\Languages\el\*.*"; DestDir: "{app}\Languages\el"; Flags: ignoreversion
Source: "..\win\Release\Languages\es\*.*"; DestDir: "{app}\Languages\es"; Flags: ignoreversion
Source: "..\win\Release\Languages\eu\*.*"; DestDir: "{app}\Languages\eu"; Flags: ignoreversion
Source: "..\win\Release\Languages\fi\*.*"; DestDir: "{app}\Languages\fi"; Flags: ignoreversion
Source: "..\win\Release\Languages\fr\*.*"; DestDir: "{app}\Languages\fr"; Flags: ignoreversion
Source: "..\win\Release\Languages\ga\*.*"; DestDir: "{app}\Languages\ga"; Flags: ignoreversion
Source: "..\win\Release\Languages\hu\*.*"; DestDir: "{app}\Languages\hu"; Flags: ignoreversion
Source: "..\win\Release\Languages\it\*.*"; DestDir: "{app}\Languages\it"; Flags: ignoreversion
Source: "..\win\Release\Languages\ja\*.*"; DestDir: "{app}\Languages\ja"; Flags: ignoreversion
Source: "..\win\Release\Languages\lt\*.*"; DestDir: "{app}\Languages\lt"; Flags: ignoreversion
Source: "..\win\Release\Languages\mk\*.*"; DestDir: "{app}\Languages\mk"; Flags: ignoreversion
Source: "..\win\Release\Languages\nb\*.*"; DestDir: "{app}\Languages\nb"; Flags: ignoreversion
Source: "..\win\Release\Languages\nl\*.*"; DestDir: "{app}\Languages\nl"; Flags: ignoreversion
Source: "..\win\Release\Languages\pl\*.*"; DestDir: "{app}\Languages\pl"; Flags: ignoreversion
Source: "..\win\Release\Languages\pt\*.*"; DestDir: "{app}\Languages\pt"; Flags: ignoreversion
Source: "..\win\Release\Languages\ru\*.*"; DestDir: "{app}\Languages\ru"; Flags: ignoreversion
Source: "..\win\Release\Languages\sl\*.*"; DestDir: "{app}\Languages\sl"; Flags: ignoreversion
Source: "..\win\Release\Languages\sv\*.*"; DestDir: "{app}\Languages\sv"; Flags: ignoreversion
Source: "..\win\Release\Languages\uk\*.*"; DestDir: "{app}\Languages\uk"; Flags: ignoreversion
Source: "..\win\Release\Languages\zh\*.*"; DestDir: "{app}\Languages\zh"; Flags: ignoreversion
Source: "..\win\Release\Languages\zh_TW\*.*"; DestDir: "{app}\Languages\zh_TW"; Flags: ignoreversion
Source: "..\README.txt"; DestDir: "{app}"; Flags: ignoreversion
Source: "..\LICENSE.txt"; DestDir: "{app}"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\bug.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\dspprims.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\evalenv.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\follow.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\init.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\misc.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\nyinit.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\nyqmisc.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\nyquist.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\printrec.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\profile.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\seq.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\seqfnint.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\seqmidi.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\sndfnint.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\system.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\test.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
Source: "..\win\Release\Nyquist\xlinit.lsp"; DestDir: "{app}\Nyquist"; Flags: ignoreversion
; doesn't work: Source: "..\win\Release\Plug-Ins\analyze.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\beat.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\clicktrack.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\crossfadein.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\crossfadeout.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\delay.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
; redundant: Source: "..\win\Release\Plug-Ins\fadein.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
; redundant: Source: "..\win\Release\Plug-Ins\fadeout.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\GVerb.dll"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\Hard Limiter.dll"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\highpass.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\lowpass.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\pluck.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\sc4.dll"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\SilenceMarker.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
Source: "..\win\Release\Plug-Ins\tremolo.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion
; redundant: Source: "..\win\Release\Plug-Ins\undcbias.ny"; DestDir: "{app}\Plug-Ins"; Flags: ignoreversion

[Icons]
Name: "{commonprograms}\Audacity"; Filename: "{app}\audacity.exe"
Name: "{userdesktop}\Audacity"; Filename: "{app}\audacity.exe"; MinVersion: 4,4; Tasks: desktopicon

[InstallDelete]
; Get rid of Audacity 1.0.0 stuff that's no longer used.
Type: files; Name: "{app}\audacity-help.htb"
; Don't think we want to do this because user may have stored their own.
; Type: filesandordirs; Name: "{app}\vst"
; We've switched from a folder in the start menu to just the Audacity.exe at the top level.
; Get rid of 1.0.0 folder and its icons.
Type: files; Name: "{commonprograms}\Audacity\audacity.exe"
Type: files; Name: "{commonprograms}\Audacity\unins000.exe"
Type: dirifempty; Name: "{commonprograms}\Audacity"

[Registry]
Root: HKCR; Subkey: ".AUP"; ValueType: string; ValueData: "Audacity.Project"; Flags: createvalueifdoesntexist uninsdeletekey; Tasks: associate_aup
Root: HKCR; Subkey: "Audacity.Project"; ValueType: string; ValueData: "Audacity Project File"; Flags: createvalueifdoesntexist uninsdeletekey; Tasks: associate_aup
Root: HKCR; Subkey: "Audacity.Project\shell"; ValueType: string; ValueData: ""; Flags: createvalueifdoesntexist uninsdeletekey; Tasks: associate_aup
Root: HKCR; Subkey: "Audacity.Project\shell\open"; Flags: createvalueifdoesntexist uninsdeletekey; Tasks: associate_aup
Root: HKCR; Subkey: "Audacity.Project\shell\open\command"; ValueType: string; ValueData: """{app}\audacity.exe"" ""%1"""; Flags: createvalueifdoesntexist uninsdeletekey; Tasks: associate_aup

[Run]
Filename: "{app}\audacity.exe"; Description: "Launch Audacity"; Flags: nowait postinstall skipifsilent
TAXI DRIVERS FUMING AS CABS FACE SCRAPHEAP OVER EMISSIONS TARGET NEARLY half of Edinburgh’s taxis are to be forced from the road in a massive emissions cull, the Evening News can reveal. All black cabs older than ten years will need to be replaced by 2020 under a new Edinburgh City Council policy, accounting for 616 vehicles of a fleet of 1316. Drivers have slammed the council plans to meet EU guidelines as rushed and poorly researched, with many likely to have to quit the trade as it faces a £20 million upgrade bill. “We want to be part of the solution on air quality and we should be getting support from the council,” Edinburgh Taxi Association chairman Mark McNally said. “But we feel that we’ve been singled out as a group and the benefits are insignificant. We feel let down in being asked to jump through hoops. It’s unacceptable.” With drivers facing forking out up to £62,000 for a top-of-the-range new taxi, representatives fear for their futures. “Some of these guys are 55 or 60-plus and they won’t be given finance to renew these vehicles,” Mr McNally said. He maintained the taxi trade was committed to helping improve air quality in the city, but that measures have been adopted way ahead of other initiatives, including low emission zones. “We find ourselves singled out with no evidence to show what impact this will have on improving air quality,” he said. Although many drivers still use cars more than ten years old, Mr McNally assured these were still viable vehicles. “Some of these guys keep older vehicles on the road relatively cheaply, but they have to pass the same tests as a brand new car,” he said. “If it doesn’t pass, then it’s taken off the road.” The Edinburgh Taxi Association polled its 500 members and four in five said they would find it difficult to continue in the trade in light of the new fleet requirements. New age restrictions on taxis come into effect in April next year, though drivers get a year’s leeway if their car’s licence expires in 2020. 
Patrick Gallagher, 44, from Moredun, has been a cabbie in Edinburgh for a decade and drives a 12-year-old model. “I found out in March it had to be off the road next April – 13 months’ notice, whereas in London they got five years’ notice,” Mr Gallagher said. “There are guys in their 50s and 60s taking part-time work because they can’t afford 50, 60 or 70 grand on a new taxi. After April, they might not have jobs and who’s going to employ them at that age? Council-owned Lothian Buses got £800m from the government [to meet emissions targets]. Taxi drivers get no help.” A council spokeswoman said drivers had been consulted since June 2016, leading to tweaked plans, including extending the age limit for cabs from five years to ten. Proposals were drawn up to bring the Capital in line with most other UK cities. “The council is responding to growing public concern about the impact of air pollution on their health by introducing a range of measures to ensure people can breathe clean air in the city,” the spokeswoman said.
[Laparoscopic Treatment of Splenic Artery Aneurysm]. Visceral artery aneurysms are a rare but dangerous vascular pathology. The branches of the coeliac trunk are most frequently affected, especially the splenic artery. A visceral aneurysm is usually diagnosed only when a bleeding complication occurs due to rupture. It is therefore recommended to treat this pathology at an early stage after diagnosis. Endovascular elimination is the preferred procedure. However, if endovascular elimination is not suitable, the aneurysm can be successfully treated by minimally invasive surgery. Splenic artery aneurysms located at the splenic hilum are considered to carry a high risk of splenic ischemia and secondary complications following endovascular coiling. In the case of complex vascular pathologies unsuitable for an endovascular approach, laparoscopic treatment of splenic artery aneurysm is a safe and effective minimally invasive alternative.
Zim migrants leader met VP Mohadi over documented nationals in SA A leader of the Zimbabwe Migrants in South Africa, Ngqabutho Nicholas Mabhena, has met Vice President Kembo Mohadi to discuss the documentation of Zimbabwean citizens in that country. "I had occasion to meet with the Vice President Hon Mohadi in Harare to discuss documentation of Zimbabweans in South Africa in the following categories: the 2010 amnesty for those who had fraudulently acquired authentic South African documents; and the South African White Paper on International Migration, which provides for a SADC Work Visa, SADC Business Visa and SADC Cross-Border Trade Visa - according to the new policy, there must be a bilateral agreement between the two governments to issue the said visas to Zimbabweans. This seeks to assist undocumented Zimbabweans who missed out on special permits in 2010," he said. "There are also Zimbabweans in South Africa who are not documented as Zimbabweans (who do not have Zim birth certificates, a Zim ID or passport). The request is to have the Zimbabwean office of the Registrar General set up a registration centre in South Africa to assist those affected." While declining to say much on the outcome of the meetings, as internal consultations continue within the Zimbabwean government and with the South African government, he said the talks are going well.
Animal-rights Expert Endorses Kosher Plant A leading expert on the humane treatment of animals is giving a stamp of approval to the nation’s largest kosher slaughterhouse after previously criticizing the plant. Temple Grandin, an animal science professor, issued her endorsement last week, after visiting the AgriProcessors slaughterhouse in Postville, Iowa. The plant, owned and run by Chabad-Lubavitch Hasidim, has been the subject of criticism since 2004, when an animal rights group, People for the Ethical Treatment of Animals, released video footage from inside the slaughterhouse that showed cows going to a loud and violent death. When Grandin initially viewed the video, she said it was the “most disgusting thing I’d ever seen.” After her June 27 visit to Postville, Grandin stood by her original statements but said that AgriProcessors appears to have improved its slaughter process. “What I saw there today was working very well,” she told the Forward after her full-day visit, for which she was paid as a consultant by AgriProcessors.
The Best Pizza Spots in Atlanta
Max's Coal Oven Pizza | Andrew Thomas Lee Photography

Atlanta’s been killing the pizza game for a while now. It still may not get the top billing of certain other cities that will go unmentioned, but if you live here, chances are you don’t feel you need to go further than the Old Fourth Ward to get slices and pies in various sizes and shapes that compete with the best pizzas in the country, if not the world. These are the very best in Atlanta.

MTH Pizza
Smyrna
Brick-oven pizza with new takes on the classics
Todd Mussman, Ryan Turner, and Chris Hall have proven with restaurants like Muss & Turner’s and Local 3 that they know how to do food. So it shouldn’t surprise anyone that their new brick-oven pizzeria is rising like its high-protein, 72-hour-cold-fermented dough, which makes the light and crisp crust upon which supercharged flavors are delivered. Don’t let it bother you that they don’t do slices (16-inchers only) or substitutions (you can take stuff off your pie but can’t add anything). They have solid, stepped-up takes on the classics, including a roasted ‘shroom, pecorino, and ricotta funghi with crisped thyme, or the “My Tenderoni (P.Y.T.),” which is basically a pepperoni pie, but with the meat cut fresh from a slab. There are also chef-driven pies like “Jimmy Two Times” with fennel sausage and fennel salami, and “The Hell Boy,” which brings spice from Calabrian chilies and spreadable spicy salumi.

Slim & Husky’s
West Midtown
Artisan pies, craft beer, and hip-hop culture
The Nashville-bred “Pizza Beeria” takes its artisan pies as seriously as it takes craft beer and hip-hop culture, all of which you’ll see tastefully curated at the Howell Mill location, and its soon-coming second setup near the Atlanta University Center. The three main partners are all HBCU grads, and are particularly interested in giving back to gentrifying African-American communities through jobs and student scholarships, but all of that is only possible because they make good money making really good pizzas, proven recently by winning a televised “Best Cheese Pizza” contest on Good Morning America. Adding to all that goodness is how they name their ovular flatbreads, like the all-meat “Cee No Green” and the spicy margherita “Red Light Special.”

Amalfi Pizza
Downtown
True Neapolitan pizza, with the certification to prove it
It’s one thing to serve Neapolitan pizza; it’s another to be included in the Associazione Verace Pizza Napoletana, which identifies and promotes “true Neapolitan pizza (verace pizza napoletana), i.e., the typical product made in accordance with the characteristics described in the AVPN International Regulations.” Amalfi is the first to receive the recognition in Georgia, and when you take your first bite, you’ll understand how they earned it (operating partners Stephen de Haan and Greg Grant studied at two highly respected pizzerias in Italy under pizzaiolo maestros). The pies are made with ingredients from Italy’s Campania region, and baked at 900+ degrees in one of their 6,000-pound Italian brick ovens, so they’re done in one minute flat.

Nina & Rafi
Old Fourth Ward
Detroit-style pizza from the folks behind O4W
O4W Pizza is incredible, but it’s in Duluth. Unless you live in Duluth, you’re probably more likely to visit this more central intown location from O4W pizzaiolo Anthony Spina and partner Billy Streck (of Cypress Street Pint & Plate, Hampton + Hudson, etc.). You can get the “Super Margherita” in round or thick square Detroit format -- and both are good, but it's crazy to not get the Motor City versions, particularly if you’re a meathead. The “Toni Pepperoni” is outstanding, but the Detroit-style “I Love Pepperoni” takes it to a whole new level with its deeper dish and crispy-cheesed crust.

Jack’s Pizza & Wings
Old Fourth Ward
A dingy neighborhood classic for humongous slices
Don’t let that terrible photo on their website fool you; this dingy, graffiti- and sticker-splashed O4W pizza den has outlasted lots of neighbors because they do a great job at their namesake. You can get whole pies, but most folks do fine with the huge, unevenly cut slices. These slices are not beautiful, but they are wonderful in their way. Also, your pizza won’t be fast, and you should park somewhere very, very visible. But these are all minor inconveniences when you get inside and find community, cheap beer and pizza. The classics here are the jalapeno, bacon, spinach, green pepper, and sliced tomato-topped “Hangover,” or the ridiculous-but-still-delicious “Soul Food,” with fried chicken, collards, mashed potatoes, and gravy instead of marinara.

Fellini’s
Various locations
An affordable, dependable Atlanta classic
Not including Fellini’s in a list of Atlanta’s best places for pizza amounts to a diss, and dissing Fellini’s is damn-near like dissing OutKast. Sure, it may not be the absolute best, but Fellini’s slices are still sizeable and super-affordable -- you’ll pay more for that overrated slice at your local mall food court, and feel less happy when you’re done. Whether you’re out on the patio at Howell Mill, Ponce de Leon, South Buckhead on Peachtree, Roswell Road, or wherever, you can’t deny that the balance of properly baked crust and pie, with fail-proof red sauce and a modest but sturdy selection of toppings, makes for an authentic lunch or dinner that’s low-brow enough to remind you that Old Atlanta is still very much alive, and loves you. Never hate on Fellini’s.

Ammazza
Old Fourth Ward, Decatur
New York-meets-Naples pizza and an extensive craft beer selection
The Edgewood location survived two head-on collisions (we'll never understand how two cars hit a very noticeable brick building on different occasions in the same place). It took a while to reopen, and more than a few places came around in the time it took to get back to the business of making pizza. But Ammazza is back, and opened a new Decatur location, and didn’t lose a single step in saucing, cheesing, and baking their NY-meets-Naples style of pie. You can tell that’s still The Spotted Trotter’s meats, you can taste the balance of Caputo 00 flour, water, yeast, and sea salt that makes the dough that becomes perfect crust once heated in those beautifully tiled 900-degree ovens on the other side of the kitchen’s looking glass, and you can still match your desired pizza with a great selection of craft beer, whether you prefer Georgia or regional, or even something from as far as Sweden, Switzerland, or Quebec. Get here immediately and have the red-sauced Amarena (black cherry sausage, peppadew peppers, caramelized onions), the white-sauced Terra (sauteed wild mushrooms, goat cheese, truffle oil), or even the green Springer Mountain chicken pesto. Just take your time parking, please.

Sebastian Davis/Thrillist
Argosy
East Atlanta
Great wood-fired pizza in a veritable nerd heaven
Not just a great beer bar, an analog arcade or comic book bruncher’s dream palace, EAV’s nautical-themed watering hole is home to way more than wooden sea creatures and owl murals. The wood-fired oven at the back of the joint slings tantalizing pies like the roasted garlic, asiago cream sauce-based Butternut Pie with black truffle honey pancetta; or the pepperoni/grass-fed beef/house-made sausage Mutiny on the Bounty, which according to the Beastie Boys is "what it's all about."

Sapori di Napoli
Decatur
Neapolitan-style pies from two brothers who grew up on the Amalfi Coast
Two brothers from Naples, who grew up spending weekends in a small country town on the Amalfi Coast named Agerola, moved to Decatur and started making the bossest pies in the suburb. That’s really about it. If you like Italian meats, San Marzano tomatoes and all that other Neapolitan tradition, you’ll be just fine here, especially if you order the Carnosa, which also includes bufala mozzarella, Italian sausage, spicy sopressata salame, and cotto ham.

Avellino's
Brookhaven
A pizza so good you'll visit Brookhaven
Take a great neighborhood pizza pub that started in Decatur, combine it with people who have impressive sauce skills, and you have the best place in Brookhaven for your pizza fix. They seem to specialize in spicy pizzas, but the bravest among you will try the legendary Alla Diavola, or “deviled” pie. They require you sign a waiver for that one.

Sebastian Davis/Thrillist
Antico
Home Park/Westside
The pizza that beat the crowd to the Neapolitan boom
Little Italy is now a thing just outside of Georgia Tech’s campus, and it began with Antico. Though there have been lots of Neapolitan places that have sprung up since its success, Antico still has a loyal following and multiple locations now, including Avalon, The Battery at SunTrust Park, and even Miami. They still won’t let you build your own pizza or make any mods, but the sausage and sweet red peppers are still perfectly deserving of the line that forms out the door every day.

Fritti
Inman Park
Reasonably sized hand-tossed happiness that's been around but still feels brand new
Perfectly puffed crusts lie just beneath sauce and cheese (and pesto, and salami, and beef, and ricotta, etc.). Seriously, with approximately 30 house recipes, you’ll have to take your time getting through all of these.

Varuni Napoli
Krog Street Market
True Neapolitan pie makes KSM's wait well worth it
It didn’t take long before folks noticed the lack of pizza at Krog, but things were made right with the introduction of Luca Varuni’s beloved recipe of all-Naples-sourced ingredients. With a stretched 800-square-foot counter stand that lets you see the action going in and out of the tiled ovens, Varuni Napoli offers a faster version of its heavily praised Morningside location’s pies, letting you choose toppings to build your own from a margherita or bianca base; or order a few standards from the menu like the buffalo mozzarella Nonna Mia or the pepperoni and pork sausage Bastardo. It’s also hard to not love a pizza you can order with a tapped Negroni Reserva cocktail.

Heidi Geldhauser
Colletta
Alpharetta
Show-stealing pizza in a flossy Italian restaurant
That wood-fired pizza oven puts out one of the tastiest pies in the metro area -- one that’s actually worth the drive to the Avalon shopping center off 400 North. The crisped crust makes a fantastic margherita or even prosciutto pizza, which comes with ricotta, pickled baby bell peppers, fontina, and garlic, and you’ll enjoy it enough that you might not try their upstanding duck ragu pappardelle, lump crab and shrimp farfalle, or other pasta dishes.

Max's Coal Oven Pizza
Downtown
Consistently good crustiness from hot coals
Those who remember Max’s will tell you it’s worth braving the traffic around Marietta Street in order to get a boxful of these quick-cooking pies. There are 18 topping choices (capicola, crimini mushrooms, etc.) and 10 steady menu options, ranging from the four-meat salumi to the arugula and prosciutto pie, which is dressed with lemon pepper arugula. They keep things quality by using what they say is “the only genuine coal-burning oven in Georgia.”

Savage Pizza
Little Five Points
The OG of L5P that makes one of ATL’s greatest pies
It’s hard to say exactly what makes Savage so good, because it predates most of the pizza spots that are doing the Neapolitan thing. It’s just always reliable, simply delicious, still the same ol’ pizza that has an honesty you don’t need to double-check. You can still get a 9-inch small with four slices, a six-slice medium 12-inch, or go large on 16 inches and eight slices. They don’t kill you with cheese, they don’t slop you with sauce, and they don’t collapse with the crust. This is how you quietly survive almost 30 years of whatever “pizza wars” you’ve been hearing about in ATL.
Device-to-device communication is a well-known and widely used component of many existing wireless technologies, including ad hoc and cellular networks. Examples include Bluetooth and several variants of the IEEE 802.11 standards suite, such as WiFi Direct. These example systems operate in unlicensed spectrum. Recently, the use of device-to-device (D2D) communications as an underlay to cellular networks has been proposed as a means to take advantage of the proximity of wireless devices operating within the network, while also allowing devices to operate in a controlled interference environment. In one suggested approach, D2D communications share the same spectrum as the cellular system, for example by reserving some of the cellular uplink resources for D2D communications use. However, dynamic sharing of the cellular spectrum between cellular services and D2D communications is a more likely alternative than dedicated reservation, because cellular spectrum resources are inherently scarce and because dynamic allocation provides greater network flexibility and higher spectrum efficiency. The Third Generation Partnership Project (3GPP) refers to Network Controlled D2D as “Proximity Services” or ProSe, and efforts aimed at integrating D2D functionality into the Long Term Evolution (LTE) specifications are underway. The ProSe Study Item (SI) recommends supporting D2D operation between wireless devices—referred to as user equipments or UEs by the 3GPP—that are out of network coverage, and between in-coverage and out-of-coverage wireless devices. In such cases, certain UEs may regularly transmit synchronization signals to provide local synchronization to neighboring wireless devices. The ProSe SI also recommends supporting inter-cell D2D scenarios, where UEs camping on possibly unsynchronized cells are able to synchronize to each other.
Still further, the ProSe SI recommends that in the LTE context, D2D-capable UEs will use uplink (UL) spectrum for D2D communications in the case of Frequency Division Duplex (FDD) cellular spectrum, and will use UL subframes in the case of Time Division Duplex (TDD) cellular spectrum. Consequently, the D2D-capable UE is not expected to transmit D2D synchronization signals—denoted as D2DSS—in the downlink (DL) portion of the cellular spectrum. That restriction contrasts with network radio nodes or base stations, referred to as eNodeBs or eNBs in the 3GPP LTE context, which periodically transmit Primary Synchronization Signals, PSS, and Secondary Synchronization Signals, SSS, on the downlink. The PSS/SSS enable UEs to perform cell search operations and to acquire initial synchronization with the cellular network. The PSS/SSS are generated based on pre-defined sequences with good correlation properties, in order to limit inter-cell interference, minimize cell identification errors and obtain reliable synchronization. In total, 504 combinations of PSS/SSS sequences are defined in LTE and are mapped to as many cell IDs. UEs that successfully detect and identify a sync signal are thus able to identify the corresponding cell-ID, too. To better appreciate the PSS/SSS configurations used by eNBs on the DL in LTE networks, FIG. 1 illustrates time positions for PSS and SSS in the case of FDD and TDD spectrums. FIG. 2 illustrates PSS generation and the resulting signal structure, and FIG. 3 illustrates SSS generation and the resulting signal structure. FIG. 2 particularly highlights the formation of PSS using Zadoff-Chu sequences. These codes have zero cyclic autocorrelation at all nonzero lags. Therefore, when a Zadoff-Chu sequence is used as a synchronization code, the greatest correlation is seen at zero lag—i.e., when the ideal sequence and the received sequence are synchronized.
In LTE, the PSS as transmitted by an eNB on the downlink is mapped into the first 31 subcarriers on either side of the DC subcarrier, meaning that the PSS uses six resource blocks, with five reserved subcarriers on each side. Effectively, the PSS is mapped onto the middle 62 subcarriers of the OFDM resource grid at given symbol times, where “OFDM” denotes Orthogonal Frequency Division Multiplexing, in which an overall OFDM signal comprises a plurality of individual subcarriers spaced apart in frequency and where each subcarrier at each OFDM symbol time constitutes one resource element. As FIG. 3 illustrates, the SSS are generated not using Zadoff-Chu sequences, but rather using M sequences, which are pseudorandom binary sequences generated by cycling through each possible state of a shift register. The shift register length defines the sequence length. SSS generation in LTE currently relies on M-sequences of length 31. With the above in mind, the following equation defines the physical cell identifier of a given cell in an LTE network:

N_ID^cell = 3 * N_ID^(1) + N_ID^(2),

where N_ID^(1) is the physical layer cell identity group (0 to 167), and N_ID^(2) is the identity within the group (0 to 2). As noted, this arrangement defines a cell identifier space of 504 values. The PSS is linked to the identity within the group, N_ID^(2), while the SSS is linked to both the cell identity group N_ID^(1) and the identity within the group N_ID^(2). In particular, the PSS is a Zadoff-Chu sequence of complex symbols having length 62. There are three root sequences, indexed by the identity within the group N_ID^(2). As for the SSS, two length-31 sequences are scrambled as a function of the cell identity group N_ID^(1) and the identity within the group N_ID^(2).
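The Zadoff-Chu correlation property and the cell-ID arithmetic described above can be sketched in a few lines of Python. This is an illustrative sketch, not the exact 3GPP mapping: the real LTE PSS is a length-62 sequence built from roots 25, 29 and 34 with a punctured centre element, whereas here a plain odd-length-63 sequence is used to demonstrate the zero cyclic autocorrelation at nonzero lags.

```python
import cmath
from math import gcd

def zadoff_chu(u, N=63):
    # Odd-length Zadoff-Chu sequence with root u (gcd(u, N) must be 1):
    # x(n) = exp(-j * pi * u * n * (n + 1) / N)
    assert gcd(u, N) == 1
    return [cmath.exp(-1j * cmath.pi * u * n * (n + 1) / N) for n in range(N)]

def cyclic_autocorr(seq, lag):
    # Periodic (cyclic) autocorrelation at the given lag
    N = len(seq)
    return sum(x * seq[(n + lag) % N].conjugate() for n, x in enumerate(seq))

zc = zadoff_chu(25)                 # 25 is one of the three LTE PSS roots
peak = abs(cyclic_autocorr(zc, 0))  # equals N = 63 at zero lag
off = max(abs(cyclic_autocorr(zc, lag)) for lag in range(1, 63))
print(peak, off)                    # peak of 63, essentially zero elsewhere

# Physical cell identifier: N_ID^cell = 3 * N_ID^(1) + N_ID^(2)
cell_id = lambda n1, n2: 3 * n1 + n2
print(cell_id(167, 2))              # largest of the 504 values
```

The autocorrelation check mirrors why these sequences suit synchronization: the correlator output is sharply peaked only when the receiver's timing hypothesis is exactly right.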
A receiver obtains the cell identity conveyed by the PSS and SSS by demodulating the PSS to obtain the value N_ID^(2), and then using that knowledge to demodulate the SSS to obtain the value N_ID^(1). Because of the desirable properties of the Zadoff-Chu and M sequences used to generate the PSS and SSS in LTE, and because of the preexisting investment in algorithms and associated device-side processing as just outlined, there is an express interest in reusing these “legacy” PSS/SSS signal generation and detection techniques for D2D Synchronization Signals, D2DSS. Further aspects of D2DSS were considered at the TSG RAN1 #74bis meeting of the Technical Specifications Group or TSG responsible for the Radio Access Network (RAN) in 3GPP. TSG RAN is responsible for defining the functions, requirements and interfaces of the Universal Terrestrial Radio Access Network (UTRAN) and the Evolved UTRAN (E-UTRAN), for both FDD and TDD modes of operation. The following working assumptions were set forth in the meeting:

Synchronization sources transmit at least a D2DSS: D2D Synchronization Signal
a. May be used by D2D UEs at least to derive time/frequency
b. May (FFS) also carry the identity and/or type of the synchronization source(s)
c. Comprises at least a PD2DSS
   i. PD2DSS is a ZC sequence
   ii. Length FFS
d. May also comprise a SD2DSS
   i. SD2DSS is an M sequence
   ii. Length FFS

As a concept for the purpose of further discussion, without implying that such a channel will be defined, consider a Physical D2D Synchronization Channel or PD2DSCH:
e. May carry information including one or more of the following (For Further Study or FFS):
   i. Identity of synchronization source
   ii. Type of synchronization source
   iii. Resource allocation for data and/or control signaling
   iv. Data
   v. Others FFS

A synchronization source is any node transmitting D2DSS.
f. A synchronization source has a physical identity PSSID
g. If the synchronization source is an eNB, the D2DSS is Rel-8 PSS/SSS
h. Note: in RAN1#73, “synchronization reference” therefore means the synchronization signal(s) to which T1 relates, transmitted by one or more synchronization source(s).

Even though a range of different distributed synchronization protocols are possible, one option under consideration by the 3GPP is based on hierarchical synchronization with the possibility of multi-hop sync-relay. In short, some nodes adopt the role of synchronization masters—sometimes referred to as Synchronization Heads (SH) or as Cluster Heads (CH)—according to a distributed synchronization algorithm. If the synchronization master is a UE, it provides synchronization by transmitting D2DSS and/or PD2DSCH. If the synchronization master is an eNB, it provides synchronization by PSS/SSS and broadcast control information, such as being sent using MIB/SIB signaling, where MIB denotes “Master Information Block” and SIB denotes “System Information Block.” The synchronization master is a special case of synchronization source that acts as an independent synchronization source, i.e., it does not inherit synchronization from other nodes by use of the radio interface. UEs that are under coverage of a synchronization source may, according to predefined rules, transmit D2DSS and/or PD2DSCH themselves, according to the synchronization reference received from their synchronization source. They may also transmit at least parts of the control information received from the synchronization master by use of D2DSS and/or PD2DSCH. Such a mode of operation is referred to herein as “sync-relay” or “CP-relay.” It is also helpful to define a “synchronization reference” as a time and/or frequency reference associated with a certain synchronization signal. For example, a relayed synchronization signal is associated with the same synchronization reference as the sync signal in the first hop.
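The M-sequence construction mentioned for SSS and SD2DSS generation comes from a short linear feedback shift register. The sketch below uses a generic Fibonacci LFSR with the recurrence x(i+5) = (x(i+2) + x(i)) mod 2, an assumed example of a primitive degree-5 polynomial chosen to show the defining properties of a length-31 M-sequence (full period, every nonzero 5-bit state visited once), not to reproduce the exact 3GPP scrambling sequences.

```python
def m_sequence(degree, taps, seed=None):
    # Fibonacci LFSR over GF(2). 'taps' are the state indices fed back,
    # implementing x(i + degree) = sum of x(i + t) for t in taps, mod 2.
    state = list(seed) if seed else [0] * (degree - 1) + [1]  # any nonzero seed
    out = []
    for _ in range((1 << degree) - 1):   # one full period: 2^degree - 1 bits
        out.append(state[0])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = state[1:] + [fb]
    return out

# x(i+5) = (x(i+2) + x(i)) mod 2  ->  degree 5, taps at indices 0 and 2
s = m_sequence(5, (0, 2))
print(len(s), sum(s))   # 31 bits per period, 16 ones (the balance property)
```

Because the register cycles through all 31 nonzero states before repeating, every overlapping 5-bit window of the output is distinct, which is what gives M-sequences their good correlation behavior.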
A number of advantages or benefits flow from reusing the legacy PSS/SSS as D2DSS sync signals. For example, because UEs must already detect and process PSS/SSS signals transmitted from eNBs in the network, substantially the same algorithms and processing can be reused for detecting D2DSS if the same PSS/SSS sequences are used for D2DSS. However, it is recognized herein that a number of potential issues arise with such reuse. Consider, for example, the assumption that the cell-ID [0, . . . , 503] identifies a synchronization reference or source provided from an eNB operating in an LTE network. In a similar fashion, one assumes that a D2D identity will be used to identify a synchronization reference or source provided from a D2D-enabled UE. The D2D identity may be significantly longer than the cell-ID, e.g., 16 bits or more, and it cannot be mapped to the D2DSS without significantly degrading sync detection performance.
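The size mismatch noted above is easy to quantify: the legacy PSS/SSS construction distinguishes only 3 x 168 = 504 identities, while even a modest 16-bit D2D identity has 65,536 values. A short sketch of that arithmetic (illustrative only):

```python
# Legacy PSS/SSS identity space: N_ID^cell = 3 * N_ID^(1) + N_ID^(2)
cell_ids = {3 * n1 + n2 for n1 in range(168) for n2 in range(3)}
print(len(cell_ids))    # 504 distinct values, covering 0..503 exactly

# A 16-bit D2D identity space is roughly 130x larger, so it cannot be
# mapped one-to-one onto the legacy PSS/SSS sequence combinations.
d2d_space = 2 ** 16
print(d2d_space, d2d_space // len(cell_ids))
```

This is why carrying a full D2D identity would require either more sequences (hurting detection performance) or an additional channel such as the proposed PD2DSCH.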
Lazaro cardenas single hispanic girls Women in postrevolutionary mexico: the emergence of a new in its arguments, revolutionary women in postrevolutionary mexico this preliminary study of a single. On being a single woman of a certain age in puerto vallarta - part one do i have any wise advice or suggestions for single women colonia lazaro cardenas. Latin american studies excellence women ’s studies the book narrates the story of the spanish conquest and the widespread violations against the hispanic. Find out more about the history of michoacán, including videos, interesting articles, pictures, historical features and more get all the facts on historycom. Canal once is a broadcast television station located in mexico monte maria is a hispanic christian tv only television station based in lazaro cardenas. #locationgwp# single women meet single women from lázaro cárdenas (quintana roo) on mobifriends, is 100% free, via. Mexican presidents serve a single six year this brought attention to the blatant inequalities between men and women in mexico cardenas, c - son of lazaro. On behalf of executive director, heather alexander, and artistic director, roy alan from the winter park playhouse, orlando, fl we are delighted to announce that the stranger from seville, our musical adapted from the extraordinary novel ‘the mapmaker’s opera’ by béa gonzalez, has been accepted into the 2018 florida festival of new. Wyndham hotel group’s fastest growing brand enters mexico with 400th hotel. Latin american studies sheds light on the tactics and careers of cardenas and tejeda in estudios interdisciplinarios de america latina y el cribe. Before the 1910 mexican revolution that overthrew porfirio díaz, most of the land was owned by a single elite ruling class land reform in mexico, 1910 - 1980. Freehold borough schools superintendent was honored amid the district are hispanic to making the world a better place,” said lazaro cardenas. 
AP comparative government chapter 6: Mexico chapter 6 Mexico Lazaro Cardenas parties must run at least 30% women for both lists for the proportional.

Top ten heroes 1 Cuauhtemoc (1502–1525) one of the Aztec emperors (1520-21), who became emperor at 18 Lazaro Cardenas president of Mexico, 1934-40.

The 15 best places that are good for singles in Cabo San Lucas.

Lazaro Cardenas environmental Josh also serves as the head coach for the school's girls he worked as the youth and Hispanic regional director for NextGen.

Chat and meet single Lazaro Cardenas become a member of our Mexican matchmaking website and get a real chance to meet attractive men seeking lonely women.

2 charter schools in this town accused of segregation not a select demographic, said Lazaro Cardenas and 30% Hispanic students compared.

Find best value and selection for your norteno Mexican costume ballet folklorico norteno Mexican costume ballet folklorico Tamaulipas Nuevo Lazaro Cardenas.

Lázaro Cárdenas — heroic leader of Mexico and its Hispanic Link News Service invites readers in an Lazaro Cardenas political propaganda for.

The United States is a country of many cultures which, through immigrants, had an influence on the unique fiber of American life today some of these immigrants who had a profound effect, were the Spanish and the Hispanics from Mexico, Cuba, Santo Domingo, and Puerto Rico.

Women's suffrage Cárdenas had faces of the revolution: Lazaro Cardenas currently in Spain, people bear a single or composite given name.
50 Calculate the remainder when 56274 is divided by 18753. 15 Calculate the remainder when 37901560 is divided by 352. 312 What is the remainder when 66969985 is divided by 9948? 49 Calculate the remainder when 6164126 is divided by 158053. 59 Calculate the remainder when 1941612 is divided by 1743. 1653 What is the remainder when 652031 is divided by 2012? 143 Calculate the remainder when 168039052 is divided by 261. 205 Calculate the remainder when 2232301 is divided by 2231957. 344 What is the remainder when 24630569 is divided by 6157623? 77 Calculate the remainder when 1778227 is divided by 1332. 7 What is the remainder when 338519 is divided by 112745? 284 Calculate the remainder when 10468502 is divided by 11. 0 What is the remainder when 3998811 is divided by 153799? 37 What is the remainder when 2419631 is divided by 85? 21 What is the remainder when 2143449 is divided by 194578? 3091 Calculate the remainder when 543177 is divided by 543175. 2 Calculate the remainder when 518316 is divided by 800. 716 Calculate the remainder when 6966345 is divided by 12308. 17 Calculate the remainder when 112503 is divided by 37500. 3 What is the remainder when 1351493 is divided by 4490? 3 What is the remainder when 1721609 is divided by 98? 43 Calculate the remainder when 1791451 is divided by 159. 157 What is the remainder when 8683439 is divided by 81? 77 What is the remainder when 15122505 is divided by 52? 21 Calculate the remainder when 24835 is divided by 4778. 945 Calculate the remainder when 84910 is divided by 7804. 6870 Calculate the remainder when 133626926 is divided by 83. 80 Calculate the remainder when 73394 is divided by 269. 226 What is the remainder when 8471 is divided by 422? 31 Calculate the remainder when 93811256 is divided by 813. 812 What is the remainder when 1596036 is divided by 954? 948 Calculate the remainder when 5980870 is divided by 14. 0 Calculate the remainder when 651815309 is divided by 870. 
869 What is the remainder when 3052679 is divided by 8? 7 What is the remainder when 139999 is divided by 69956? 87 What is the remainder when 187739 is divided by 58? 51 What is the remainder when 2424792 is divided by 808260? 12 Calculate the remainder when 2735206 is divided by 683796. 22 Calculate the remainder when 153601 is divided by 5124. 5005 Calculate the remainder when 63033807 is divided by 3707869. 34 Calculate the remainder when 222471 is divided by 353. 81 Calculate the remainder when 5612392 is divided by 485. 457 What is the remainder when 54405965 is divided by 764? 761 What is the remainder when 14614740 is divided by 1094? 1088 Calculate the remainder when 4592052 is divided by 765341. 6 Calculate the remainder when 11887626 is divided by 53. 44 Calculate the remainder when 144381874 is divided by 244. 242 Calculate the remainder when 440606 is divided by 236. 230 Calculate the remainder when 374723 is divided by 124786. 365 What is the remainder when 18004 is divided by 235? 144 What is the remainder when 839574 is divided by 595? 29 Calculate the remainder when 6366905 is divided by 39. 38 Calculate the remainder when 74180 is divided by 63. 29 What is the remainder when 235024013 is divided by 2818? 2813 What is the remainder when 5245282 is divided by 293? 289 What is the remainder when 44384820 is divided by 86520? 60 Calculate the remainder when 12768490 is divided by 53. 48 What is the remainder when 33302559 is divided by 54684? 3 Calculate the remainder when 49224107 is divided by 24612035. 37 What is the remainder when 75664 is divided by 488? 24 Calculate the remainder when 16164 is divided by 222. 180 What is the remainder when 514754 is divided by 11437? 89 What is the remainder when 2799489 is divided by 311052? 21 What is the remainder when 1250713 is divided by 2270? 2213 What is the remainder when 902768 is divided by 106? 72 Calculate the remainder when 1177524 is divided by 2872. 
4 What is the remainder when 108702 is divided by 36193? 123 Calculate the remainder when 22405361 is divided by 1018425. 11 What is the remainder when 13321868 is divided by 33641? 32 What is the remainder when 254071413 is divided by 445? 443 Calculate the remainder when 5237485 is divided by 149. 135 Calculate the remainder when 1985550 is divided by 208. 190 Calculate the remainder when 252385 is divided by 95. 65 Calculate the remainder when 948456 is divided by 3809. 15 Calculate the remainder when 1014054 is divided by 70. 34 What is the remainder when 2067862 is divided by 108824? 206 What is the remainder when 5778303 is divided by 2575? 3 What is the remainder when 145860471 is divided by 4404? 4395 What is the remainder when 1347745 is divided by 65? 35 What is the remainder when 171350 is divided by 2116? 2070 What is the remainder when 46927658 is divided by 1466489? 10 What is the remainder when 179728 is divided by 42? 10 Calculate the remainder when 2181049 is divided by 64. 57 What is the remainder when 2099194 is divided by 21641? 17 Calculate the remainder when 2096881674 is divided by 535. 534 Calculate the remainder when 280776 is divided by 280635. 141 What is the remainder when 13520112 is divided by 1831? 8 Calculate the remainder when 62277 is divided by 20553. 618 What is the remainder when 56120 is divided by 27946? 228 Calculate the remainder when 36326781 is divided by 903. 897 Calculate the remainder when 1980503 is divided by 1323. 1295 Calculate the remainder when 350211625 is divided by 3714. 3709 What is the remainder when 492051 is divided by 247? 27 Calculate the remainder when 9273 is divided by 2286. 129 What is the remainder when 3471865 is divided by 323? 261 What is the remainder when 6215389 is divided by 62? 13 Calculate the remainder when 26973453 is divided by 80. 13 Calculate the remainder when 370345811 is divided by 35. 31 What is the remainder when 675516 is divided by 210? 
156 Calculate the remainder when 2203365 is divided by 17. 12 Calculate the remainder when 2784388 is divided by 163787. 9 What is the remainder when 51596 is divided by 3834? 1754 What is the remainder when 54051477 is divided by 214? 213 What is the remainder when 1668144 is divided by 238267? 275 What is the remainder when 432405 is divided by 3300? 105 Calculate the remainder when 2526840 is divided by 172. 160 What is the remainder when 2913641 is divided by 303? 296 What is the remainder when 701166 is divided by 69815? 3016 Calculate the remainder when 2945471 is divided by 566. 7 What is the remainder when 168161303 is divided by 335? 13 Calculate the remainder when 376556 is divided by 410. 176 Calculate the remainder when 16534321 is divided by 17646. 19 What is the remainder when 6176935 is divided by 43? 28 Calculate the remainder when 22741 is divided by 5580. 421 What is the remainder when 243413 is divided by 80? 53 What is the remainder when 2778358 is divided by 17? 14 What is the remainder when 8898769 is divided by 552? 529 Calculate the remainder when 33728 is divided by 1467. 1454 Calculate the remainder when 14627799 is divided by 14627792. 7 Calculate the remainder when 159761 is divided by 39541. 1597 Calculate the remainder when 349873 is divided by 1366. 177 Calculate the remainder when 901809 is divided by 2947. 27 Calculate the remainder when 21087 is divided by 5245. 107 Calculate the remainder when 626554 is divided by 5496. 10 What is the remainder when 2388796 is divided by 3268? 3156 What is the remainder when 1658400 is divided by 2400? 0 What is the remainder when 575225 is divided by 5139? 4796 What is the remainder when 4519847 is divided by 410? 7 Calculate the remainder when 4990550 is divided by 93. 77 Calculate the remainder when 520467691 is divided by 4. 3 Calculate the remainder when 657098 is divided by 1880. 978 Calculate the remainder when 335851 is divided by 2125. 
101 Calculate the remainder when 20852678 is divided by 2378. 2374 What is the remainder when 126005046 is divided by 11308? 2 What is the remainder when 5426578 is divided by 63842? 8 What is the remainder when 503886 is divided by 18660? 66 What is the remainder when 12614 is divided by 1678? 868 Calculate the remainder when 11652275 is divided by 38. 31 What is the remainder wh
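Each exercise above is a single modulo operation. A minimal Python sketch spot-checking the first few question/answer pairs from the list (the expected values are copied from the answers printed after each question):

```python
# Spot-check a few of the remainder exercises above with Python's % operator.
exercises = [
    (56274, 18753, 15),     # "Calculate the remainder when 56274 is divided by 18753."
    (37901560, 352, 312),   # "Calculate the remainder when 37901560 is divided by 352."
    (66969985, 9948, 49),   # "What is the remainder when 66969985 is divided by 9948?"
    (543177, 543175, 2),    # "Calculate the remainder when 543177 is divided by 543175."
]
for dividend, divisor, expected in exercises:
    remainder = dividend % divisor
    assert remainder == expected, (dividend, divisor, remainder)
print("all checks passed")
```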
Sampling properties of random graphs: the degree distribution. We discuss two sampling schemes for selecting random subnets from a network, random sampling and connectivity dependent sampling, and investigate how the degree distribution of a node in the network is affected by the two types of sampling. Here we derive a necessary and sufficient condition that guarantees that the degree distributions of the subnet and the true network belong to the same family of probability distributions. For completely random sampling of nodes we find that this condition is satisfied by classical random graphs; for the vast majority of networks this condition will, however, not be met. We furthermore discuss the case where the probability of sampling a node depends on the degree of a node and we find that even classical random graphs are no longer closed under this sampling regime. We conclude by relating the results to real Escherichia coli protein interaction network data.
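The closure property for classical random graphs can be checked numerically. If the full-network degree of a node is Binomial(n-1, q) and each neighbour is retained independently with probability p under random node sampling, the sampled node's degree is Binomial(n-1, qp) — the same family. A sketch under these assumptions (the parameter values are arbitrary, for illustration only):

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for X ~ Binomial(n, p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def thinned_pmf(j, n, q, p):
    """Degree pmf of a sampled node: full degree ~ Binomial(n-1, q),
    each neighbour retained independently with probability p."""
    return sum(
        binom_pmf(k, n - 1, q) * binom_pmf(j, k, p)
        for k in range(j, n)
    )

n, q, p = 30, 0.2, 0.5
for j in range(n):
    # Thinning a binomial degree gives Binomial(n-1, q*p): the degree
    # distribution stays in the same family, which is the closure property
    # that classical random graphs satisfy under random node sampling.
    assert abs(thinned_pmf(j, n, q, p) - binom_pmf(j, n - 1, q * p)) < 1e-12
print("binomial family is closed under random node sampling")
```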
Vermont woman dies in kayak accident on Buffalo National River

BUFFALO NATIONAL RIVER, Ark. (KTHV) - The National Park Service says a woman died in a kayaking accident on the Buffalo National River. On May 6 at 2:30 p.m., a 79-year-old Vermont native drowned after her head struck a tree and her kayak overturned. A friend who was with her says she was underwater for several minutes before she could reach the victim's kayak. She says she was unable to turn the kayak upright or get her friend's head out of the water. Boaters floating down the river helped turn the kayak upright and get the woman out of the water. It is thought the woman was underwater for about 40-45 minutes. The accident happened just upstream from the Ozark Campground. National Park Service rangers, along with Newton County Search and Rescue, recovered the body. The river was running at 588 cubic feet per second.
People who want to be first in any walk of life do things in advance. A well-known saying puts it this way: "Kal Kare So Aaj Kar, Aaj Kare So Ab" — do right now the work you were planning to do tomorrow. Putting a task off until tomorrow only makes life harder. Plan the trip in advance, book in advance, think about your future in advance — whatever practical thing you do ahead of time puts success within reach. So from today onwards, don't postpone anything you can do today. Some governments even give relief if you pay your taxes in advance. If you prepare for a task that will demand everything of you at a critical moment, you can still come up with a fresh idea, because you have already thought the situation through. Sometimes thinking ahead can make history — arguably, that is a large part of why Albert Einstein succeeded in his work. Agree or not?

New Year Wishes in Advance
Nature holds everything in its palm. Who doesn't love their child's first step? How many steps have you walked since you were born? Can you count them? Year after year we walk, walk and walk — but truthfully, our parents counted our steps when we first started walking: our first steps on this earth! Imagine the courage it took to come this far, and the courage you will pass on to your child to lead a life, year after year. As parents, we all think about this, and that thought gives us the strength to welcome the New Year, hoping it will be kind to our children's lives as well as our own. Your New Year wish will leave a lasting mark on your kids, because they are still at the learning stage. A wish to your little one is a coin with two sides: on one side, your child is filled with joy for the coming year because you wished them well; on the other, it is your promise to give your child the happiest moments throughout the entire year ahead. It is not just "Happy New Year" — not only three words; a thousand bonds are tied up in it.

New Year Wishes Quotes for Kids and Children
Every New Year brings something new to everyone; our lives run on each new moment. Joy, sorrow, loneliness, love, desire, pleasure, pride and other emotions all come and go in their turn. Each of us has an inner sense of what is coming in our lives, and warmed by those feelings we say to friends, relatives and loved ones: "Happy New Year." Those three words alone are never enough to describe the happiness you hope the New Year will bring.

Happy New Year Wishes for Family Members
On this whole earth, the greatest thing is love; being in love is amazing. What a special, romantic moment a New Year occasion is when your love is beside you. You will never lose those romantic memories, and to make the event memorable we all queue up to be the first to wish — why not? After all, it is for love. People say love can drive you mad, and the madness of love can make a poet of you. So express your feelings with Shayari to your beloved on the occasion of this New Year. The idea is great! Amazing! Through Shayari you can express different feelings in different ways.

New Year Love Shayari
Want to propose to your loved one on New Year's Day? A proposal is the act of expressing your feelings to someone, and with Shayari you can propose from the heart in an innovative way. Many of your words, thoughts and intentions can be revealed in a single Shayari; Shayari becomes the voice of your heart, and with it you will impress quickly and stand taller in the reader's mind. Shayari arose around the 12th century, and ever since, people have used this form of verse to mark their feelings on different occasions and celebrations. A Shayari is a bundle of words, lines and phrases, and every word carries a deep meaning. The right Shayari quote is one of the best ways to express your New Year happiness; your beloved will never forget heart-touching lines that leave a deep mark on the heart. Now, what are you waiting for? There is no need to research Love Shayari any further — we have gathered the best ones for you. Read them and pick the best one for your special someone.

Conclusion
People say that Shayari has no colour, creed or caste; it takes on the colour of the user's heart and of whoever reads it. Each Shayari has an independent meaning and a uniqueness of its own; different situations can be described in different words or by different Shayari. With one stroke, a Shayari can tell your beloved many things, so use Shayari for New Year wishes as a bundle of your happiest thoughts, and enjoy sharing them with your beloved. If you have some more special lines that would help others, share them with us, and comment once you become a favourite in the eyes of your colleagues. Do you know why 31 December comes every year? My feeling is that it lets you release everything that went wrong in the bitter stretches of the year and fill yourself with new wishes and new ideas for the year ahead. Encouraging your love on the occasion of the New Year with New Year Sad Quotes is also a good idea. Has your luck fallen short of the mark you hoped for this year? Have you lost a relationship, or lost a loved one? Did many misfortunes come your way this year? You may have tears in your eyes over things that did not work out for you or your friends. When tears and sorrow fill your life and the year leaves you with no more hope, sometimes your heart says, "No! No! No! I don't want to live this life" — because your luck, or your stars, did not support you, and trouble kept finding you.
/*
 * Copyright (c) 2015-present, Facebook, Inc.
 * All rights reserved.
 *
 * This source code is licensed under the license found in the LICENSE file in
 * the root directory of this source tree.
 */

@import "ui-variables";

// Unfortunately, only SOME themes have values for tooltip text/background colour variables.
// (;´༎ຶД༎ຶ`) ༼ つ ಥ_ಥ ༽つ (ง'̀-'́)ง
// So, we define fallback values to these variables based on the Atom UI variables.
// This may not be ideal, but is the best solution until tooltip colours are added to `ui-variables`.
.define-tooltip-colors-if-not-defined() {
  @tooltip-text-color: @text-color;
  @tooltip-background-color: @base-background-color;
}
.define-tooltip-colors-if-not-defined();

// According to the styling in `atom/notifications`, the z-index of notifications is set to 1000.
// https://github.com/atom/notifications/blob/master/styles/notifications.less#L15
// The NUX should live on a layer above all UI elements, but below the notification layer.
// So we set it to the largest possible value that does not interfere with notifications. (~˘▾˘)~
.nuclide-nux-tooltip {
  z-index: 999;
}

.nuclide-nux-tooltip-helper-parent {
  position: relative;
}

.nuclide-nux-tooltip-helper {
  position: absolute;
  height: 100%;
  width: 100%;
  top: 0;
  left: 0;
}

.nuclide-nux-content-container {
  display: flex;
  flex-direction: column;
}

.nuclide-nux-navigation {
  align-self: flex-end;
  margin-top: 10px;
  color: @tooltip-text-color;
}

// Style the links like buttons using the tooltip color scheme.
// This is done since the Atom button styling is quite inconsistent across themes.
.nuclide-nux-link {
  border: 1px solid @tooltip-text-color;
  background-color: @tooltip-background-color;
  border-radius: @component-border-radius;
  padding: 3px 8px 3px 8px;
  -webkit-user-select: none; /* Disable text selection, useful when double clicking */
  margin-left: 3px;
  font-size: 11px;
}

// Invert colours on hover.
.nuclide-nux-link-enabled:hover {
  color: @tooltip-background-color;
  background-color: @tooltip-text-color;
}

.nuclide-nux-link-enabled {
  cursor: pointer;
}

.nuclide-nux-link-disabled {
  cursor: not-allowed;
  border: 1px solid @tooltip-text-color;
  opacity: 0.6;
}
Cypress Cone Oil of the St. Gregory Palamas Monastery

Monastery medicinal oil with cypress cone, made in the Holy Monastery of Saint Gregory Palamas.

Beneficial properties: Appropriate for wounds, bruises and skin rashes; against seborrhea; as an anti-aging treatment; and for headaches, colds and cough. External use only.

Precautions - Contraindications: Caution for people who are sensitive to cedar or peach.

The above information is not medical advice, nor a substitute for the advice of a health care professional. It is provided for information only. Do not stop following any other medical care recommendations without consulting your doctor.
Add your own rating
Only items marked with (*) are averaged into the displayed overall rating.

General Rating Criteria
*Temperament (1=Awful, 10=Excellent)
*Scholarship (1=Awful, 10=Excellent)
*Industriousness (1=Not at all industrious, 10=Highly industrious)
*Ability to Handle Complex Litigation (1=Awful, 10=Excellent)
*Punctuality (1=Chronically Late, 10=Always on Time)
*General Ability to Handle Pre-Trial Matters (1=Not at all Able, 10=Extremely Able)
*General Ability as a Trial Judge (1=Not at all Able, 10=Extremely Able)
Flexibility in Scheduling (1=Completely Inflexible, 10=Very Flexible)

Criminal Rating Criteria (if applicable)
*Evenhandedness in Criminal Litigation (1=Demonstrates Bias, 10=Entirely Evenhanded)
General Inclination Regarding Bail (1=Pro-Defense, 10=Pro-Government)
Involvement in Plea Discussions (1=Not at all Involved, 10=Very Involved)
General Inclination in Criminal Cases, Pretrial Stage (1=Pro-prosecution, 10=Pro-defense)
General Inclination in Criminal Cases, Trial Stage (1=Pro-prosecution, 10=Pro-defense)
General Inclination in Criminal Cases, Sentencing Stage (1=Pro-prosecution, 10=Pro-defense)

Civil Rating Criteria (if applicable)
*Evenhandedness in Civil Litigation (1=Not at all Evenhanded, 10=Entirely Evenhanded)
Involvement in Settlement Discussions (1=Not at all Involved, 10=Very Involved)
General Inclination (1=Pro-defendant, 10=Pro-plaintiff)

Comments

What others have said about Hon. Michael Mattice

Other
Comment #: CA5834
Rating: 1.0
Comments: His ruling is unreasonable, because a petition for a stay order of "5 yards" won't do any justice. The petitioner lives within 5 feet and we share the same hallway and staircase leading down to the front door… Also, I contacted the petitioner's insurance company to see if it would cover my auto accident, but I did not mean to contact his employee to interfere with his job. If this judge could not see the motivation the petitioner brings, I'll have to file an appeal to argue.
Because the petitioner and his landlord have malice in their intentions, I showed why I was objecting to granting his petition: the surveillance camera mounted pointing at my bedroom, used to make any normal activity look objectionable.

Civil Litigation - Govt.
Comment #: CA5313
Rating: 9.5
Comments: He is a solid judge; he is very cautious and slow, which can be frustrating at times. However, for the most part all parties are better served due to his meticulous nature.

Other
Comment #: CA5227
Rating: Not Rated
Comments: Judge Mattice ruled on a demurrer without leave to amend in a securitized mortgage fraud case. He did not know what an allonge is and did not understand the Pooling and Servicing Agreement, which is instrumental if a Note is owned by a Trustee. Hereby he is assisting in perpetuating fraud and denying due process to present pertinent material. Shameful and a disgrace.

Other Criminal Defense Lawyer
Comment #: CA2951
Rating: 1.0
Comments: This judge has no idea what has happened to the plaintiff; in fact, he doesn't know who the plaintiff is or why the complaint was brought! It is my opinion he needs to retire because his memory and cognition are deficient. As a plaintiff, I'm going to do everything I can to get him off my case as he is clearly incompetent due to a memory/cognition problem. My case has been in front of him for over a year, however he doesn't know who I am as a plaintiff! - J

Other
Comment #: CA2950
Rating: 1.0
Comments: This judge doesn't know who I am, as referenced by public court documents. Even though I have been a valid, documented plaintiff in his court, under his rule, for over a year, he has had to ask my name from my attorney (confusing me with the defendant, calling me her name) for more than 2 hearings! What a travesty!! I hope he starts reading my pleadings and figures out who I am, to bring my Aunt justice!
Civil Litigation - Private
Comment #: CA560
Rating: 2.1
Comments: I saw Judge Mattice on the bench in Family Court. He was indulgent of the family law practitioners he saw every day regardless of their incompetence, and foul to everyone else, particularly pro per litigants. A disgrace.
Molecular heterogeneity in mitochondrial and chloroplast DNAs from normal and male sterile cytoplasms in sugar beets. Mitochondrial (mt) and chloroplast (ct) DNAs were prepared from normal (N) and male sterile (S) cytoplasmic lines of sugar beet. The DNAs were cleaved with BamHI, EcoRI, HindIII and SalI enzymes, and the resultant fragments were fractionated by agarose gel electrophoresis. The results showed that the N and S cytoplasms contained distinct mtDNAs. Although most of the DNA fragments were common to the two cytoplasms, each cytoplasm was readily characterized by bands specific to that cytoplasm. In addition, these distinctive cleavage patterns were invariant in different nuclear backgrounds. In contrast to the marked variation in mtDNA, restriction fragment analyses of ctDNA demonstrated little difference between the two cytoplasms: only HindIII digestion showed one band missing in the S genome. The data presented here provide circumstantial evidence for mitochondrial involvement in the inheritance of cytoplasmic male sterility in sugar beet.
Q: newgeometry makes content run to next page

Thank you all from the beginning for your help with this. I'm completely new to LaTeX, as I'm writing my bachelor thesis, so I'm sure I've made lots of mistakes...

I'm using the geometry package because I need to modify the text width at a certain point in my thesis, and to do so I wanted to use \newgeometry inside a custom environment. The problem is that the part of the text that should be influenced by the new geometry runs all the way down to the next page. I have read the question "Using \restoregeometry in environment, next page runs off the page bottom" and, as you can see, I also used the \aftergroup command, but the issue still persists.

My custom environment is:

\newenvironment{esempio}[1]%
{
  \vspace{1.5ex}
  \noindent
  \underline{#1}
  \nopagebreak
  \newgeometry{textwidth=\textwidth}
  %\leftskip=1cm
  %\rightskip\leftskip
}
{
  \par
  \aftergroup\restoregeometry
}

And I use it like:

\begin{esempio}{$k=2$}
Il Poligono di Controllo \`e formato da $\{P_0, P_1, P_2\}$.
La curva di Bezier \`e definita come
$$P_0^2(t) = (1-t)P_0^1(t) + tP_1^1(t)$$
\end{esempio}

I get the "$k=2$" underlined in the correct position, but the following text is all pushed down to the next page. My aim is to have an environment that lets me format a body of text like this:

bla bla bla bla bla bla bla bla bla bla bla bla
    K=2
    bla bla bla bla bla bla
    bla bla bla bla bla bla
bla bla bla bla

As a sidenote, you can see I also tried with \leftskip. It gets the job done very well with normal text, but most of the time I'll also include a picture on the side and I want the text to wrap around it; with \leftskip the space is left to the left of the image, so the text does not wrap nicely.

@egreg: thank you for the warm welcome and the tips :)

@Gonzalo: thanks, your code works like a charm for the text!
However, as I said in the original question, a very common situation is to insert a figure inside that text, and I'd like the text to wrap around the picture. However, this causes a couple of warnings and bad boxes in the compilation process; the figure gets moved down and placed not after the "esempio" box but even after the next paragraph, and it also creates huge blank spaces... quite weird. Below is the MWE (rettaBezier is a 4cm x 4cm square picture):

\documentclass[a4paper,11pt,italian]{book}
\usepackage[italian]{babel}
\usepackage{graphicx, wrapfig, subfig}
\usepackage{changepage}
\usepackage{calc}
\usepackage{amsmath, amsthm, amssymb, mathrsfs, setspace}
\usepackage{mycommands}
\usepackage{indentfirst}
\usepackage[format=hang,font=footnotesize,labelfont=bf]{caption}
\graphicspath{{./images/}{./matlab/}}
\usepackage{lipsum}

\begin{document}
\lipsum[2]
\begin{esempio}{$k=1$}
\begin{wrapfigure}{o}[1cm]{0cm}
\centering
\includegraphics{rettaBezier}
\caption{Retta ottenuta con $P_0 = [0, 0], P_1 = [1, 1]$}
\label{rettaBezier}
\end{wrapfigure}
\lipsum[2]
\end{esempio}
\lipsum[2]
\end{document}

A: As egreg mentions in a comment, \newgeometry affects whole pages.
To temporarily change the text area width, you can use the adjustwidth environment from the changepage package:

\documentclass{article}
\usepackage{changepage}
\usepackage{lipsum}

\begin{document}
\lipsum[2]
\begin{adjustwidth}{-2cm}{-2cm}
\lipsum[2]
\end{adjustwidth}
\lipsum[2]
\end{document}

Your esempio environment then could look like in the following example (change the lengths according to your needs):

\documentclass{article}
\usepackage{changepage}
\usepackage{soul}
\usepackage{lipsum}

\newenvironment{esempio}[1]%
{\vspace{1.5ex}
 \begin{adjustwidth}{1cm}{1cm}
 \rlap{\ul{#1}}\par\nobreak
}
{\end{adjustwidth}}

\begin{document}
\begin{esempio}{$k=2$}
\lipsum[2]
\end{esempio}
\lipsum[2]
\end{document}

Without additional packages, you can use a list:

\documentclass{article}
\usepackage{soul}
\usepackage{lipsum}

\newenvironment{esempio}[1]%
{\vspace{1.5ex}
 \list{}{\setlength\leftmargin{1cm}\setlength\rightmargin{1cm}}\item\relax
 \rlap{\ul{#1}}\nobreak
}
{\endlist}

\begin{document}
\lipsum[2]
\begin{esempio}{$k=2$}
\lipsum[2]
\end{esempio}
\lipsum[2]
\end{document}

To be able to use wrapfigure inside the esempio environment, one option is to use a minipage to enclose the figure and its wrapping text (of course, now the material inside the minipage won't admit page breaks):

\documentclass{article}
\usepackage{soul}
\usepackage{wrapfig}
\usepackage{lipsum}

\newenvironment{esempio}[1]%
{\vspace{1.5ex}
 \list{}{\setlength\leftmargin{1cm}\setlength\rightmargin{1cm}}\item\relax
 \rlap{\ul{#1}}\nobreak
}
{\endlist}

\begin{document}
\lipsum[2]
\begin{esempio}{$k=2$}
\begin{minipage}[t]{\linewidth}
\begin{wrapfigure}{r}{5cm}
\centering
\rule{4cm}{3cm}
\caption{a test figure}
\label{fig:test}
\end{wrapfigure}
\lipsum[2]
\end{minipage}
\end{esempio}
\lipsum[2]
\end{document}

A: Thank you Gonzalo for your suggestion.
However, I hate minipage and avoid using it as much as I can, so I came up with this solution:

\newenvironment{esempio}[3]%
{
  \vspace{1.5ex}
  \rlap{\underline{#1}}
  \par
  \setlength{\parindent}{0cm}
  \nopagebreak
  \leftskip=#2cm
  \rightskip=#3cm
}
{
  \par
}

It might not be the most elegant code ever, but it gets the job done and fits well in my project, especially because it allows page breaks in the middle of the "esempio".
Exacerbation of collagen arthritis by noise stress. We evaluated the effect of noise exposure on collagen arthritis in rats. Ninety-eight female Wistar rats were transported and subjected to a 90-decibel noise during a 1-h period for 7 days commencing 3 days prior to immunization with type II collagen. As a control, 99 rats received simultaneous immunizations but were otherwise left undisturbed. Noise exposure was associated with statistically significant increases in the prevalence of arthritis (p less than 0.05) and, early in its course, in the severity (p less than 0.001) of arthritis.
2010 NCAA Division II football rankings

The 2010 NCAA Division II football rankings are from the American Football Coaches Association (AFCA). This is for the 2010 season.
Q: authorizeRequests() order makes a difference? I am learning Spring and want to know why there is a difference when I change the order of these two authorizeRequests() calls:

This works fine:

security.authorizeRequests()
    .antMatchers("/css/**")
    .permitAll();
security.authorizeRequests()
    .anyRequest()
    .authenticated();

This does not:

security.authorizeRequests()
    .anyRequest()
    .authenticated();
security.authorizeRequests()
    .antMatchers("/css/**")
    .permitAll();

What I mean by "doesn't work" is that in my login page CSS is not applied when using the second example. Why does the order of these two methods actually matter?

A: When there are multiple children of the http.authorizeRequests() method, each matcher is considered in the order it was declared, and the first matcher that matches the request decides the outcome. In your second example, anyRequest().authenticated() is declared first, so it matches every request, including requests for /css/**, and demands authentication; the permitAll() rule after it is never reached. That is why the CSS on your login page is not loaded.
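The first-match-wins behavior described in the answer can be sketched without Spring at all. Below is a toy Python model of rule ordering; it is not Spring's actual API, and the names are invented for illustration:

```python
# Toy model of Spring Security's rule ordering (not the real API):
# each rule is a (matcher, permit_all) pair, checked in declaration
# order; the first rule whose matcher accepts the path decides.
def requires_auth(rules, path):
    for matches, permit_all in rules:
        if matches(path):
            return not permit_all
    return True  # default: require authentication

css_rule = (lambda p: p.startswith("/css/"), True)   # ~ antMatchers("/css/**").permitAll()
any_rule = (lambda p: True, False)                   # ~ anyRequest().authenticated()

print(requires_auth([css_rule, any_rule], "/css/login.css"))  # False: stylesheet served
print(requires_auth([any_rule, css_rule], "/css/login.css"))  # True: css_rule is shadowed
```

Because the catch-all rule matches every path, declaring it first makes any later, more specific rule unreachable, which is exactly what happens in the second Spring configuration.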
The present invention relates to an image processing system, and more particularly to a game computer system processing a variety of images such as a natural picture together with an animation picture. In a conventional game computer system, image data are defined by color for each dot. The colors of the image data are managed by a color pallet formed in a memory, the color pallet storing many pallet codes (PLT) corresponding to color data. In the conventional game computer system, image data are compressed (encoded) to be transmitted, and then the compressed data are extended (decoded) to be displayed. Each piece of image data is composed of the pallet code (PLT) and the number (CRL) thereof, which is called a pallet length. This compression method is called the "run-length" method. When a single color mode is employed for each screen, image data may be fixed in length (bits); however, when plural color modes are used for one screen, the lengths of the image data differ depending on the color mode. FIG. 1 shows the formats of image data according to the conventional game computer system, which employs 16, 32, 64 and 128 color modes. The pallet codes are defined by data of 4, 5, 6, and 7 bits for the 16, 32, 64 and 128 color modes, respectively. The length L of the pallet code in a color mode m is given by the following equation:

    L = log2(m)

For example, the length L of the pallet code in the 128 color mode becomes 7 as follows:

    L = log2(128) = log2(2^7) = 7

The data need to have a width corresponding to the bus line on which they are transmitted, because the widths of buses vary depending on the system. When the image data are transmitted on an 8-bit bus, the data for the 16 color mode may be transmitted in their entirety, as shown in FIG. 1; however, when the length of the image data to be transmitted is not a multiple of 8 bits, the data need to be divided, as shown in FIG. 2.
For example, image data for the 32 color mode are compressed to 9 bits, so the data are divided into 8 bits and 1 bit to be transmitted; as a result, the left-over bit is transmitted with the following data. In the conventional game computer system, when the screen is divided into plural areas of different colors, the color mode with the greatest number of colors is selected, because each picture is displayed using only one color mode. For example, when an animation picture with 16 colors and a natural picture with 16M colors are synthesized on the screen, the synthesized picture is displayed in the 16M color mode. Such a processing method does not use memory efficiently.
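The byte-splitting described above can be sketched as a small bit-packer. This is an illustrative sketch, not the patent's actual mechanism; the function name and test values are mine, and the 9-bit width follows the 32-color-mode example:

```python
def pack_codes(codes, width):
    """Pack fixed-width bit codes (e.g. 9-bit pallet-code/run pairs)
    into a stream of 8-bit bytes, most significant bit first."""
    acc = 0        # bit accumulator
    nbits = 0      # bits currently held in the accumulator
    out = []
    for code in codes:
        acc = (acc << width) | code
        nbits += width
        while nbits >= 8:          # emit every full byte
            nbits -= 8
            out.append((acc >> nbits) & 0xFF)
    if nbits:                      # flush left-over bits, zero-padded
        out.append((acc << (8 - nbits)) & 0xFF)
    return bytes(out)

# Two 9-bit values: the first byte carries 8 bits of the first value,
# and the left-over bit travels with the following data, as described.
packed = pack_codes([0b110000011, 0b000000001], width=9)
```

With a code width that is already a multiple of 8 (such as the 16M color mode's byte-aligned data), no splitting occurs and each code maps directly onto whole bytes.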
Revision as of 16:55, 28 December 2008

In this article we want to demonstrate how to check the efficiency of an implementation by checking for proper results on infinite input. In general it is harder to reason about the time and memory complexity of an implementation than about its correctness. In fact, in Haskell, inefficient implementations sometimes turn out to be wrong implementations. A very simple example is the function reverse . reverse, which seems to be an inefficient implementation of id. In a language with strict semantics, these two functions are the same. But since the non-strict semantics of Haskell allows infinite data structures, there is a subtle difference: for an infinite input list, the function reverse . reverse is undefined, whereas id is defined. Now let's consider a more complicated example.
Say we want to program a function that removes elements from the end of a list, just as dropWhile removes elements from the beginning of a list. We want to call it dropWhileRev. (As a more concrete example, imagine a function which removes trailing spaces.) A simple implementation is to reverse the list, drop the matching prefix, and reverse it back. You will already guess that this does not work for infinite input lists. And it is also inefficient, because reverse requires computing all list nodes (though not the values stored in the nodes), so the full list skeleton must be held in memory. However, it is possible to implement dropWhileRev in a way that works for more kinds of input. Here, foldr formally inspects the list from right to left, but actually it processes data from left to right. Whenever a run of elements occurs which matches the condition p, these elements are held until the end of the list is encountered (then they are dropped) or a non-matching list element is found (then they are emitted). The crux is the part null xs, which requires recursive calls within foldr. This works in many cases, but fails if the number of matching elements becomes too large. The maximum memory consumption depends on the length of the runs of matching elements, which makes it much more efficient than the naive implementation above.
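The code samples referred to above did not survive extraction; the following is a reconstruction consistent with the surrounding description (dropWhileRev is the article's name, the naive variant's name is mine):

```haskell
-- Naive version: reverse, drop the matching prefix, reverse back.
-- Fails on infinite lists and forces the whole list skeleton into memory.
dropWhileRevNaive :: (a -> Bool) -> [a] -> [a]
dropWhileRevNaive p = reverse . dropWhile p . reverse

-- foldr version: non-matching elements are emitted immediately; a run of
-- matching elements is held only until the list ends (then dropped) or a
-- non-matching element appears (then emitted).  The crux is 'null xs'.
dropWhileRev :: (a -> Bool) -> [a] -> [a]
dropWhileRev p = foldr (\x xs -> if p x && null xs then [] else x : xs) []
```

For example, dropWhileRev (== ' ') strips trailing spaces, and unlike the naive version it produces output lazily on inputs such as "ab " ++ repeat 'c'.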
Distinguishing juvenile and adult yellow perch females from males is no longer an obstacle for aquaculture producers of this high-value fish, thanks to U.S. Department of Agriculture (USDA) scientists. A new step-by-step procedure developed by the scientists makes it easier to separate fish by gender for growth performance and physiological studies, and to manage broodstocks for reproduction and genetic selection. Physiologist Brian Shepherd and his colleagues at the Agricultural Research Service (ARS) Dairy Forage and Aquaculture Research Unit in Milwaukee, Wisconsin, developed the systematic method to segregate yellow perch females from males during early growth stages. Because females tend to grow faster and larger than males, females could often be mistaken for males when being selected for genetic improvement prior to reproductive maturity. Previously, it was extremely difficult to identify gender until fish matured (up to two years). Read more … Farmed salmon is rich in fat, but fillets of farmed salmon contain less marine omega-3 than previously, because a large fraction of the fish oil in the feed has been replaced by plant oil. Scientists have discovered a way of stimulating farmed salmon to convert plant oil in the feed to marine omega-3. This means that farmed salmon may become a net producer of marine omega-3. Read more … Demand for salmon is soaring and is driving expansion of aquaculture—fish farming. For years, environmentalists advised conscientious consumers to avoid farmed salmon, but that's starting to change, thanks to an evolving industry. Rich in protein and heart-healthy omega-3 fatty acids and low in saturated fats, salmon is increasingly being marketed as a healthy food. Demand for salmon has risen more than 20 percent in the last decade. Consumption is three times what it was in 1980. Read more … WASHINGTON – The U.S.
Department of Agriculture’s Animal and Plant Health Inspection Service (APHIS) is lifting the Viral Hemorrhagic Septicemia (VHS) Federal Order that was first issued in 2006 in response to an outbreak of the fish disease in the Great Lakes region. After studying the disease, conducting surveillance and evaluating the latest science, APHIS has determined it can safely remove the Federal Order as long as states maintain existing VHS regulations and other practices to reduce risk. By removing the Federal Order, which has become duplicative with state regulations, we can still protect the health of farmed and wild fish while also supporting the interstate movement needs of the aquaculture industry. Beginning June 2, APHIS will no longer prohibit or restrict the interstate movement of VHS-susceptible species of live fish from VHS-affected or at-risk states, including: Illinois, Indiana, Michigan, Minnesota, New York, Ohio, Pennsylvania and Wisconsin. In addition, APHIS will no longer restrict the importation of the same species of live fish from Ontario and Quebec, Canada into the United States. However, this action does not affect the U.S. Fish and Wildlife Service’s salmonid importation requirements as found in title 50 of the Code of Federal Regulations. Although APHIS will no longer regulate VHS, the Agency’s Veterinary Services program will continue to work with states and industry to promote sound biosecurity practices and share scientific updates regarding the disease. Please find attached (PDF) a Federal Register notice announcing a series of public meetings to obtain comments on a proposed rule that would establish requirements for shippers, carriers by motor vehicle and rail vehicle, and receivers engaged in the transportation of food, including food for animals, to use sanitary transportation practices to help ensure the safety of the food or feed products they transport.
This may have implications for the commercial aquaculture feed manufacturing and transportation industries. The Food and Drug Administration (FDA or we) is announcing three public meetings to discuss the proposed rule that would establish requirements for shippers, carriers by motor vehicle and rail vehicle, and receivers engaged in the transportation of food, including food for animals, to use sanitary transportation practices to help ensure the safety of the food they transport. The proposed rule is part of our larger effort to focus on prevention of food safety problems throughout the food chain and is part of our implementation of the Sanitary Food Transportation Act of 2005 (2005 SFTA) and the FDA Food Safety Modernization Act (FSMA). The purpose of the public meetings is to inform the public of the provisions of the proposed rule and the rulemaking process (including how to submit comments, data, and other information to the rulemaking docket) as well as solicit oral stakeholder and public comments on the proposed rule and to respond to questions about the rule. World Fishing & Aquaculture interviews Norwegian modern fish farming pioneer, Bjørn Myrseth to discover what he believes the future holds for the fast growing industry. Fish that do not require fish protein in their feed will become important in tomorrow’s aquaculture, says Norwegian fish farming pioneer, Bjørn Myrseth. “I will continue to work with marine fish that can be grown in cages, but at the moment I am also interested in taking a look at the herbivore or omnivore fish that are sold at low prices such as tilapia and pangasius.” This is far removed from the farming of rainbow trout and Atlantic salmon in Norway, which is where Mr Myrseth began his long career in aquaculture. This career has taken him all over the world and has involved the farming of many species.
Read more … A young Chilean researcher was awarded a prize by a prestigious US institution for her research aimed at developing technology to obtain an antibiotic substitute for farmed salmon. The new technological solution is environmentally friendly. Its goal is to improve productivity in salmon farming through the use of a food additive with an active compound derived from indigenous marine bacteria off the coast of Valparaíso Region. Due to her innovative research, co-founder of the company Micro Marine Biotech and researcher at the University of Valparaiso, Claudia Ibacache, received an award from MIT Technology Review, a magazine from Massachusetts Institute of Technology (MIT), reported Valparaiso University. Read more … Recirculating aquaculture systems (RAS) are increasingly used to rear aquatic animals – as this use increases so does the potential for loss from disease. Unfortunately, few drugs are presently approved in the U.S. for use in RAS to control disease. This survey<http://www.surveymonkey.com/s/Viterbo_RAS> seeks to develop information that will make it easier for safe and effective drugs to be approved for use in RAS. As part of the drug approval process, the U.S. Food and Drug Administration assesses not only the efficacy of the drug for its intended purpose, but also its potential direct and indirect effects on the treated animals and the environment. There is little information currently available that summarizes how the various types of RAS are actually operated and the conditions under which a drug might be applied in those RAS. There is even less information available on how drugs might affect RAS biofilter operation and function; this information is important for determining the fate of drugs within RAS facilities, their potential effects on water quality, and for evaluating how much of the drugs might be released in RAS effluents.
The lack of this broad baseline information on RAS and biofilters complicates the FDA’s assessments of potential risks to both the treated animal and the environment. Viterbo University (La Crosse, Wisconsin) developed this survey<http://www.surveymonkey.com/s/Viterbo_RAS> to describe current RAS operations and procedures, which, when combined with other drug effects information, will help inform the decisions FDA makes in regards to the approval of drugs to control disease in animals reared in RAS.

* Most respondents will complete the survey in about 30 minutes.

What information will you be asked to provide?

* Information about your RAS (biofilter type/media/volume or size; rearing tank volumes; water flow/replacement/clarification/chemistry; species reared and feed protein level)
* Information about oral or waterborne drugs used or considered for use in your RAS, including:
  * Drug impacts on biofilter function
  * Annual number of treatments administered, treatment timing, number of tanks treated
  * How treated water is handled within the RAS – i.e., can tank water bypass the biofilter and be discharged, etc.
* Pathogens/diseases that impact your facility

What if I don’t have time to complete the survey<http://www.surveymonkey.com/s/Viterbo_RAS>?

* We understand time is precious and you might not be able to complete the entire survey. If you can’t complete the entire survey, please complete Section 1 and the first two questions of Section 2.

How will the information be used?

* Viterbo University will summarize the information at national and regional scales; individual facility or state summaries will not be developed.
* National summaries of RAS characteristics will be provided to FDA; regional summaries will be submitted only if needed. Individual facility or state information will not be submitted.

Who should complete the survey<http://www.surveymonkey.com/s/Viterbo_RAS>?
* All facilities that operate RAS, whether operated for profit, public fish stocking, education, or research, are encouraged to complete the survey.

How will my information be protected?

* Individual and facility identity and responses will be kept strictly confidential.
* Facilities that respond to the survey will be assigned a unique facility identification number.
* Individual facility information (name or location) will not be submitted to FDA; only nationally or regionally summarized data will be submitted to FDA.
* Viterbo University will retain completed and coded surveys on secure media accessible only to the survey coordinator.

Why is Viterbo University completing this survey?

* Students and faculty from Viterbo have supported several research projects to enhance access to safe and effective drugs for use in aquaculture through collaborations with the U.S. Geological Survey Upper Midwest Environmental Sciences Center.
* Our involvement with this survey furthers the mission of the Biology Department at Viterbo to train the next generation of scientists by engaging students outside the classroom through involvement in real-world research problems to tackle science challenges.

How can I get more information?

The National Aquaculture Association (NAA) is presenting a free webinar, “Fish Nutrition 101: What Fish Farmers Need to Know about Feeds and Feeding”, on November 19, 2013. Feed represents the largest cost of production in aquaculture, and there are very few aspects of aquaculture that aren’t directly or indirectly influenced by feeds and feeding practices. Growth and efficiency, broodstock performance and gamete quality, product value, water quality and effluent management, budgets and business planning, environmental impacts, etc.—what you feed and how you feed it affects virtually everything from egg to plate.
In this webinar, we will discuss the basic nutritional requirements of fish and how these differ from terrestrial livestock, attributes of feeds and how to choose the best one for your operation, and feeding strategies to maximize efficiency. Special topics, including fish meal/fish oil sparing and omega-3 enriched products, will also be discussed. Please join us at this free National Aquaculture Association webinar to learn more about the practical aspects of fish nutrition, aquafeeds, and feeding practices. Dr. Jesse Trushenski is an Associate Professor with the Center for Fisheries, Aquaculture, and Aquatic Sciences (CFAAS) at Southern Illinois University Carbondale, where she heads a research team dedicated to aquaculture nutrition and fish physiology. Holding degrees from Western Washington University (B.S., 2002) and Southern Illinois University Carbondale (Ph.D., 2006), Dr. Trushenski is also President of the Fish Culture Section of the American Fisheries Society. Linda ODierno has over 25 years of experience working with the fish and seafood industry and is currently the Outreach Specialist for the National Aquaculture Association. Prior to that, she served as Coordinator of Fish and Seafood Development for the New Jersey Department of Agriculture and was a Regional Seafood Specialist with New York Sea Grant. Host: This is the third in a series of webinars presented by the National Aquaculture Association. The NAA outreach program provides educational programming for the U.S. aquaculture industry, foodservice professionals, nutritionists and dieticians, retailers and wholesalers, and consumers. Webinar topics are selected with industry input to meet actual information needs. Recorded webinars are available on the NAA website: www.thenaa.net Registration Link: Here is the link for people to register: https://naa.ilinc.com/perl/ilinc/lms/event.pl?int=1.
To register for the event “Fish Nutrition 101: What Fish Farmers Need to Know about Feeds and Feeding”, check the name to register and then click the “Register” button below the list of items. Please see the information below on a new decision support tool for screening non-native freshwater fishes, which is now available on the Centre for Environment, Fisheries and Aquaculture Science (CEFAS) website. Thanks to Jeff Hill with the Tropical Aquaculture Laboratory at the University of Florida for providing this information. This is the Fish Invasiveness Screening Kit (FISK) v2.03, developed in a collaboration between UF Fisheries and Aquatic Sciences (Hill and Larry Lawson), CEFAS (Gordon Copp), and the Florida Fish and Wildlife Conservation Commission (Scott Hardin) with the assistance of Lorenzo Vilizzi (La Trobe University/Murray-Darling Freshwater Research Center). This risk screening tool has been used in several international projects and published journal articles and is now freely available online (Excel spreadsheet tool and user guide pdf).
Robyn+Jeff // Cancun, Mexico // Married I've always loved what Bob Goff says about extravagant love: "Love is never stationary. In the end, love doesn't just keep thinking about it or keep planning for it. Simply put: love does" Robyn and Jeff met years ago at a wedding in Cancun, Mexico. Jeff wrote Robyn a letter once the week was over letting her know how much he loved getting to know her. And then years later, they got married on the same beach where they first met. But what happened in-between those two weddings was what really mattered: They continued to get to know each other, ran (literal) races together, challenged each other to train and PUSH themselves, Jeff came to South Carolina to help take care of Robyn's sister when she got sick and Robyn did the same when Jeff's sister was diagnosed with the same cancer. They DO. They are a couple that takes love as a verb and runs with it. And even with me: I don't think I'll be able to express the gratitude I have for Robyn, Jeff and the Galvan clan for not only bringing me to Mexico to document their wedding but for bringing me there for their entire wedding week to simply relax, soak up the sun and do nothing at the amazing resort we all stayed at. Talk about extravagant love! AND these two take marriage hardcore and finished an IRONMAN (!!!) competition yesterday. Crazy. Incredible.
1. Introduction {#sec1-cancers-10-00375}
===============

Mechanistic (formerly mammalian) target of rapamycin (mTOR) is often referred to as a master regulator of cell growth \[[@B1-cancers-10-00375]\]. mTOR is a serine/threonine protein kinase that coordinates cell growth, metabolism, protein synthesis and autophagy as part of a protein kinase complex, termed mTOR complex 1 (mTORC1). mTORC1 is fundamentally involved in the build-up of cellular biomass, which is rate-limiting for hyper-proliferative cancer cells. As well as cell growth control, mTORC1 promotes metabolic transformation, neovascularization, cell survival and metastasis (reviewed in reference \[[@B2-cancers-10-00375]\]). Genetic mutations leading to mTORC1 hyperactivation often occur in sporadic cancer but also underlie several tumor predisposition syndromes, such as Tuberous Sclerosis Complex (TSC) \[[@B3-cancers-10-00375]\] and Cowden disease/PTEN hamartoma syndrome (reviewed in reference \[[@B4-cancers-10-00375]\]). mTORC1 hyperactivity has also been identified in numerous cancer types including lung (\>50% in both small cell and non-small cell lung cancer cases \[[@B5-cancers-10-00375],[@B6-cancers-10-00375]\]), ovarian (approximately 70% of ovarian cancers \[[@B7-cancers-10-00375]\]), renal cell carcinoma \[[@B8-cancers-10-00375]\] and head and neck cancers \[[@B9-cancers-10-00375]\]. Although uncommon, activating mutations within the kinase domain of mTOR occur in cancer \[[@B10-cancers-10-00375],[@B11-cancers-10-00375]\]. More frequently, genes that function upstream of the mTORC1 signaling pathway are mutated in cancer, such as PI3K, PTEN and RAS, which then cause aberrant signal transduction through mTORC1 to drive cancer growth \[[@B12-cancers-10-00375]\]. TSC is an autosomal dominant condition caused by mutations in either TSC1 or TSC2 and is characterized by mTORC1 hyperactivity, tumor growth in multiple organs, neurocognitive problems and epilepsy \[[@B13-cancers-10-00375]\].
TSC1 and TSC2 form a tumor suppressor protein complex that functions as a GTPase-activating protein (GAP) towards the small G-protein, Ras homologue enriched in brain (Rheb) \[[@B14-cancers-10-00375],[@B15-cancers-10-00375]\]. When associated with TSC1, the GAP function of TSC2 switches Rheb from its active GTP-bound state to an inactive GDP-bound state to turn off mTORC1. Consequently, loss-of-function mutations in either TSC1 or TSC2 cause Rheb to become predominantly GTP-loaded, thereby constitutively turning on mTORC1. mTORC1 inhibitors, such as rapamycin analogues (rapalogues), are currently used as the first-line therapy to treat TSC and are effective at shrinking tumors and delaying disease progression. However, mTORC1 inhibitors are cytostatic, meaning that TSC tumor cells regrow after treatment withdrawal (reviewed in references \[[@B16-cancers-10-00375],[@B17-cancers-10-00375]\]). This cytostatic nature of mTOR inhibitors is a principal reason why rapalogues have had limited clinical success in the cancer setting, where the lack of cytotoxicity leads to acquired drug resistance (reviewed in reference \[[@B2-cancers-10-00375]\]). To overcome the shortcomings of directly targeting mTORC1 in cancer, an alternative therapy might be to exploit the intrinsic vulnerabilities of cancer cells that have constitutive mTORC1 activation. In principle, one could take advantage of the cancer cell's inability to restore homeostatic balance during prolonged periods of drug-induced cell stress to trigger a cytotoxic response. In contrast to cancer cells, non-cancerous cells would have greater flexibility in their homeostatic pathways, allowing them to better tolerate drug treatment.
As loss of TSC1/2 and mTORC1 hyperactivity results in an increased burden of unfolded protein within the endoplasmic reticulum (ER), which causes membrane expansion of the ER and activation of the unfolded protein response (UPR) \[[@B18-cancers-10-00375]\], a possible therapy would be to employ ER stress inducers. To enhance the cytotoxic effect, one could also target cell survival pathways, such as autophagy. Previously, we examined the combined effect of chloroquine (an autophagy inhibitor) with nelfinavir (an ER stress inducer), which selectively enhanced cell death in mTORC1-hyperactive cells \[[@B19-cancers-10-00375]\]. Nelfinavir is a clinically approved HIV inhibitor that induces ER stress and shows promise as an anti-cancer agent \[[@B20-cancers-10-00375]\]. While these clinically viable drugs were previously shown to be selectively cytotoxic to mTORC1-active cells, combined chloroquine and nelfinavir treatment was not sufficient to kill all the cells, showing a heterogeneous drug effect and a significant level of drug resistance. In this current study, we wanted to refine our previous combinatory drug therapy to enhance its cytotoxic potency. To do this, we considered dual treatment with nelfinavir and mefloquine, which appeared to kill a higher percentage of *Tsc2*-deficient cells when compared to nelfinavir and chloroquine in our initial analysis \[[@B19-cancers-10-00375]\]. Mefloquine is a chloroquine derivative used in both the prevention and treatment of malaria and, unlike chloroquine, more readily crosses the blood-brain barrier. Mefloquine has been reported to inhibit autophagy (via blocking the formation of autophagosomes \[[@B21-cancers-10-00375]\]), to induce reactive oxygen species (ROS) \[[@B22-cancers-10-00375]\] and to prevent PI3K/Akt/mTORC1 signaling \[[@B23-cancers-10-00375]\].
In this study, we demonstrate that nelfinavir and mefloquine show therapeutic promise as a drug combination with high potency to kill TSC2-deficient cells and cancer cells with high mTORC1 activity.

2. Results {#sec2-cancers-10-00375}
==========

2.1. Nelfinavir and Mefloquine Synergize, and Selectively Kill Tsc2-Deficient Cell Lines {#sec2dot1-cancers-10-00375}
----------------------------------------------------------------------------------------

To build on our previous study \[[@B19-cancers-10-00375]\] and to further explore the impact of nelfinavir and mefloquine dual therapy, we tested the ability of nelfinavir and mefloquine to synergize and induce cell death. Loss of cell viability was quantified with escalating concentrations of drug, either as single agents or in combination, using flow cytometry with DRAQ7 staining. DRAQ7 can only penetrate the membranes of cells that are compromised (i.e., are no longer viable). Loss of cell viability (determined by an increase of DRAQ7 fluorescence) with single drug treatments for nelfinavir and mefloquine is shown in [Figure 1](#cancers-10-00375-f001){ref-type="fig"}A,B, respectively. To ascertain how synergistic this combination was, a range of mefloquine concentrations were tested with a fixed 10 µM concentration of nelfinavir ([Figure 1](#cancers-10-00375-f001){ref-type="fig"}C). Quantification of cell death was processed in CompuSyn using a non-constant ratio approach to generate a Combination Index (CI) value, where a score \<1 is considered synergistic, a score of 1 additive, and a score \>1 antagonistic. Results reveal that 10 µM mefloquine was synergistic with 10 µM nelfinavir to induce cell death in the *Tsc2*−/− mouse embryonic fibroblasts (MEFs) (CI value = 0.03). Due to this finding, further assays were carried out using 10 µM mefloquine and 10 µM nelfinavir.
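The CI values quoted above come from the Chou-Talalay method, which tools such as CompuSyn implement. As a sketch of the scoring rule only, with hypothetical dose values that are not taken from this study:

```python
def combination_index(d1, d2, dx1, dx2):
    """Chou-Talalay combination index (the scoring rule CompuSyn reports).

    d1, d2   : doses of drugs 1 and 2 used together to reach effect x
    dx1, dx2 : doses of each drug *alone* needed for the same effect x
    CI < 1 -> synergy, CI = 1 -> additive, CI > 1 -> antagonism.
    """
    return d1 / dx1 + d2 / dx2

# Hypothetical illustration: if 10 + 10 uM together reach an effect that
# would require 80 uM of either drug alone, the combination is synergistic.
ci = combination_index(10, 10, 80, 80)  # 0.25, i.e. CI < 1
```

In practice the single-agent doses-for-effect (dx1, dx2) are interpolated from each drug's dose-response curve rather than measured directly.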
We observed selective cytotoxicity with the combinatory drug treatment in cells lacking *Tsc2*, i.e., 37.6% +/− 11.8 standard deviation (St-Dev) in the *Tsc2*+/+ MEFs versus 96% +/− 2 in the *Tsc2*−/− MEFs ([Figure 1](#cancers-10-00375-f001){ref-type="fig"}D). Etoposide was employed as a positive control and induced cell death in both the *Tsc2*+/+ and *Tsc2*−/− MEFs. To confirm that the genetic loss of *Tsc2* was responsible for selective cell death, mefloquine and nelfinavir were then tested in ELT3 (Eker rat leiomyoma-derived) cells ([Figure 1](#cancers-10-00375-f001){ref-type="fig"}E), another model of TSC \[[@B24-cancers-10-00375]\]. Results were comparable to those observed in the *Tsc2*−/− MEFs, where ELT3-V3 cells (lacking *Tsc2*) had a significant induction of cell death (76% +/− 6.3) when compared to the rescued ELT3-T3 cells with *Tsc2* re-expressed (42% +/− 5.4). To determine whether combined nelfinavir/mefloquine drug treatment could also target sporadic cancer cell lines, we tested three mTORC1-hyperactive cancer cell lines ([Figure 1](#cancers-10-00375-f001){ref-type="fig"}F). Results from flow cytometry showed that combined 10 µM nelfinavir with 10 µM mefloquine treatment caused a high degree of cytotoxicity in the MCF7 breast cancer cell line (67% +/− 8.30), in the HCT116 colorectal cancer cell line (96% +/− 1.95), and in the NCI-H460 lung cancer cell line (89% +/− 2.5). As single drug treatments, nelfinavir and mefloquine were not cytotoxic to MCF7 cells. As a mono-agent, mefloquine was observed to potently kill the HCT116 and NCI-H460 cell lines (but to a lesser degree when compared to combination treatment with nelfinavir). Etoposide was used as a control, which was effective at killing the HCT116 and NCI-H460 cells, while the MCF-7 cells were notably resistant.

2.2. Combined Nelfinavir and Mefloquine Treatment Inhibits Tumor Formation, Induces Cytotoxicity in Tsc2−/− Spheroids, and Prevents Spheroid Outgrowth {#sec2dot2-cancers-10-00375}
------------------------------------------------------------------------------------------------------------------------------------------------------

To determine whether nelfinavir and mefloquine combination treatment prevents the formation of tumors in vitro, *Tsc2*−/− MEFs were grown in soft agar in the continued presence or absence of drug over 14 days ([Figure 2](#cancers-10-00375-f002){ref-type="fig"}A). We observed that when combined, nelfinavir and mefloquine significantly reduced colony growth compared to single drug treatments, which as mono agents did not significantly reduce colony size. Average colony size was 49.1 µm +/− 19.27 in diameter after nelfinavir/mefloquine treatment compared to 90.7 µm +/− 22.12 observed in samples treated with dimethyl sulfoxide (DMSO). To examine whether drug treatments could induce cell death in established tumor spheroids, *Tsc2*−/− MEFs were grown as spheroids and, once established, were then incubated with either DMSO, nelfinavir or mefloquine (as single drug treatments or in combination) for 4 days ([Figure 2](#cancers-10-00375-f002){ref-type="fig"}B). To measure loss of cell viability, spheroids were DRAQ7-stained during drug incubation. Intensity of DRAQ7 fluorescence was higher after combined treatment with nelfinavir and mefloquine (1424 MFU (mean fluorescent units) +/− 404), when compared to single drug treatment with nelfinavir (668 MFU +/− 146) and DMSO (768 MFU +/− 214). Elevated DRAQ7 fluorescence was also observed with single drug treatment with mefloquine (1216 MFU +/− 328). Tumor spheroids were transferred to plastic tissue culture plates to assess recovery and outgrowth of cells after drug removal ([Figure 2](#cancers-10-00375-f002){ref-type="fig"}C).
Cell outgrowth was measured over 72 h, and showed cell recovery from the tumor spheroid after single drug treatment with either nelfinavir or mefloquine, although cell outgrowth was much slower with mefloquine when compared to either DMSO or nelfinavir treatments. In contrast, cells were unable to recover after combined treatment with mefloquine and nelfinavir, with no detectable outgrowth of cells over the 72 h period. These data reveal that the combination of mefloquine and nelfinavir is effective at killing the bulk tumor. 2.3. Homeostatic Balance is Lost in Tsc2−/− MEFs after Combination Treatment with Nelfinavir and Mefloquine {#sec2dot3-cancers-10-00375} ----------------------------------------------------------------------------------------------------------- We postulated that the most likely mechanism of cell death was prolonged ER stress coupled with an inability to recover. Therefore, we initially examined ER stress signaling. Western blots were carried out on a panel of ER stress markers (inositol-requiring and ER-to-nucleus signaling protein 1α (IRE1α), activating transcription factor 4 (ATF4), C/EBP homologous protein (CHOP), growth arrest and DNA damage-inducible protein (GADD34), and Sestrin 2 (SESN2)), with thapsigargin treatment as a positive control. We observed a large difference in expression of ER stress markers when comparing *Tsc2*−/− to their wildtype *Tsc2*+/+ controls ([Figure 3](#cancers-10-00375-f003){ref-type="fig"}A). For instance, the *Tsc2*−/− MEFs had a higher level of IRE1α, ATF4, CHOP and GADD34 protein expression after thapsigargin and nelfinavir as single drug treatments, while mefloquine weakly induced ER stress. The control cells exhibited a lower basal level of ER stress and showed little ER stress induction with single or combination treatments. 
Analysis of p70 ribosomal protein S6 kinase 1 (S6K1) phosphorylation showed that *Tsc2*−/− MEFs had a high level of mTORC1 activity that was not altered with drug treatments. To confirm that ER stress was enhanced after treatment, X-box binding protein 1 (*Xbp1*) mRNA splicing was determined ([Figure 3](#cancers-10-00375-f003){ref-type="fig"}B), revealing a higher level of *Xbp1* mRNA splicing in *Tsc2*−/− cells after combination drug treatment when compared to the untreated *Tsc2*−/− and treated *Tsc2*+/+ controls. We also observed a high level of *Xbp1* mRNA splicing in untreated *Tsc2*−/− MEFs, showing that these cells have a higher basal level of ER stress. To assess transcriptional changes, RNA sequencing was carried out on mRNA samples generated from *Tsc2*+/+ and *Tsc2*−/− MEFs treated with either DMSO or nelfinavir and mefloquine. We observed elevated expression of ER stress genes in both *Tsc2*+/+ and *Tsc2*−/− MEF cell lines after combined nelfinavir and mefloquine drug treatment, which was higher in the absence of *Tsc2* ([Figure 3](#cancers-10-00375-f003){ref-type="fig"}C,D and [Table S1](#app1-cancers-10-00375){ref-type="app"}). We also observed elevated expression of ER stress genes in untreated *Tsc2*−/− MEFs when compared to wildtype. Conversely, negative regulators of ER stress such as CREBRF and IMPACT were expressed at higher levels in the *Tsc2*+/+ MEFs when compared to the *Tsc2*−/− MEFs ([Figure 3](#cancers-10-00375-f003){ref-type="fig"}E and [Table S2](#app1-cancers-10-00375){ref-type="app"}). The volcano plot illustrates the differential expression of genes involved in ER stress between the *Tsc2*+/+ and *Tsc2*−/− MEFs after combined nelfinavir and mefloquine treatment ([Figure 3](#cancers-10-00375-f003){ref-type="fig"}E). 
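For readers less familiar with volcano plots, the two axes are simple transforms of per-gene statistics: log2 of the fold change on the x-axis and −log10 of the adjusted *p*-value on the y-axis. The sketch below illustrates this in plain Python; the counts and *p*-value are hypothetical placeholders, not values from this dataset, and the actual analysis used the DESeq2 pipeline described in the Methods.

```python
import math

# Hypothetical normalized read counts for one gene (e.g., an ER stress
# marker) in Tsc2+/+ vs Tsc2-/- MEFs after drug treatment. These numbers
# are illustrative only -- they are not taken from the study's dataset.
mean_wt = 120.0   # mean normalized count, Tsc2+/+ MEFs
mean_ko = 960.0   # mean normalized count, Tsc2-/- MEFs
padj = 1e-6       # FDR-adjusted p-value (assumed)

# Volcano plots place log2 fold change on x and -log10(p) on y.
log2_fc = math.log2(mean_ko / mean_wt)
neg_log10_p = -math.log10(padj)

print(log2_fc)      # 3.0 -> an 8-fold increase in the Tsc2-/- cells
print(neg_log10_p)  # 6.0
```

A gene lands in the upper-right of the plot (as the highlighted ER stress genes do) when both values are large: strongly up-regulated and statistically robust.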
To determine the capacity of *Tsc2*+/+ and *Tsc2*−/− MEFs to recover from ER stress during treatment, we examined the expression of ER stress proteins after combined nelfinavir and mefloquine treatment at an early 6 h time point and at a longer 48 h time point. The 6 h time point reflects acute ER stress induction after treatment, while the 48 h time point reflects ER stress recovery during prolonged drug treatment ([Figure 3](#cancers-10-00375-f003){ref-type="fig"}F). In wildtype *Tsc2*+/+ MEFs, nelfinavir and mefloquine treatment increased the expression of CHOP and ATF4 protein at 6 h. However, by 48 h the expression of ATF4 and CHOP was restored to a level equivalent to that of untreated cells, indicating a complete recovery of the ER stress signaling pathway in the *Tsc2*+/+ MEFs. Similarly, the *Tsc2*−/− MEFs also recovered from ER stress at the 48 h time point. The ability to restore the ER stress signaling pathway in the *Tsc2*−/− MEFs implies that the ER stress pathway is unlikely to be the main trigger of the observed cell death response. 2.4. Cytotoxicity of Tsc2-Deficient Cells with Nelfinavir and Mefloquine is Energy Dependent {#sec2dot4-cancers-10-00375} -------------------------------------------------------------------------------------------- We next wanted to determine whether mTORC1 hyperactivity was important for the induction of cell death. We carried out flow cytometry with DRAQ7 staining in *Tsc2*−/− MEFs, HCT116, NCI-H460 and MCF7 cells treated with either DMSO or the nelfinavir and mefloquine combination, in the presence or absence of rapamycin, for 48 h ([Figure 4](#cancers-10-00375-f004){ref-type="fig"}A). Rapamycin failed to rescue cell death in these cell lines, showing that the cytotoxic effect of this drug combination is not prevented when mTORC1 is inhibited. 
Control Western blots confirm that rapamycin treatment was sufficient to block ribosomal protein S6 (rpS6) phosphorylation in these cells ([Figure 4](#cancers-10-00375-f004){ref-type="fig"}B). As mefloquine is classically known as an autophagy inhibitor, we next analyzed the build-up of lipidated LC3 to measure autophagy inhibition after 3 h of treatment with either mefloquine or chloroquine ([Figure 4](#cancers-10-00375-f004){ref-type="fig"}C). While mefloquine by itself did not cause an increase of lipidated LC3, there was an increase in the LC3-II lipidated isoform when mefloquine was combined with nelfinavir. It should be noted that the level of lipidated LC3 was not enhanced as much as in chloroquine- and nelfinavir-treated *Tsc2*−/− MEFs, indicating that mefloquine is not as potent as chloroquine at inhibiting autophagy in these cells. Given the weaker level of autophagy inhibition with mefloquine, we considered it unlikely that blockade of autophagy was triggering the cell death response. To explore alternative mechanisms that might cause cell death, the RNA sequencing data was further analyzed to compare gene expression changes in the *Tsc2*+/+ and *Tsc2*−/− MEFs ([Figure 5](#cancers-10-00375-f005){ref-type="fig"}A−C). Key genes involved in the regulation of metabolism and energy homeostasis were markedly up-regulated in the *Tsc2*−/− MEFs during combined treatment with nelfinavir and mefloquine. Of interest, PPARGC1α (typically referred to as PGC1α (peroxisome proliferator-activated receptor gamma coactivator 1α)) was basally expressed at a higher level in *Tsc2*−/− MEFs when compared to the *Tsc2*+/+ controls, and was further enhanced upon combined treatment with nelfinavir and mefloquine. Transcriptionally regulated genes of PGC1α were also observed to be more highly expressed in the *Tsc2*−/− MEFs when compared to wildtype. 
These included genes involved in glucose and lipid metabolism, such as peroxisome proliferator-activated receptor delta (PPARδ) and gamma (PPARγ), with more than a 2-fold increase in PPARδ gene expression in *Tsc2*−/− MEFs compared to *Tsc2*+/+ MEFs and a nearly 17-fold difference in PPARγ expression. Expression of genes involved in glycolysis was also upregulated, which is suggestive of metabolic stress. These genes included pyruvate dehydrogenase kinase 1 (PDK1), pyruvate carboxylase (PCX), lactate dehydrogenase B (LDHB) and glycerol-3-phosphate dehydrogenase 1 (GPD1). AMP-activated protein kinase (AMPK) is known to function upstream of PGC1α, and is involved in the gene expression of PGC1α as well as its activity \[[@B25-cancers-10-00375]\]. AMPK-regulated genes involved in glucose metabolism/storage such as acetyl-CoA carboxylase 2 (ACC2, encoded by the ACACB gene) and glycogen synthase 1 (GYS1) were expressed at a much higher level in *Tsc2*−/− MEFs and were further increased upon drug treatment with nelfinavir and mefloquine. The overall increase of mRNA expression of key genes involved in energy metabolism indicates that the *Tsc2*−/− MEFs are likely energy stressed. To determine whether energy stress could be a possible mechanism of cell death, flow cytometry with DRAQ7 labelling was carried out with samples treated for 48 h with either DMSO, or nelfinavir and mefloquine combination in the presence or absence of methyl pyruvate ([Figure 5](#cancers-10-00375-f005){ref-type="fig"}D). Methyl pyruvate can alleviate energy stress when supplemented to cells, as it is a direct substrate of the citric acid cycle to produce energy by oxidative phosphorylation \[[@B26-cancers-10-00375]\]. Methyl pyruvate partially rescued nelfinavir- and mefloquine-induced cell death when supplemented to the *Tsc2*−/− MEFs (causing a 40% rescue of cell death). 
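The "40% rescue" quoted above can be expressed with simple arithmetic. The sketch below uses one common definition of percent rescue (the drop in cell death caused by the supplement, relative to the death induced above the vehicle baseline); both the formula and the flow-cytometry readouts are illustrative assumptions, not the study's measurements.

```python
def percent_rescue(death_treated, death_treated_plus_mp, death_control):
    """Fraction of drug-induced death prevented by a supplement, as a %.

    Assumed definition for illustration (not taken from the paper):
    rescue = (death prevented by the supplement) /
             (death induced above the vehicle baseline) * 100
    """
    induced = death_treated - death_control
    prevented = death_treated - death_treated_plus_mp
    return 100.0 * prevented / induced

# Hypothetical flow-cytometry readouts (% DRAQ7-positive cells):
# DMSO control 10%, MQ/NFV 90%, MQ/NFV + methyl pyruvate 58%.
print(percent_rescue(90.0, 58.0, 10.0))  # 40.0
```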
We confirmed that methyl pyruvate partially restored energy homeostasis within cells, as observed by a reduction in phosphorylated 5'-AMP-activated protein kinase (AMPK) and acetyl-CoA carboxylase (ACC). Methyl pyruvate also reduced the protein levels of SESN2 at the 24 h time point ([Figure 5](#cancers-10-00375-f005){ref-type="fig"}E). To further assess the capacity of *Tsc2*−/− MEFs to recover from energy stress, we examined phosphorylation levels of ACC after combined nelfinavir and mefloquine treatment at 6 h and 24 h time points ([Figure 5](#cancers-10-00375-f005){ref-type="fig"}F). In the presence of methyl pyruvate, the *Tsc2*−/− MEFs behaved similarly to the *Tsc2*+/+ MEFs after treatment with nelfinavir and mefloquine, where ACC phosphorylation was increased at 6 h and was then restored to a level equivalent to untreated cells at 24 h. In contrast, in the absence of methyl pyruvate the *Tsc2*−/− MEFs had a delayed induction of ACC phosphorylation during nelfinavir and mefloquine treatment. For instance, phosphorylation of ACC was only marginally enhanced at 6 h, but was then markedly elevated at 24 h. Expression of the ER stress proteins ATF4, CHOP and GADD34 was significantly reduced at 24 h, showing recovery from ER stress. Collectively, our data shows that the *Tsc2*−/− MEF cells become energy stressed during the longer time points of nelfinavir and mefloquine treatment while they recover from ER stress. 3. Discussion {#sec3-cancers-10-00375} ============= The central premise of this study was to exploit the homeostatic vulnerabilities within TSC2-deficient cells with mTORC1 hyperactivity. The load of unfolded protein within the ER is intensified by mTORC1 hyperactivity, caused by heightened levels of de novo protein translation and reduced efficiency of autophagy to remove the unfolded protein. Consequently, cell lines lacking TSC2 become more sensitive to drug treatments that induce ER stress. 
In this study, we employed nelfinavir as the ER stress inducer, and mefloquine as a potential autophagy inhibitor. We revealed that nelfinavir and mefloquine synergize to selectively kill TSC2-deficient cells. The concentrations of both drugs used in this study are clinically viable. For instance, 10 µM falls within the concentration range of mefloquine found in patient serum \[[@B27-cancers-10-00375]\]. Regarding nelfinavir, the 10 µM concentration used in this study is higher than the manufacturer recommended trough concentration (1−3 µM). However, nelfinavir serum concentration has previously been reported in HIV patients at a similar concentration, ranging from 4.96 µM \[[@B28-cancers-10-00375]\] up to 18 µM \[[@B29-cancers-10-00375]\]. In fact, it has been reported that nelfinavir is well tolerated in cancer patients at doses 2.5 times the FDA-approved dose for HIV management \[[@B30-cancers-10-00375]\]. We observed that the combination of nelfinavir and mefloquine had a long-lasting effect on *Tsc2*−/− tumor spheroids, when compared to single drug treatments. For instance, with mefloquine, despite an increase in DRAQ7 staining, indicating a level of cell death, cell recovery and outgrowth from the bulk tumor after drug withdrawal was apparent. However, no cells recovered when nelfinavir was combined with mefloquine. As well as cells lacking *Tsc2*, the nelfinavir and mefloquine drug combination was effective at killing sporadic cancer cell lines with aberrant signaling through the TSC1/TSC2 signaling axis, with MCF7 cells being much more sensitive to the dual therapy when compared to single drug treatments. Through transcriptional profiling of nelfinavir and mefloquine treated cells, we observed an ER stress signature at the early 6 h time-point of treatment, implying that ER stress might be involved in the observed cytotoxic response. 
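Serum drug levels are often reported in mass units (mg/L), so comparing published patient data with the molar concentrations used here requires a molecular-weight conversion. A minimal sketch follows; the molecular weight below is an approximate value assumed for the nelfinavir free base and should be confirmed against a reference before use.

```python
def mg_per_l_to_um(mg_per_l, mw_g_per_mol):
    """Convert a serum drug level in mg/L to micromolar (µM)."""
    # (mg/L) / (g/mol) = mmol/L = mM, so multiply by 1000 to get µM.
    return 1000.0 * mg_per_l / mw_g_per_mol

# Assumed molecular weight of the nelfinavir free base, roughly
# 568 g/mol (an approximation for illustration only).
NFV_MW = 568.0

# Under that assumption, a serum level of ~5.7 mg/L corresponds to
# about 10 µM, the concentration used throughout this study.
print(round(mg_per_l_to_um(5.68, NFV_MW), 2))
```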
However, when we examined the longer time points of combined nelfinavir and mefloquine treatment, we observed recovery of the ER stress ATF4-CHOP signaling pathway within the *Tsc2*−/− MEF cells. ER stress-mediated cell death occurs when the protein level of CHOP remains persistently elevated over a long time course of drug treatment. Given the temporary elevation of CHOP in the continued presence of drug in both the *Tsc2*+/+ and *Tsc2*−/− MEFs, our data indicates a good level of ER stress recovery and excludes the possibility that CHOP triggers cell death. Further analysis of the RNA sequencing data in the *Tsc2*−/− MEFs implied that energy homeostasis pathways were elevated after co-treatment with nelfinavir and mefloquine. Methyl pyruvate rescued a significant degree of cell death, indicating that cytotoxicity by nelfinavir and mefloquine was at least partially mediated through energy deficiency. Methyl pyruvate is a methyl ester of pyruvic acid that restores cellular energy production downstream of glycolysis by acting as a mitochondrial citric acid cycle substrate. While ER stress is unlikely to be the trigger of cell death, recovery from ER stress might contribute to the depletion of energy through the de novo protein synthesis of chaperone and heat shock proteins that are required for the unfolding and refolding of protein aggregates in the ER, processes that heavily consume ATP. A recent study showed that TSC2-knockdown leads to mitochondrial oxidative stress \[[@B31-cancers-10-00375]\]. This degree of energy stress is presumably why TSC2-deficient cells are vulnerable to conditions that induce energy starvation \[[@B32-cancers-10-00375]\]. Mefloquine has shown promise as an anti-cancer drug in several cancer model settings. Liu et al. showed that mefloquine (whether as a single agent or in combination with paclitaxel) induced apoptosis in gastric cancer cell lines and gastric cancer xenograft mouse models \[[@B23-cancers-10-00375]\]. 
PC3 cells, a prostate cancer cell line, were also sensitive to mefloquine at 10 µM after 24 h of treatment \[[@B22-cancers-10-00375]\]. However, no further toxicity was detected between 24 h and 72 h of mefloquine treatment, suggesting that mefloquine may not be fully effective as a monotherapy. Mefloquine was shown to inhibit autophagy, trigger ER stress and induce cell death in human breast cancer lines, and was more cytotoxic when compared to chloroquine \[[@B33-cancers-10-00375]\]. Our data is in line with these previous studies, except we observed weaker inhibition of autophagy with mefloquine treatment. We further enhanced the cytotoxic effectiveness of mefloquine through dual treatment with nelfinavir and determined that cell death was caused by an imbalance in energy homeostasis. In the cancer setting, mefloquine is currently in a phase I clinical factorial trial (NCT01430351) in combination with the DNA-damaging agent temozolomide for glioblastoma multiforme. It should be noted that mefloquine has advantages over chloroquine for the treatment of brain cancers, as mefloquine has better penetration through the blood brain barrier. In summary, we show that mefloquine synergizes with nelfinavir to induce a high degree of cell death through energy stress. There is clinical application for this drug combination in the cancer setting, where we can exploit the lack of flexibility that cancer cells have in restoring homeostatic balance. 4. Materials and Methods {#sec4-cancers-10-00375} ======================== 4.1. Cell Culture and Drug Treatments {#sec4dot1-cancers-10-00375} ------------------------------------- Tsc2-null ELT3-V3 and control ELT3-T3 cells re-expressing TSC2 were kindly provided by Cheryl Walker (M.D. Anderson Cancer Center, Houston, TX, USA). *Tsc2*+/+ *p53*−/− and *Tsc2*−/− *p53*−/− MEFs were kindly provided by David J. Kwiatkowski (Harvard University, Boston, MA, USA). 
Human breast cancer (MCF7), human colorectal cancer (HCT116) and human lung carcinoma (NCI-H460) cell lines were purchased from ATCC. All cell lines were routinely tested using the Venor GeM Classic PCR kit (CamBio) and were clear of mycoplasma. Cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM), while the MCF7 cell line was incubated in Roswell Park Memorial Institute (RPMI) 1640. Cell media was supplemented with 10% (v/v) Fetal Bovine Serum (FBS) and 100 U/mL Penicillin and 100 μg/mL Streptomycin (Life Technologies Ltd., Paisley, UK). All cell lines were incubated at 37 °C, 5% (v/v) CO~2~ in a humidified incubator. Nelfinavir mesylate hydrate, chloroquine di-phosphate salt, mefloquine hydrochloride, rapamycin, etoposide and thapsigargin were purchased from Sigma Aldrich Ltd. (Gillingham, Dorset, UK). DMSO was used as a solvent to make stock solutions of etoposide (100 mM), mefloquine (50 mM), nelfinavir (30 mM), thapsigargin (10 mM) and rapamycin (100 µM). Chloroquine was made fresh in culture medium at a 100 mM stock and further diluted in culture medium to the required concentration before use. Drugs dissolved in DMSO, or DMSO vehicle control, were added to cells in culture media and diluted to a final concentration of below 0.8% (v/v) DMSO. 4.2. Flow Cytometry {#sec4dot2-cancers-10-00375} ------------------- Treated cells were collected and incubated with 3 µM DRAQ7 (Biostatus, Leicestershire, UK) at room temperature for 10 min. Flow cytometry was performed using a FACS Calibur flow cytometer (Becton Dickinson, Cowley, UK) with excitation set at 488 nm and detection of fluorescence in log mode at wavelengths greater than 695 nm. Cell Quest Pro software was used for signal acquisition. A total of 10,000 events per sample were collected. 4.3. 
Western Blotting {#sec4dot3-cancers-10-00375} --------------------- S6K1, phospho-S6K1(Thr389), rpS6, phospho-rpS6(Ser235/236), IRE1α, CHOP, GADD34, ATF4, TSC2, SESN2, AMPK, phospho-AMPK(Thr172), ACC, phospho-ACC (Ser79) and β-actin antibodies were purchased from Cell Signaling Technology (Danvers, USA). LC3 antibodies were bought from Novus Ltd. (Cambridge, UK). Cells were washed in ice-cold phosphate buffered saline (PBS) and then lysed in cell lysis buffer (20 mM Tris (pH 7.5), 125 mM NaCl, 50 mM NaF, 5% (v/v) glycerol, 0.1% (v/v) Triton X-100, supplemented with 1 mM dithiothreitol (DTT), 1 µg/mL pepstatin, 20 µM leupeptin, 1 mM benzamidine, 2 µM antipain, 0.1 mM PMSF, 1 mM sodium orthovanadate and 1 nM okadaic acid prior to cell lysis). Cell lysates were sonicated using a diagenode bioruptor (Diagenode, Seraing, Belgium) and centrifuged at 13,000 rpm for 8 min. Protein concentrations were determined by a Bradford assay (Thermo Fisher Scientific, Paisley, UK). Samples were diluted in 4 × NuPAGE loading sample buffer (Life Technologies) with 25 mM DTT and boiled at 70 °C for 10 min. Western blot was performed as previously described \[[@B34-cancers-10-00375]\]. 4.4. mRNA Extraction and Reverse Transcription {#sec4dot4-cancers-10-00375} ---------------------------------------------- Treated cells were washed in ice cold PBS then lysed using RNAprotect Cell Reagent (Qiagen, West Sussex, UK). RNA concentrations were determined by measuring the absorbance at 260 nm and 280 nm in a Nanodrop spectrophotometer. Total RNA from each sample (1 μg) was converted to copy DNA (cDNA) using the Quantitect reverse transcription kit (Qiagen) following the manufacturer's protocol. 4.5. XBP-1 Splicing {#sec4dot5-cancers-10-00375} ------------------- Protocol for XBP-1 splicing is as described in reference \[[@B19-cancers-10-00375]\]. 
*Xbp1* primers (forward: 5′-AAA CAG AGT AGC AGC TCA GAC TGC-3′, reverse: 5′-TCC TTC TGG GTA GAC CTC TGG GA-3′) were synthesized by MWG Operon-Eurofin (Ebersberg, Germany). β-actin primers were purchased from Qiagen (QT01136772). PCR was performed in an Applied Biosystems GeneAmp 9700 PCR system under the following conditions: Initial denaturation step (94 °C, 3 min); 31 cycles of denaturation (94 °C, 45 s); annealing step (60 °C, 30 s); extension step (72 °C, 1 min); final extension step (72 °C, 10 min). A 3% (w/v) agarose (Appleton, Birmingham, UK) gel in 1 × Tris-acetate-EDTA buffer (4.84 g Tris-base (pH 8.0), 0.372 g EDTA and 1.7 mL acetic acid in 1 L deionized water) was made with 0.005% (v/v) GelRed nucleic acid stain (Biotium, Fremont, CA, USA). DNA samples were loaded with Orange G loading buffer (15 mL 30% (v/v) glycerol, 100 mg Orange G powder, deionized water, total volume 50 mL) and resolved on the gel at 100 V. After 1 h, β-actin samples were analyzed. Samples were resolved for an additional 2 h for XBP-1 splicing. PCR products of *Xbp1* were 480 bp (unspliced) and 454 bp (spliced). 4.6. Tumor Formation Assay {#sec4dot6-cancers-10-00375} -------------------------- A 1.2% (w/v) agar solution in PBS (Difco Agar Noble (BD, Oxford, UK)) was diluted with DMEM to a final concentration of 0.6% (w/v) then added to a 6-well plate and allowed to set. A total of 150,000 *Tsc2*−/− MEF cells were added to a 0.3% (w/v) agar-PBS:DMEM mixture and 3 mL was layered on top and left to set. Plates were incubated overnight with 2 mL of DMEM. The following day, the 2 mL of DMEM was replaced with DMEM containing either drugs or DMSO. Media and drugs were refreshed every 48--72 h for 14 days. Images were taken using an EVOS XL Core camera (Life Technologies). Tumor diameter was analyzed using ImageJ. 4.7. Tumor Outgrowth Assay {#sec4dot7-cancers-10-00375} -------------------------- A total of 70 µL of 1.5% (w/v) agarose-PBS was added to each well of a 96-well plate and allowed to set. 
A total of 1000 *Tsc2*−/− MEF cells in 140 μL complete DMEM were added to each well. Spheroids were formed over 72 h. Following formation, spheroids were drug treated for 48 h and imaged. Treatment was then refreshed, a final concentration of 3 µM DRAQ7 was added to each well, and spheroids were incubated for an additional 48 h. Imaging and subsequent outgrowth was performed as previously described \[[@B35-cancers-10-00375]\]. 4.8. RNA Sequencing {#sec4dot8-cancers-10-00375} ------------------- Total RNA quality and quantity was assessed using an Agilent 2100 Bioanalyser and an RNA Nano 6000 kit (Agilent Technologies, Cheshire, UK). A total of 100--900 ng of total RNA with a RIN value \>8 was used as the input and the sequencing libraries were prepared using the Illumina^®^ TruSeq^®^ RNA sample preparation kit v2 (Illumina Inc., Cambridge, UK). The steps included two rounds of purification of the polyA-containing mRNA molecules using oligo-dT-attached magnetic beads followed by RNA fragmentation, 1st strand cDNA synthesis, 2nd strand cDNA synthesis, adenylation of 3'-ends, adapter ligation, PCR amplification (15 cycles) and validation. The manufacturer's instructions were followed. The libraries were validated using the Agilent 2100 Bioanalyser and a high-sensitivity kit (Agilent Technologies) to ascertain the insert size, and the Qubit^®^ (Life Technologies) was used to perform the fluorometric quantitation. Following validation, the libraries were normalized to 4 nM and pooled together. The pool was then sequenced using a 75 base paired-end (2 × 75 bp PE) dual index read format on the Illumina^®^ HiSeq2500, in rapid mode and again in high-output mode, according to the manufacturer's instructions. 
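The library normalization to 4 nM described above is standard C1·V1 = C2·V2 dilution arithmetic. A minimal sketch; the helper function and the example volumes are illustrative assumptions, not part of the TruSeq protocol itself.

```python
def dilution_volumes(stock_nm, target_nm, final_ul):
    """Volumes needed to dilute a sequencing library to a target molarity.

    Simple C1*V1 = C2*V2 arithmetic: returns (library volume, diluent
    volume) in µL for a final volume of `final_ul` µL.
    """
    if stock_nm < target_nm:
        raise ValueError("stock is already below the target molarity")
    library_ul = final_ul * target_nm / stock_nm
    diluent_ul = final_ul - library_ul
    return library_ul, diluent_ul

# e.g., a library quantified at 20 nM, normalized to 4 nM in 50 µL:
lib, dil = dilution_volumes(20.0, 4.0, 50.0)
print(lib, dil)  # 10.0 40.0
```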
Quality control checks of the resultant reads were performed using FastQC before mapping to the UCSC mouse mm10 reference genome using TopHat and Bowtie. Differentially expressed transcripts were identified using a DESeq2 analysis \[[@B36-cancers-10-00375]\] on normalized count data with the design formula set up to analyze all pairwise comparisons in the dataset using contrasts. The resultant *p*-values were corrected for multiple testing using the false discovery rate (FDR) method. Genes involved in cell survival were selected based on GO: 0008219 (cell death) from the complete list on AmiGO 2. 4.9. Statistical Analysis {#sec4dot9-cancers-10-00375} ------------------------- All experiments were carried out with a minimum of three biological repeats. Where applicable, results are presented as mean +/− St-Dev. Either a two-way ANOVA (with Bonferroni's multiple comparison post-hoc test) or one-way ANOVA (with Tukey's multiple comparison post-hoc test) was used to determine statistical significance. Significance was reported as a *p* value: \* *p* \< 0.05, \*\* *p* \< 0.01, \*\*\* *p* \< 0.001. 5. Conclusions {#sec5-cancers-10-00375} ============== In conclusion, mefloquine synergizes with nelfinavir to induce a high degree of cell death through energy stress. There is potential to use mefloquine with nelfinavir in the cancer setting, to treat cancer cells that have a lack of flexibility in restoring homeostatic balance. We would like to thank the Wales Gene Park for their contribution to this study. The following are available online at <http://www.mdpi.com/2072-6694/10/10/375/s1>, Table S1: RNAseq raw data of mefloquine and nelfinavir treatment of *Tsc2* MEFs, Table S2: RNAseq of *Tsc2*−/− compared to *Tsc2*+/+ after mefloquine and nelfinavir treatment. Conceptualization, D.M.D. and A.R.T.; Investigation, H.D.M. and C.E.J.; Resources, R.J.E. and A.R.T.; Data Curation, H.D.M. and E.A.D.; Methodology, R.J.E. 
and E.A.D.; Formal Analysis, H.D.M. and C.E.J.; Writing-Original Draft Preparation, A.R.T.; Writing-Review & Editing, H.D.M., E.A.D. and A.R.T.; Supervision, R.J.E., E.A.D., D.M.D. and A.R.T.; Project Administration, D.M.D. and A.R.T.; Funding Acquisition, R.J.E., D.M.D. and A.R.T. This research was funded by the Tuberous Sclerosis Association (to H.D.M., E.A.D. and A.R.T.), grant numbers (2013-P05) and (2015-S01); Cancer Research Wales (to C.E.J., R.J.E. and A.R.T.), grant number (508502); the Tuberous Sclerosis Alliance (to A.R.T.), grant number (03-15); and the Hospital Saturday Fund (to A.R.T.). This work was also supported by Health and Care Research Wales (the Wales Cancer Research Centre) (to E.A.D. and A.R.T.). The authors declare no conflict of interest. R.J.E. is a non-executive director of Biostatus Ltd., the vendor of DRAQ7. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, and in the decision to publish the results. ![Mefloquine and nelfinavir synergize to kill *Tsc2*−/− mouse embryonic fibroblasts (MEFs), ELT3-V3 and sporadic cancer cells. Dose-response curves were performed in *Tsc2*+/+ and *Tsc2*−/− MEFs using flow cytometry to measure cell death following treatment with (**A**) nelfinavir (NFV); (**B**) mefloquine (MQ) and (**C**) combined mefloquine with a fixed concentration of 10 µM nelfinavir (MQ/NFV); (**D**) *Tsc2*+/+ and *Tsc2*−/− MEFs; (**E**) ELT3-T3 and ELT3-V3; (**F**) MCF7, HCT116 and NCI-H460 were treated with either DMSO, etoposide (ETO), 10 µM mefloquine (MQ), 10 µM nelfinavir (NFV) or mefloquine combined with nelfinavir (MQ/NFV) for 48 h. Cells were then analyzed by flow cytometry and separated into viable and non-viable cell populations via DRAQ7 staining. 
Statistical significance is shown comparing combination-treated *Tsc2*−/− MEFs or ELT3-V3 cells to their wild-type controls, and comparing single drug treatment with mefloquine to the combination in the MCF7, HCT116 and NCI-H460 cells.](cancers-10-00375-g001){#cancers-10-00375-f001} ![Mefloquine and nelfinavir prevent colony formation and spheroid growth. (**A**) Colony formation was tested in *Tsc2*−/− MEFs seeded on soft agar that were treated for 14 days with Dimethyl Sulfoxide (DMSO), 10 µM mefloquine (MQ), 10 µM nelfinavir (NFV) or in combination. Tumor diameters were measured using ImageJ; scale bar is 200 μm. Significance was observed when comparing combined nelfinavir and mefloquine treatment to the DMSO vehicle control. (**B**) *Tsc2*−/− MEF spheroids were treated under the same conditions as (**A**) for 96 h. DRAQ7 was supplemented for the final 36 h to monitor cell death before images were taken and DRAQ7 fluorescence quantified. (**C**) Spheroids treated in (**B**) were re-plated onto standard tissue culture plates and grown in drug-free media. Images were taken every 24 h and the area of outgrowth calculated using ImageJ; scale bar is 200 μm and outgrowth area is graphed.](cancers-10-00375-g002){#cancers-10-00375-f002} ![Mefloquine and nelfinavir drug combination causes increased ER stress in *Tsc2*−/− MEFs. (**A**) *Tsc2*+/+ and *Tsc2*−/− MEFs were treated with either DMSO, 1 µM thapsigargin (TPG), 10 µM mefloquine (MQ), 10 µM nelfinavir (NFV), or the mefloquine and nelfinavir combination for 6 h, where indicated. Total protein levels of TSC2, IRE1α, ATF4, CHOP, GADD34, S6K1 and β-actin, as well as S6K1 phosphorylated at Thr389, were detected by Western blot. (**B**) *Xbp1* mRNA splicing was determined from the same treatments as described in (**A**). PCR products were resolved on agarose gels (unspliced = 480 bp upper band, spliced = 454 bp lower band). 
(**C**--**E**) *Tsc2*+/+ and *Tsc2*−/− MEFs were treated with either DMSO or the mefloquine and nelfinavir combination (MQ/NFV) for 6 h before being processed for RNA sequencing. A heat map for a panel of ER stress-linked genes is shown in (**C**), and the genes are graphed in (**D**). (**E**) Differences of mRNA expression between *Tsc2*+/+ and *Tsc2*−/− MEFs treated with mefloquine and nelfinavir are shown as a volcano plot highlighting ER stress genes. (**F**) *Tsc2*+/+ and *Tsc2*−/− MEFs were treated with DMSO or the mefloquine (MQ) and nelfinavir (NFV) combination for 6 h and 48 h. Total protein levels of ATF4, IRE1α, GADD34, CHOP and β-actin were determined by Western blot.](cancers-10-00375-g003){#cancers-10-00375-f003} ![Mefloquine and nelfinavir drug cytotoxicity is not associated with mTORC1 hyperactivity and causes minimal autophagy inhibition. (**A**) *Tsc2*−/− MEFs, NCI-H460, MCF7 and HCT116 cells were pre-treated with 50 nM rapamycin (RAP) for 1 h, where indicated, before being treated with 10 μM nelfinavir (NFV) and 10 µM mefloquine (MQ) for 48 h. Cells were stained with DRAQ7 and % cell death determined using flow cytometry. (**B**) Western blotting was carried out to determine rpS6 phosphorylation at Ser235/236 in the cells treated in (**A**) after 48 h of treatment. (**C**) *Tsc2*+/+ and *Tsc2*−/− cells were treated with DMSO, 10 µM mefloquine (MQ), 20 µM chloroquine (CQ), or 10 µM mefloquine or 20 µM chloroquine combined with 10 µM nelfinavir for 3 h. Accumulation of lipidated LC3-II was analyzed by Western blot. Total protein levels of β-actin were used as a loading control.](cancers-10-00375-g004){#cancers-10-00375-f004} ![Mefloquine and nelfinavir combined drug treatment induces cytotoxicity via energy stress in *Tsc2*−/− MEFs. (**A**) The RNA sequencing data used for [Figure 3](#cancers-10-00375-f003){ref-type="fig"}C−E was assessed for the expression of genes involved in energy homeostasis. A heatmap for a panel of energy stress-linked genes is shown. 
Differences in mRNA expression between *Tsc2*+/+ and *Tsc2*−/− MEFs treated with mefloquine and nelfinavir are shown as a volcano plot (**B**) and graphed (**C**). (**D**) *Tsc2*−/− cells were treated with DMSO, the 10 μM mefloquine and 10 μM nelfinavir combination (MQ/NFV), or the mefloquine/nelfinavir combination with the addition of 8 mM methyl pyruvate (MQ/NFV/MP) for 48 h. Cells were then stained with DRAQ7 and % cell death determined by flow cytometry. (**E**) *Tsc2*−/− MEFs were treated with either DMSO or the 10 μM mefloquine and 10 μM nelfinavir combination in the presence or absence of 8 mM methyl pyruvate for 24 h, and total and phosphorylated ACC and AMPK were determined by Western blot. (**F**) *Tsc2*+/+ and *Tsc2*−/− cells were treated with either DMSO or the 10 μM mefloquine and 10 μM nelfinavir combination in the presence or absence of 8 mM methyl pyruvate for 6 and 24 h, where indicated. Total protein levels of ACC, CHOP, GADD34 and ATF4, as well as phosphorylated ACC, were detected by Western blot.](cancers-10-00375-g005){#cancers-10-00375-f005}
Monday, May 21, 2018 Spectre Number 4, STEP RIGHT UP! Updated based on IBM's documentation. The continuing saga of Meltdown and Spectre (tl;dr: G4/7400, G3 and likely earlier 60x PowerPCs don't seem vulnerable at all; G4/7450 and G5 are so far affected by Spectre while Meltdown has not been confirmed, but IBM documentation implies "big" POWER4 and up are vulnerable to both) now brings us Spectre variant 4. In this variant, the fundamental trick of getting the CPU to speculatively execute code it mistakenly predicts will be executed and observing the effects on cache timing is still present, but here the twist is that a downstream memory load is executed speculatively before earlier store operations that the CPU (wrongly) believes the load does not depend on. The processor will faithfully revert the stores and the register load when the mispredict is discovered, but the loaded address will remain in the L1 cache and be observable through means similar to those in other Spectre-type attacks. The G5, POWER4 and up are so aggressively out of order with memory accesses that they are almost certainly vulnerable. In an earlier version of this post, I didn't think the G3 and 7400 were vulnerable (as they don't appear to be to other Spectre variants), but after some poring over IBM's technical documentation I now believe that with some careful coding it could be possible -- just not very probable. The details have to do with the G3 (and 7400)'s Load-Store Unit, or LSU, which is responsible for reading and writing memory. Unless a synchronizing instruction intervenes, up to one load instruction can execute ahead of a store, which makes the attack theoretically possible.
However, the G3 and 7400 cannot reorder multiple stores in this fashion, and because only a maximum of two instructions may be dispatched to the LSU at any time (in practice fewer, since those two instructions are spread across all of the processor's execution units), the victim load and the confounding store must be immediately adjacent or have no LSU-issued instructions between them. Even then, reliably ensuring that both instructions get dispatched in such a way that the CPU will reorder them in the (attacker-)desired order wouldn't be trivial. The 7450, as with other Spectre variants, makes the attack a bit easier. It can dispatch up to four instructions to its execution units, which gives more theoretical flexibility on where the victim load can be located downstream (especially if all four instructions go to its LSU). However, it too can execute at most one load instruction ahead of a store, and it cannot reorder stores either. That said, as a practical matter, Spectre in any variant (including this one) is only a viable attack vector on Power Macs through native applications, which have far more effective methods of pwning your Power Mac at their disposal than an intermittently successful attempt to read memory. Although TenFourFox has a JavaScript JIT, no 7450 and probably not even the Quad is fast enough to obtain enough of a memory timing delta to make the attack functional (let alone reliable), and we disabled the high-resolution timers necessary for the exploit "way back" in FPR5 anyway. The new variant 4 is a bigger issue for Talos II owners like myself because such an attack is possible and feasible on the POWER9, but we can confidently expect patches from IBM and Raptor to address it soon.
Q: Running a .NET 4.5 program on XP with .NET 2.0 We have a Windows Service written in C# 5.0 for .NET 4.5. My code is using some of the newer language features (async and await) and framework features (Task.Run, Task.Delay, IProgress, CancellationToken, etc). Our product works fine on Windows 7 and 8 when .NET 4.5 is installed. Our problem is, we now need this to run on an XP machine. Even if .NET 4.5 could be installed on Windows XP, there's not enough space for it on this machine anyway. Is there any way I can get a .NET 4.5 application to run on this machine? I've looked into compiling the program to native code using either Mono AOT Compiler or NGen.exe, but I haven't had any luck - I don't think either of them actually achieves what I'm trying to do. Failing that - we've thought of creating a bootable preinstalled flash drive that runs a simple version of Windows (like WinPE?) with our application installed, but this is out of my depth and I wouldn't really know where to start. Is this a good idea? How could I approach this? Another option we've had is to try and install .NET 4.0 and use the Async Targeting Pack. Or does anyone have any other ideas? Is there any way I can build/compile/run my .NET 4.5 application (that makes use of .NET 4.5 features) on an XP machine running .NET 2.0? A: AsyncBridge is like the MS Async Targeting Pack, but also available for .NET 3.5. However, it won't work with .NET 2.0, as the Task Parallel Library (which defines Task<T>) is only available for .NET 3.5 (and built into .NET 4.0). As for Mono, to create a standalone application that includes the relevant portions of the framework, you need to create a bundle, but I'm not sure if those are supported on Windows. mono --aot and ngen just precompile an application for faster startup on the local machine; they don't remove the need for the framework to be installed.
A multilevel excess hazard model to estimate net survival on hierarchical data allowing for non-linear and non-proportional effects of covariates. The excess hazard regression model is an approach developed for the analysis of cancer registry data to estimate net survival, that is, the survival of cancer patients that would be observed if cancer was the only cause of death. Cancer registry data typically possess a hierarchical structure: individuals from the same geographical unit share common characteristics such as proximity to a large hospital that may influence access to and quality of health care, so that their survival times might be correlated. As a consequence, correct statistical inference regarding the estimation of net survival and the effect of covariates should take this hierarchical structure into account. It becomes particularly important as many studies in cancer epidemiology aim at studying the effect on the excess mortality hazard of variables, such as deprivation indexes, often available only at the ecological level rather than at the individual level. We developed here an approach to fit a flexible excess hazard model including a random effect to describe the unobserved heterogeneity existing between different clusters of individuals, and with the possibility to estimate non-linear and time-dependent effects of covariates. We demonstrated the overall good performance of the proposed approach in a simulation study that assessed the impact on parameter estimates of the number of clusters, their size and their level of unbalance. We then used this multilevel model to describe the effect of a deprivation index defined at the geographical level on the excess mortality hazard of patients diagnosed with cancer of the oral cavity. Copyright © 2016 John Wiley & Sons, Ltd.
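Although the model described above is far more flexible (non-linear, time-dependent covariate effects and cluster-level random effects), the excess-hazard decomposition it builds on can be illustrated with a toy constant-hazard calculation. The rates below are invented purely for illustration and are not from the paper:

```python
import math

# Toy illustration of the decomposition behind net survival:
#   observed hazard = expected (population) hazard + excess (cancer) hazard.
# Constant hazards are assumed here only to keep the integrals trivial.
expected = 0.02   # population hazard per year, e.g. from life tables (made up)
excess = 0.10     # excess mortality hazard attributable to cancer (made up)

def net_survival(t):
    """Survival if cancer were the only cause of death: exp(-excess * t)."""
    return math.exp(-excess * t)

def overall_survival(t):
    """All-causes survival: exp(-(expected + excess) * t)."""
    return math.exp(-(expected + excess) * t)

t = 5.0
print(round(net_survival(t), 3))      # exp(-0.5) -> 0.607
print(round(overall_survival(t), 3))  # exp(-0.6) -> 0.549
```

Net survival is always at least as high as overall survival, since it removes the background mortality that cancer patients share with the general population.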
1. Background {#sec6549}
=============

B-cell chronic lymphocytic leukemia (B-CLL) is usually described as the most common leukemia in the United States, Canada, and Western Europe, whereas it is rare in Japan and infrequent in other Asian societies ([@A4990R1]). CLL accounts for about 30% of all leukemias ([@A4990R2]), with an incidence of about 20 new cases per 100,000 inhabitants above the age of 60 years. The reported male-to-female ratio is about 1.5--2:1; that is, B-CLL is more common in men than in women ([@A4990R3]). It is a common cancer not only in Western countries but also in Iran ([@A4990R4]). B-CLL results from the expansion of mature-appearing monoclonal B cells with a characteristic immunophenotype (CD5, CD19, CD20, and CD23 positive) ([@A4990R5]). The clinical course of the disease is highly variable. Some patients show symptoms at diagnosis or soon thereafter and need early therapy, while others have no or minimal symptoms for many years ([@A4990R8]). Constitutional symptoms are infrequent; the most frequent physical finding is lymphadenopathy, followed by splenomegaly. Other manifestations include fatigue, infection and autoimmune hemolytic anemia. Several treatments are available for B-CLL patients, such as monoclonal antibodies, new experimental drugs and stem cell transplantation (autologous or allogeneic) ([@A4990R9]). B-CLL carries a variable prognosis, which is associated with distinct parameters such as genetic aberrations that help predict the clinical outcome of the disease ([@A4990R10], [@A4990R11]). Age, sex, clinical staging (Rai or Binet), peripheral blood lymphocyte count, lymphocyte doubling time, bone marrow (BM) histology, serum thymidine kinase, and serum β2-microglobulin levels have all been identified as important independent prognostic factors ([@A4990R12]).
Chromosomal aberrations are found in more than 80% of B-CLL cases ([@A4990R16]), which means that genetic markers could have a critical role in the prediction and diagnosis of B-CLL. The most frequent aberrations are: 1) deletion on the long arm of chromosome 13 (13q14) in \> 50% of cases ([@A4990R19]); 2) trisomy of chromosome 12 in 10--20% of cases; 3) deletions in bands 11q22--q23, where the ATM gene is located, in 10--20% of cases ([@A4990R20]); 4) deletion on the short arm of chromosome 17 (17p13). Deletions 11q22 and 17p13 have been associated with poor prognosis, and deletion 13q14 with good prognosis ([@A4990R23]). Trisomy 12 is correlated with atypical morphology and shortened survival (worse prognosis) ([@A4990R24], [@A4990R25]). Conventional cytogenetic methods can detect 40-50% of abnormalities in B-CLL patients ([@A4990R26]), while fluorescent in situ hybridization (FISH) has brought the sensitivity of cytogenetic analysis to a higher level, detecting up to 80% of abnormalities ([@A4990R27]). In a previous study, we analyzed the correlation of del (13q), del (11q) and trisomy 12 with features of B-CLL in Iranian cases ([@A4990R28]).

2. Objectives {#sec6550}
=============

The aims of this study were first to verify the frequency of del (6q21) and del (17p13) in B-CLL patients by conventional cytogenetic methods and the FISH technique, and second to determine the correlation between these two abnormalities and prognostic factors, including Rai staging, the CD38 marker and family history.

3. Patients and Methods {#sec6551}
=======================

Patients were recruited over 20 months between 2008 and 2010 from four major hematology/oncology hospitals in Tehran, Iran. The patients were new cases or already diagnosed B-CLL cases who had not received any therapeutic treatment for six months before sampling.
The diagnostic inclusion criteria were based on the National Cancer Institute Working Group (NCI-WG) guidelines for the diagnosis of B-CLL ([@A4990R29]). Seventy patients' peripheral blood samples or bone marrow aspirates were analyzed. Immunophenotypic data and blood parameters were collected from patients' hospital files. A questionnaire covering age, family history, disease onset and treatment measures was administered. Patients with one first-degree or two second-degree relatives affected by any type of cancer were considered to have a positive family history. Blood samples and/or bone marrow aspirates were collected in heparinized collection tubes. The study was approved by Shahrekord Medical University and the Ethical Committee of the School of Medical Sciences of Tarbiat Modares University. Each patient signed written consent. Peripheral blood and bone marrow samples were pretreated by different protocols for culturing: 27 bone marrow specimens, 36 blood samples and 7 of both were provided. Peripheral blood was washed three times with minimal culture medium (RPMI-1640 with no further supplementation) and the white blood cells (WBC) were counted with a Neubauer hemocytometer; bone marrow specimens were counted with a Neubauer hemocytometer only. 10^6^ cells/mL were cultured in 5 mL of complete culture medium \[RPMI-1640 medium (Gibco, USA), 10% fetal bovine serum (Gibco, USA), 1% antibiotics and 1% L-glutamine (Gibco, USA)\]. Five cultures were set up for each patient as follows: overnight (ONC), 24 hours and 72 hours without mitogen stimulation, and 24 hours and 72 hours with phorbol myristate acetate (50 ng/mL) (Biomol, Germany). Harvesting and slide preparation were completed according to standard cytogenetic methods (hypotonic treatment and fixation in a 3:1 methanol:acetic acid mixture; Merck, Germany).
G-banding was used and up to 20 metaphases were analyzed for each patient using Applied Imaging Powergene Intelligent Karyotyping software (Applied Imaging, USA), according to the International System for Human Cytogenetic Nomenclature ([@A4990R30]). FISH analysis was performed on slides taken from the 24-hour culture without stimulation. To evaluate del (6q21) and del (17p13), dual-color probes were purchased from Kreatech and used according to the manufacturer's instructions. For each probe per patient, two hundred nuclei were analyzed with an Olympus BX51 fluorescence microscope (Japan). Images were captured with Cytovision software, version 3.6. SPSS software (SPSS for Windows, version 15, Chicago, IL, USA) was used for statistical analysis. Correlations between family history and the two deletions were analyzed by the Chi-square test. The Mann-Whitney test was used to analyze the correlation between Rai staging and chromosomal abnormalities, and between these chromosomal abnormalities and CD38. A p value less than 0.05 was considered significant.

4. Results {#sec6552}
==========

The culture of four cases failed, and 66 specimens were successfully analyzed. Patients' clinical characteristics are summarized in [Table 1](#tbl2447){ref-type="table"}. The gender ratio was 2.9, with 49 male and 17 female patients. Patients were 40 to 81 years old, with a mean age of 61.73 years. They could be classified into two subgroups: twenty-three cases had been recently diagnosed (new cases) and were following their clinical diagnosis, while forty-three cases had suffered from the disease for 8 to 236 months. The number of bone marrow specimens in [Table 1](#tbl2447){ref-type="table"} is the sum of cases with bone marrow samples and cases with both bone marrow and peripheral blood samples.
The information on Rai staging and family history is also available in [Table 1](#tbl2447){ref-type="table"}.

###### Clinical and Laboratory Features of Iranian B-Cell Patients

  Patients                                del (6q21)   del (17p13)   Total (n = 66)
  --------------------------------------- ------------ ------------- ----------------
  **Gender**, No.                                                    
  Male                                    1            10            49
  Female                                  3            1             17
  **Age, y**, No.                                                    
  \< 50                                   0            1             8
  50-60                                   0            4             17
  60-70                                   2            4             25
  \> 70                                   2            2             16
  **Tissue**                                                         
  BM [a](#fn1416){ref-type="table-fn"}    2            6             27
  PB [a](#fn1416){ref-type="table-fn"}    3            6             32
  **Rai stage**                                                      
  O                                       0            0             6
  I                                       0            1             9
  II                                      3            3             31
  III                                     1            6             15
  IV                                      0            1             5
  **Family history**                                                 
  Positive                                1            9             15
  Negative                                3            2             47

^a^Abbreviations: BM, bone marrow; PB, peripheral blood. (In the original layout the two count columns are grouped under a single "Genomic Changes" header.)

Interphase FISH for del (6q21) and del (17p13) was performed on 66 patients: 5 cases demonstrated the microdeletion in 6q21 (5/66, 7.5%) and 11 cases the microdeletion in 17p13 (11/66, 16.6%); among these cases, one patient had both del (6q21) and del (17p13). The range of abnormal nuclei differed among patients' specimens; compared to the cut-off points for positive values, this range properly verified the abnormalities. The correlation between family history and del (17p13) was analyzed by the Chi-square test (P = 0.48, Fisher's exact test) and was not significant. The same test between family history and del (6q21) (P = 1.000, Fisher's exact test) was also not significant. The Mann-Whitney test was used to analyze the correlation between Rai staging and del (6q21); the result (P = 0.7) was not significant. The same test for the correlation of the chromosomal abnormalities with CD38 gave P = 0.368 for del (6q21) and P = 0.13 for del (17p13), meaning that CD38 was not significantly correlated with either abnormality.
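As an aside, the Fisher's exact computation behind these 2 × 2 correlations can be sketched in a few lines. This is an illustrative re-implementation, not the SPSS procedure the authors used, and the example counts are hypothetical rather than the study's raw data:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins whose probability does not exceed that of the observed one."""
    (a, b), (c, d) = table
    row1, row2, col1, n = a + b, c + d, a + c, a + b + c + d

    def p_of(x):  # probability of the table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_of(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    return min(1.0, sum(p_of(x) for x in range(lo, hi + 1)
                        if p_of(x) <= p_obs + 1e-12))

# Hypothetical counts only (NOT the study's raw data): family history
# positive/negative (rows) versus del(17p13) present/absent (columns).
p = fisher_exact_two_sided([[3, 12], [8, 43]])
print(0.0 < p <= 1.0)  # -> True
```

Fisher's exact test is preferred over the plain Chi-square approximation here because several expected cell counts are small.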
FISH analysis revealed 22.7% chromosomal abnormalities in total, comprising 7.5% for del (6q21) and 16.6% for del (17p13). The results of other published studies and the current study are summarized in [Table 2](#tbl2448){ref-type="table"}.

###### Detection Rate of Cytogenetic Aberrations by FISH Analysis in Different Populations

  No   Reference                          Country          Del 17p13, %   Del 6q21, %   FISH Abnormality, No. (%)
  ---- ---------------------------------- ---------------- -------------- ------------- ---------------------------
  1    Gunn et al. ([@A4990R32])          USA              4.6            7.5           174 (89)
  2    Gaidano et al. ([@A4990R33])       USA              4              10            100 (14)
  3    Dohner et al. ([@A4990R34])        Germany          12             \-            90 (12)
  4    Dohner et al. ([@A4990R17])        Germany          7              6             325 (82)
  5    Chevallier et al. ([@A4990R35])    France           7              \-            111 (75)
  6    Haferlach et al. ([@A4990R36])     Germany          7              4.6           500 (78.4)
  7    Juliusson et al. ([@A4990R37])     Sweden           4              6             649 (48)
  8    Turgut et al. ([@A4990R38])        Turkey           14             \-            36 (47)
  9    Sindelarova et al. ([@A4990R39])   Czech Republic   16             \-            206 (16)
  10   Durak et al. ([@A4990R40])         Turkey           7.6            \-            79 (50.6)
  11   Dewald et al. ([@A4990R41])        USA              8              0             113 (77)
  12   Dicker et al. ([@A4990R42])        USA              5.3            7             132 (79)
  13   Present study                      Iran             16.6           7.5           66 (22.7)

5. Discussion {#sec6553}
=============

CLL is the most prevalent type of leukemia, but its annual incidence is low; consequently, the number of new cases is generally low ([@A4990R4]). Accordingly, in the present study, new cases hardly comprise one-third of all cases. The gender ratio shifted towards more males because some female patients declined to take part in the study. There were two reasons why bone marrow specimens were fewer than peripheral blood samples.
First, the greater number of cases were those already diagnosed with B-CLL, and the physicians did not consider another bone marrow aspiration necessary; second, analysis of both bone marrow and blood samples demonstrated no differences between the two tissues in FISH results, and patients prefer to give blood samples rather than undergo bone marrow aspiration. The number of peripheral blood specimens in this study therefore increased. Because of the weak mitogen stimulation of malignant cells in B-CLL, and because in B-CLL, unlike other forms of leukemia, the cells are in the G0 phase of the cell cycle, fewer metaphases were obtained in the cytogenetic analysis. Moreover, many of the chromosomes were small and short and less amenable to good banding, so it was difficult to identify chromosomal abnormalities such as translocations, inversions and microdeletions. For these reasons, we could identify only 27.7% chromosomal abnormalities using conventional cytogenetic methods. As mentioned before, up to 80% of chromosomal abnormalities in B-CLL patients can be detected using I-FISH ([@A4990R43]). Using FISH, Oscier et al. ([@A4990R44]) found chromosomal aberrations in 69% of patients; Dewald et al. ([@A4990R33]) found 77%; Chena et al. ([@A4990R2]), 80.7%; Gunn et al. ([@A4990R31]), 89%; and Dohner et al. ([@A4990R17]), 82%. In some of these studies ([@A4990R17], [@A4990R33]), B-CLL patients were examined for deletions of 13q, 11q, and 17p, as well as for 6q and 14q abnormalities and for trisomy of 8q24 and 3q ([@A4990R17]). So the rate of chromosomal aberrations detected by FISH is high. The greater detection of chromosomal abnormalities by FISH in comparison with conventional cytogenetic methods reveals the higher sensitivity of this method. Therefore, FISH testing helps markedly in establishing a diagnosis for these patients.
In our study, where I-FISH was used for only two genomic changes and patients were examined with two DNA probes (for detection of deletion 6q21 and deletion 17p), we could identify these abnormalities in just 22.7% of patients, comprising 16.6% for del 17p13 and 7.5% for del 6q21. The results of this study and some similar studies are summarized in [Table 2](#tbl2448){ref-type="table"}. We found deletion of the TP53 gene on 17p13 in 11 (16.6%) of the studied cases. In addition to deletion of the 17p13 chromosomal band or mutations in the TP53 gene, inactivation of this gene can result from other factors; for example, Pettitt et al. ([@A4990R45]) revealed that p53 inactivation occurred because of defects of the ATM protein, which has a role in the dephosphorylation of p53. Other studies have shown that deletion of the 17p13 band has a significant effect on the clinical course of the disease: B-CLL patients with deletion of the TP53 gene have a markedly shorter survival time and fail to respond to treatment with purine analogs ([@A4990R17], [@A4990R34], [@A4990R46], [@A4990R47]). Anomalies of the long arm of chromosome 6 are reported in 0-10% of cases ([@A4990R32], [@A4990R33]). In this region most abnormalities are deletions, and distinct regions have been identified at 6q15, 6q21, 6q23 and 6q25-27 ([@A4990R37], [@A4990R49]). Stilgenbauer et al. showed that 6q21 is the most commonly deleted region and that 6q27 deletion occurs only in patients with del (6q21) ([@A4990R35]). The genes in this region involved in the pathogenesis of the disease are not known yet, but the TLX gene at 6q21 has been shown to be involved in non-Hodgkin's lymphoma ([@A4990R38], [@A4990R39]). Analyses of the clinical importance of 6q deletions in B-CLL patients have produced conflicting results: while Juliusson et al. ([@A4990R37]) did not observe any poor prognostic effect of del (6q), Oscier et al. ([@A4990R40]) observed that patients with del (6q) had shorter survival times without treatment.
In our study, del (6q21) was detected in 5 (7.5%) of the studied cases; one patient had both abnormalities, del (6q21) and del (17p13). Evidently, the frequency of patients with del (17p13) and del (6q21), and thus the type and rate of these abnormalities in Iranian patients, is similar to that in other populations. This suggests that the probable mechanisms involved in this disease do not differ from those in patients from other countries. As far as the correlation of clinical staging, family history and immunophenotyping with these two cytogenetic subgroups is concerned, statistical analysis showed a significant correlation only between Rai staging and del (17p13). This deletion was most frequent in Rai stage II, so Rai stage II could to some extent serve as a prognostic indicator for this deletion; for this correlation, the Rai stages can be arranged in descending order as II, III, I, O, IV. Cases with familial B-CLL comprise about 5% of B-CLL patients. Cytogenetic and immunophenotypic results of familial and sporadic B-CLL patients were similar to each other, and some other studies analyzing familial B-CLL cases have suggested similar findings ([@A4990R42], [@A4990R54]). None declared. **Implication for health policy/practice/research/medical education:** FISH analysis in B-CLL patients improves diagnosis and helps to manage the disease better. Considering the high frequency of del17p13 observed among patients in this study, we recommend that cytogenetic evaluation of del17p13 be performed routinely. **Please cite this paper as:** Teimori H, Ashoori S, Akbari MT, Mojtabavi Naeini M, Hashemzade Chaleshtori M. FISH Analysis for del6q21 and del17p13 in B-cell Chronic Lymphocytic Leukemia in Iranians. Iran Red Cres Med J. 2013;15(2):107-12. DOI: 10.5812/ircmj.4990 **Financial Disclosure:** None declared. **Funding/Support:** None declared.
package kotlinx.nosql

import kotlin.reflect.KClass

// A schema column holding a list of values of element type C. The element
// class is retained at runtime so values can be mapped for the
// CUSTOM_CLASS_LIST column type.
open class ListColumn<C : Any, S : AbstractSchema>(name: String, valueClass: KClass<C>) :
        AbstractColumn<List<C>, S, C>(name, valueClass, ColumnType.CUSTOM_CLASS_LIST)
The use of bacterial larvicides in mosquito and black fly control programmes in Brazil. Bacillus spp.-based larvicides are increasingly replacing chemical insecticides, with numerous advantages, in programmes for controlling black fly and mosquito populations. Brazil was among the pioneers in adopting Bacillus thuringiensis israelensis (B.t.i.) to control black flies. However, the major current mosquito control programme in Brazil, the Programme for Eradication of Aedes aegypti launched in 1997, only recently decided to replace temephos with B.t.i.-based larvicides, in the State of Rio de Janeiro. In the last decade, work by research groups in Brazilian institutions has contributed significantly to this subject through the isolation of new B. sphaericus strains, the development of new products and the implementation of field trials of Bacillus efficacy against mosquito species under different environmental conditions.
Some clients of the defunct DKM Microfinance Limited on Wednesday expressed anger in Bolgatanga, the Upper East regional capital, over an alleged move by liquidators to reimburse only Gh¢10 or Gh¢20 to each customer for huge investments made. The clients had thronged the premises of the Bolgatanga branch of GCB Bank for the repayment of their withheld savings, but were shocked when many of them were handed Gh¢10 or Gh¢20 as compensation. According to them, the compensation was an insult given the huge investments they had made, and many have vowed to vote against President John Mahama and the governing National Democratic Congress (NDC) over the matter. “How can you invest this heavy money and come and collect Gh¢10? Imagine- are you going to give it to fowls? If you are rearing fowls, Gh¢10 cannot even feed your fowls, not to talk about feeding your family and paying your children’s school fees,” Robert Aduko, one of the depositors, lamented. “Some of us came last night. We had to queue here. And they are telling us they are giving us Gh¢10. We don’t understand how come Gh¢10. We did an investment and you are giving us Gh¢10. What are we going to do with Gh¢10? All we want is our money. We don’t want any Gh¢10 because that is not what we invested. “They should just give us our money. That is what we demand. If they are not ready to do that, they should just forget about it. We are not ready to take that Gh¢10,” Asuah Zakari told Starr FM. A customer of the embattled microfinance company DKM allegedly died after being shocked by the paltry sum given him as his repayment package on Monday (October 17).
Indian Point plan must protect air quality Re 'Utilities study alternative energy sources; State seeking options in case Indian Pt. shuts,' Nov. 28: The state Public Service Commission is acting wisely by ordering energy companies to start preparing now in case Indian Point is shut down in the future. It will take time to ensure the right transmission and generating assets are in place to ensure a steady supply of energy. What has been missing from the planning process, however, is a parallel plan to deal with the air-quality impacts of the replacement power. Nuclear power presents many complex environmental challenges, but air pollution is not among them. Indian Point generates almost none of the kind of airborne pollutants that are linked to asthma and other respiratory ailments. Even more importantly, Indian Point generates minimal carbon pollution - the kind that warms our climate and contributes to extreme weather events like Superstorm Sandy. If Indian Point were closed today, the energy needed to replace it would derive almost entirely from fossil fuels that dirty our air and worsen global climate change. The environmental and public health consequences of simply switching to fossil fuels are too great to ignore. As part of any plan to replace Indian Point's power, the Cuomo administration must present a comprehensive and concrete strategy to protect the Hudson Valley's air quality and not hasten the warming of our climate. Marcia Bystryn New York City The writer is president of the New York League of Conservation Voters.
--- abstract: 'The paper introduces fuzzy linguistic logic programming, which is a combination of fuzzy logic programming, introduced by P. Vojtáš, and hedge algebras in order to facilitate the representation and reasoning on human knowledge expressed in natural languages. In fuzzy linguistic logic programming, truth values are linguistic ones, e.g., *VeryTrue*, *VeryProbablyTrue*, and *LittleFalse*, taken from a hedge algebra of a linguistic truth variable, and linguistic hedges (modifiers) can be used as unary connectives in formulae. This is motivated by the fact that humans reason mostly in terms of linguistic terms rather than in terms of numbers, and linguistic hedges are often used in natural languages to express different levels of emphasis. The paper presents: $(i)$ the language of fuzzy linguistic logic programming; $(ii)$ a declarative semantics in terms of Herbrand interpretations and models; $(iii)$ a procedural semantics which directly manipulates linguistic terms to compute a lower bound to the truth value of a query, and proves its soundness; $(iv)$ a fixpoint semantics of logic programs, and based on it, proves the completeness of the procedural semantics; $(v)$ several applications of fuzzy linguistic logic programming; and $(vi)$ an idea of implementing a system to execute fuzzy linguistic logic programs.' 
author: - | VAN HUNG LE, FEI LIU\ Department of Computer Science and Computer Engineering\ La Trobe University, Bundoora, VIC 3086, Australia\ - | DINH KHANG TRAN\ Faculty of Information Technology\ Hanoi University of Technology, Vietnam\ title: | Fuzzy Linguistic Logic Programming\ and its Applications --- Fuzzy logic programming, hedge algebra, linguistic value, linguistic hedge, computing with words, databases, querying, threshold computation, fuzzy control Introduction ============ People usually use words (in natural languages), which are inherently imprecise, vague and qualitative in nature, to describe real world information, to analyse, to reason, and to make decisions. Moreover, in natural languages, linguistic hedges are very often used to state different levels of emphasis. Therefore, it is necessary to investigate logical systems that can directly work with words, and make use of linguistic hedges since such systems will make it easier to represent and reason on knowledge expressed in natural languages. Fuzzy logic, which is derived from fuzzy set theory, introduced by L. Zadeh, deals with reasoning that is approximate rather than exact, as in classical predicate logic. In fuzzy logic, the truth value domain is not the classical set $\{False,True\}$ or $\{0, 1\}$, but a set of linguistic truth values [@Zadeh75b] or the whole unit interval \[0,1\]. Moreover, in fuzzy logic, linguistic hedges play an essential role in the generation of the values of a linguistic variable and in the modification of fuzzy predicates [@Zadeh89]. Fuzzy logic provides us with a very powerful tool for handling imprecision and uncertainty, which are very often encountered in real world information, and a capacity for representing and reasoning on knowledge expressed in linguistic forms. Fuzzy logic programming, introduced in , is a formal model of an extension of logic programming without negation working with a truth functional fuzzy logic in narrow sense.
In fuzzy logic programming, atoms and rules, which are many-valued implications, are graded to a certain degree in the interval \[0,1\]. Fuzzy logic programming allows a wide variety of many-valued connectives in order to cover a great variety of applications. A sound and complete procedural semantics is provided to compute a lower bound to the truth value of a query. Nevertheless, no proofs of extended versions of Mgu and Lifting lemmas are given. Fuzzy logic programming has applications such as threshold computation, a data model for flexible querying [@PV01], and fuzzy control [@Gerla05]. The theory of hedge algebras, introduced in , forms an algebraic approach to a natural qualitative semantics of linguistic terms in a term domain. The hedge-algebra-based semantics of linguistic terms is qualitative, relative, and dependent on the order-based structure of the term domain. Hedge algebras have been shown to have a rich algebraic structure to represent linguistic domains [@HoKhang99], and the theory can be effectively applied to problems such as linguistic reasoning [@HoKhang99] and fuzzy control [@Ho08]. The notion of an *inverse mapping of a hedge* is defined in for monotonic hedge algebras, a subclass of linear hedge algebras. In this work, we integrate fuzzy logic programming and hedge algebras to build a logical system that facilitates the representation and reasoning on knowledge expressed in natural languages. In our logical system, the set of truth values is that of linguistic ones taken from a hedge algebra of a linguistic truth variable. Furthermore, we consider only finitely many truth values. On the one hand, this is due to the fact that normally, people use finitely many degrees of quality or quantity to describe real world applications which are granulated [@Zadeh97]. On the other hand, it is reasonable to provide a logical system suitable for computer implementation. 
In fact, the finiteness of the truth domain allows us to obtain the Least Herbrand model for a finite logic program after a finite number of iterations of an immediate consequences operator. Moreover, we allow the use of linguistic hedges as unary connectives in formulae to express different levels of accentuation on fuzzy predicates. The procedural semantics in is extended to deduce a lower bound to the truth value of a query by directly computing with linguistic terms. The paper is organised as follows: the next section gives a motivating example for the development of fuzzy linguistic logic programming; Section 3 presents linguistic truth domains taken from hedge algebras of a truth variable, inverse mappings of hedges, many-valued modus ponens w.r.t. such domains; Section 4 presents the theory of fuzzy linguistic logic programming, defining the language, declarative semantics, procedural semantics, and fixpoint semantics, and proving the soundness and completeness of the procedural semantics; Section 5 and Section 6 respectively discuss several applications and an idea for implementing a system where such logic programs can be executed; the last section summarises the paper. Motivation ========== Our motivating example is adapted from the hotel reservation system described in . Here, we use logic programming notation. A rule to find a convenient hotel for a business trip can be defined as follows: $$\begin{aligned} convenient\_hotel(Business\_location,Time, Hotel)\leftarrow \\ \wedge(near\_to(Business\_location,Hotel), \\ reasonable\_cost(Hotel,Time), \\ fine\_building(Hotel)). \mbox{with truth value=}VeryTrue \end{aligned}$$ That is, a hotel is regarded to be convenient for a business trip if it is near the business location, has a reasonable cost at the considered time, and is a fine building. Here, *fine\_building(Hotel)* is an *atomic formula* (atom), which is a fuzzy predicate symbol with a list of arguments, having a truth value. 
One option is to take the truth value of *fine\_building* of a hotel to be a number in \[0,1\] calculated as a function of its age, as in . In fact, however, the age of a hotel may not be enough to reflect its fineness, since the fineness also depends on the construction quality and the surroundings. Similarly, the truth value of *reasonable\_cost* can be computed as a function of the hotel rate at the time. Nevertheless, since the rate varies from season to season, the function would have to be modified accordingly to reflect the reasonableness for a particular time. Thus, a more realistic and appropriate way is to assess the fineness and the reasonableness of the cost of a hotel using linguistic truth values, e.g., *ProbablyTrue*, after considering all possible factors. Note that there can be more than one way to define the convenience of a hotel, and the above rule is only one of them. Furthermore, since any such rule may not be absolutely true for everybody, each rule should have a degree of truth (truth value). For example, *VeryTrue* is the truth value of the above rule. In addition, since linguistic hedges are usually used to state different levels of emphasis, we would like to use them to express different degrees of requirements on the criteria. For example, if we want to emphasise closeness, we can use the formula ***Very** near\_to(Business\_location,Hotel)* instead of *near\_to(Business\_location,Hotel)* in the rule, and if we do not care much about the cost, we can relax the criterion by using the hedge *Probably* for the atom *reasonable\_cost(Hotel,Time)*. Thus, the rule becomes: $$\begin{aligned} convenient\_hotel(Business\_location,Time, Hotel)\leftarrow \\ \wedge(Very~near\_to(Business\_location,Hotel), \\ Probably~reasonable\_cost(Hotel,Time), \\ fine\_building(Hotel)).
\mbox{with truth value=}VeryTrue \end{aligned}$$ In our opinion, in order to model knowledge expressed in natural languages, a formalism should address the twofold usage of linguistic hedges, i.e., in generating linguistic values and in modifying predicates. To the best of our knowledge, no existing frameworks of logic programming have addressed the problem of using linguistic truth values as well as allowing linguistic hedges to modify fuzzy predicates. Hedge algebras and linguistic truth domains =========================================== Hedge algebras -------------- Since the mathematical structures of a given set of truth values play an important role in studying the corresponding logics, we present here an appropriate mathematical structure of a linguistic domain of a linguistic variable *Truth* in particular, and that of any linguistic variable in general. In an algebraic approach, values of the linguistic variable *Truth* such as *True*, *VeryTrue*, *ProbablyFalse*, *VeryProbablyFalse*, and so on can be considered to be generated from a set of generators (primary terms) $G = \{False, True\}$ using hedges from a set $H=\{Very,More,Probably,$ $...\}$ as unary operations. There exists a natural ordering among these values, with $a\leq b$ meaning that $a$ indicates a degree of truth less than or equal to $b$. For example, $True < VeryTrue$ and $False <LittleFalse$, where $a<b$ iff $a\leq b$ and $a\neq b$. The relation $\leq$ is called the *semantically ordering relation* (SOR) on the term domain, denoted by $X$. There are natural semantic properties of linguistic terms and hedges that can be formulated in terms of the SOR as follows. Let *V, M, L, P*, and *A* stand for the hedges *Very, More, Little, Probably*, and *Approximately*, respectively. $(i)$ Hedges either increase or decrease the meaning of terms they modify, so they can be regarded as *ordering operations*, i.e., $\forall h\in H, \forall x\in X, \mbox{ either } hx\geq x \mbox{ or } hx\leq x$. 
The fact that a hedge $h$ modifies terms at least as strongly as another hedge $k$, i.e., $\forall x \in X$, $hx\leq kx\leq x$ or $x\leq kx\leq hx$, is denoted by $h\geq k$. Note that since the sets $H$ and $X$ are disjoint, we can use the same notation $\leq$ for different ordering relations on $H$ and on $X$ without any confusion. For example, we have $L>P$ ($h>k$ iff $h\geq k$ and $h\neq k$) since, for instance, $LTrue<PTrue<True$ and $LFalse>PFalse>False$. $(ii)$ A hedge has a semantic effect on others, i.e., it either strengthens or weakens the degree of modification of other hedges. If $h$ strengthens the degree of modification of $k$, i.e., $\forall x \in X$, $hkx\leq kx\leq x$ or $x\leq kx \leq hkx$, then it is said that $h$ is *positive* w.r.t. $k$; if $h$ weakens the degree of modification of $k$, i.e., $\forall x \in X$, $kx\leq hkx\leq x$ or $x\leq hkx\leq kx$, then it is said that $h$ is *negative* w.r.t. $k$. For instance, $V$ is positive w.r.t. $M$ since, e.g., $VMTrue>MTrue>True$; $V$ is negative w.r.t. $P$ since, e.g., $PTrue<VPTrue<True$. $(iii)$ An important semantic property of hedges, called *semantic heredity*, is that hedges change the meaning of a term a little, but somewhat preserve the original meaning. Thus, if there are two terms $hx$ and $kx$, where $x\in X$, such that $hx\leq kx$, then all terms generated from $hx$ using hedges are less than or equal to all terms generated from $kx$. This property is formulated by: $(a)$ If $hx\leq kx$, then $H(hx)\leq H(kx)$, where $H(u)$ denotes the set of all terms generated from $u$ by means of hedges, i.e., $H(u) = \{\sigma u|\sigma\in H^{*}\}$, where $H^{*}$ is the set of all strings of symbols in $H$ including the empty one. For example, since $MTrue\leq VTrue$, we have $VMTrue\leq LVTrue$ and $H(MTrue)\leq H(VTrue)$; $(b)$ If two terms $u$ and $v$ are incomparable, then all terms generated from $u$ are incomparable to all terms generated from $v$.
For example, since $AFalse$ and $PFalse$ are incomparable, $VAFalse$ and $MPFalse$ are incomparable too. Two terms $u$ and $v$ are said to be *independent* if $u\notin H(v)$ and $v\notin H(u)$. For example, $VTrue$ and $PMTrue$ are independent, but $VTrue$ and $LVTrue$ are not since $LVTrue\in H(VTrue)$. [@Ho90] An abstract algebra $\underline{X}=(X,G,H,\leq)$, where $X$ is a term domain, $G$ is a set of primary terms, $H$ is a set of linguistic hedges, and $\leq$ is an SOR on $X$, is called a *hedge algebra* (HA) if it satisfies the following: (A1) Each hedge is either positive or negative w.r.t. the others, including itself; (A2) If terms $u$ and $v$ are independent, then, for all $x\in H(u)$, we have $x\notin H(v)$. In addition, if $u$ and $v$ are incomparable, i.e., $u \not< v$ and $v \not< u$, then so are $x$ and $y$, for every $x\in H(u)$ and $y\in H(v)$; (A3) If $x \neq hx$, then $x\notin H(hx)$, and if $h\neq k$ and $hx \leq kx$, then $h'hx\leq k'kx$, for all $h, k, h', k' \in H$ and $x \in X$. Moreover, if $hx\neq kx$, then $hx$ and $kx$ are independent; (A4) If $u \notin H(v)$ and $u\leq v$ ($u\geq v$), then $u \leq hv$ ($u\geq hv$) for any $h\in H$. Axioms (A2)-(A4) are a weak formulation of the semantic heredity of hedges. Given a term $u$ in $X$, the expression $h_{n}...h_{1}u$ is called a *representation* of $x$ w.r.t. $u$ if $x = h_{n}...h_{1}u$, and, furthermore, it is called a *canonical representation* of $x$ w.r.t. $u$ if $h_{n} h_{n-1}...h_{1}u \neq h_{n-1}...h_{1}u$. The following proposition shows how to compare any two terms in $X$. The notation $x_{u|j}$ denotes the suffix of length $j$ of a representation of $x$ w.r.t. $u$, i.e., for $x = h_{n}...h_{1}u$, $x_{u|j}$ = $h_{j-1}...h_{1}u$, where $2\leq j\leq n+1$, and $x_{u|1} = u$. Let $I\notin H$ be an artificial hedge called the *identity* on $X$ defined by the rule $\forall x\in X$, $Ix = x$. 
\[prop1\] [@HoWech92] Let $x=h_{n}...h_{1}u$, $y=k_{m}...k_{1}u$ be two canonical representations of $x$ and $y$ w.r.t. $u$, respectively. Then, there exists the largest $j\leq min(m,n)+1$ (here, as a convention it should be understood that if $j=min(m,n)+1$, then $h_{j}=I$, for $j=n+1$, and $k_{j}=I$, for $j=m+1$) such that $\forall i < j$, $h_i=k_i$, and ($i$) $x=y$ iff $n=m$ and $h_{j}x_{u|j}=k_{j}x_{u|j}$; ($ii$) $x<y$ iff $h_jx_{u|j}<k_jx_{u|j}$; ($iii$) $x$ and $y$ are incomparable iff $h_jx_{u|j}$ and $k_jx_{u|j}$ are incomparable. Linear symmetric hedge algebras ------------------------------- Since we allow hedges to be unary connectives in formulae, there is a need to be able to compute the truth value of a hedge-modified formula from that of the original. To this end, the notion of *an inverse mapping of a hedge* is utilised. In order to define this notion, we restrict ourselves to linear HAs. The set of primary terms $G$ usually consists of two comparable ones, denoted by $c^{-}<c^{+}$. For the variable *Truth*, we have $c^{-}=False<c^{+}=True$. Such HAs are called *symmetric* ones. For symmetric HAs, the set of hedges $H$ can be divided into two disjoint subsets $H^{+}$ and $H^{-}$ defined as $H^{+} = \{h| hc^{+}>c^{+}\}$ and $H^{-} = \{h| hc^{+}<c^{+}\}$. Two hedges $h$ and $k$ are said to be *converse* if $\forall x \in X$, $hx\leq x$ iff $kx\geq x$, i.e., they are in different subsets; $h$ and $k$ are said to be *compatible* if $\forall x \in X$, $hx\leq x$ iff $kx\leq x$, i.e., they are in the same subset. Two hedges in each of sets $H^{+}$ and $H^{-}$ may be comparable, e.g., $L$ and $P$, or incomparable, e.g., $A$ and $P$. Thus, $H^{+}$ and $H^{-}$ become posets. A symmetric HA $\underline{X} = (X,G=\{c^{-},c^{+}\},H,\leq)$ is said to be a *linear symmetric* HA (lin-HA, for short) if the set of hedges $H$ is divided into $H^{+} = \{h| hc^{+}>c^{+}\}$ and $H^{-} = \{h| hc^{+}<c^{+}\}$, and $H^{+}$ and $H^{-}$ are linearly ordered. 
\[ex1\] Consider an HA $\underline{X} = (X,G=\{c^{-},c^{+}\},H=\{V, M, P,L\},\leq)$. $\underline{X}$ is a lin-HA as follows. $V$ and $M$ are positive w.r.t. $V$, $M$, and $L$, and negative w.r.t. $P$; $P$ is positive w.r.t. $P$, and negative w.r.t. $V$, $M$, and $L$; $L$ is positive w.r.t. $P$, and negative w.r.t. $V$, $M$, and $L$. $H$ is decomposed into $H^{+}=\{V,M\}$ and $H^{-}=\{P,L\}$. Moreover, in $H^{+}$, we have $M<V$, and in $H^{-}$, we have $P<L$. [@Ho90] \[def3\] A function $Sign: X\rightarrow \{-1,0,+1\}$ is a mapping defined recursively as follows, where $h, h'\in H$ and $c\in \{c^{-},c^{+}\}$: a\) $Sign(c^{-}) = -1$, $Sign(c^{+}) = +1$; b\) $Sign(hc) = -Sign(c)$ if $h\in H^{-}$, for both $c=c^{+}$ and $c=c^{-}$; c\) $Sign(hc) = Sign(c)$ if $h\in H^{+}$, for both $c=c^{+}$ and $c=c^{-}$; d\) $Sign(h'hx) = -Sign(hx)$, if $h'hx\neq hx$, and $h'$ is negative w.r.t. $h$; e\) $Sign(h'hx) = Sign(hx)$, if $h'hx\neq hx$, and $h'$ is positive w.r.t. $h$; f\) $Sign(h'hx) = 0$ if $h'hx = hx$. Rules b) and c) say that a negative hedge reverses the sign of the primary term it is applied to, e.g., $Sign(Pc^{+})=-1$ and $Sign(Pc^{-})=+1$, whereas a positive hedge preserves it, e.g., $Sign(Vc^{+})=+1$ and $Sign(Vc^{-})=-1$. Based on the function $Sign$, we have a criterion to compare $hx$ and $x$ as follows: \[prop99\] [@Ho90] For any $h$ and $x$, if $Sign(hx)= +1$, then $hx > x$, and if $Sign(hx) = -1$, then $hx < x$. In , HAs are extended by augmenting two artificial hedges $\Phi$ and $\Sigma$ defined as $\Phi(x) = infimum(H(x))$ and $\Sigma(x) = supremum(H(x))$, for all $x\in X$. An HA is said to be *free* if $\forall x\in X$ and $\forall h\in H$, $hx\neq x$. It is shown that, for a free lin-HA of the variable *Truth* with $H \neq\emptyset$, $\Phi(c^{+}) = \Sigma(c^{-})$, $\Sigma(c^{+}) = 1$ (*AbsolutelyTrue*), and $\Phi(c^{-}) = 0$ (*AbsolutelyFalse*). Let us put $W=\Phi(c^{+}) = \Sigma(c^{-})$ (called the *middle truth value*); we have $0<c^{-}<W<c^{+}<1$.
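Propositions \[prop1\] and \[prop99\] make term comparison in this setting entirely mechanical. The Python sketch below is ours, not part of the original development: it encodes the hedge relationships of Example \[ex1\], assumes free canonical representations over a common primary term, and relies on the chain orderings of the form $h_{-q}kc^{+}< \dots < kc^{+}< \dots < h_{p}kc^{+}$ (or their reverses) that are illustrated for this algebra further on.

```python
# Lin-HA of Example [ex1]: H+ = {V, M}, H- = {P, L}; M < V and P < L,
# hence the extended order on H ∪ {I} is L <_e P <_e I <_e M <_e V.
H_PLUS = {"V", "M"}
POS = {"V": {"V", "M", "L"}, "M": {"V", "M", "L"},  # positive w.r.t. these
       "P": {"P"}, "L": {"P"}}                      # negative w.r.t. the rest
ORDER_E = ["L", "P", "I", "M", "V"]                 # <=_e, weakest first

def sign(term):
    """Sign of a term given outermost-hedge-first, e.g. ("V","P","c+") is VPc+.
    Assumes a free lin-HA, so the Sign = 0 case never applies."""
    *hedges, c = term
    s = 1 if c == "c+" else -1              # base case for the primary terms
    prev = c
    for h in reversed(hedges):              # apply hedges innermost-first
        if prev in ("c+", "c-"):
            s = s if h in H_PLUS else -s    # positive hedges preserve Sign(c)
        else:
            s = s if prev in POS[h] else -s  # rules (d)/(e)
        prev = h
    return s

def compare(x, y):
    """Compare two terms over the same primary term; returns -1, 0, or +1.
    Pads the shorter term on the outside with the identity hedge I, mirroring
    the convention h_j = I of Proposition [prop1]."""
    n = max(len(x), len(y))
    xs = ("I",) * (n - len(x)) + x
    ys = ("I",) * (n - len(y)) + y
    if xs == ys:
        return 0
    j = max(i for i in range(n) if xs[i] != ys[i])  # innermost disagreement
    w = xs[j + 1:]                                  # common part, a term itself
    d = sign(("V",) + w)       # direction of the chain {h w}: +1 iff it follows <=_e
    lt = ORDER_E.index(xs[j]) < ORDER_E.index(ys[j])
    return -d if lt else d

# Orderings stated in the text: LPc+ < PPc+ < Pc+ < MPc+ < VPc+,
# and LLc+ > PLc+ > Lc+ > MLc+ > VLc+.
assert compare(("L", "P", "c+"), ("M", "P", "c+")) == -1
assert compare(("M", "L", "c+"), ("P", "L", "c+")) == -1
assert sign(("V", "P", "c+")) == 1 and sign(("V", "L", "c+")) == -1
```

For instance, `compare(("L", "V", "c+"), ("V", "c+"))` returns `-1`, matching $LVTrue < VTrue$.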
\[Linguistic truth domain\] A *linguistic truth domain* $\overline{X}$ taken from a lin-HA $\underline{X} = (X,\{c^{-},c^{+}\}, H,\leq)$ is defined as $\overline{X}=X\cup \{0,W,1\}$, where $0, W$, and $1$ are the least, the neutral, and the greatest elements of $\overline{X}$, respectively. [@HoWech92] \[prop2\] For any lin-HA $\underline{X} = (X,G,H,\leq)$, the linguistic truth domain $\overline{X}$ is linearly ordered. The usual operations are defined on $\overline{X}$ as follows: $(i)$ *negation*: given $x = \sigma c$, where $\sigma\in H^{*}$ and $c\in\{c^{+},c^{-}\}$, $y$ is called the *negation* of $x$, denoted by $y = -x$, if $y = \sigma c'$ and $\{c, c'\}$ = $\{c^{+}, c^{-}\}$. For example, $hc^{+}$ is the negation of $hc^{-}$. In particular, $-1=0$, $-0=1$, and $-W=W$; $(ii)$ *conjunction*: $x\wedge y$ = min($x, y$); $(iii)$ *disjunction*: $x\vee y$ = max($x, y$). [@HoWech92] \[prop3\] For any lin-HA $\underline{X} = (X,G,H,\leq)$, the following hold: $(i)$ $-hx = h(-x)$ for any $h\in H$; $(ii)$ $--x=x$; $(iii)$ $x < y$ iff $-x>-y$. It is shown that the identity hedge $I$ is the least element of the sets $H^{+}\cup \{I\}$ and $H^{-}\cup \{I\}$, i.e., $\forall h \in H$, $h\geq I$. An *extended ordering relation* on $H \cup \{I\}$, denoted by $\leq_e$, is defined based on the ordering relations on $H^{+}\cup \{I\}$ and $H^{-}\cup \{I\}$ as follows. Given $h,k\in H\cup\{I\}$, $h\leq_e k$ iff: ($i$) $h\in H^{-}, k\in H^{+}$; or ($ii$) $h,k\in H^{+}\cup\{I\}$ and $h\leq k$; or ($iii$) $h,k\in H^{-}\cup\{I\}$ and $h\geq k$. We denote $h<_e k$ iff $h\leq_e k$ and $h\neq k$. \[ex2\] For the HA in Example \[ex1\], in $H\cup\{I\}$ we have $L<_e P<_e I<_e M<_e V$. It is straightforward to show the following: \[prop4\] For all $h,k\in H\cup \{I\}$, if $h<_ek$, then $hc^{+}< kc^{+}$. 
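As a small illustration (ours, not from the source), on string representations of truth values the negation operation simply swaps the primary term, from which the identities $-hx = h(-x)$ and $--x=x$ of Proposition \[prop3\] are immediate:

```python
def negate(t):
    """Negation on a linguistic truth domain: -(σ c+) = σ c- and vice versa,
    with the fixed points 0, W, 1 mapped to 1, W, 0. Terms are strings
    such as "VPc+" (hedge string followed by the primary term)."""
    fixed = {"0": "1", "1": "0", "W": "W"}
    if t in fixed:
        return fixed[t]
    sigma, c = t[:-2], t[-2:]
    return sigma + ("c-" if c == "c+" else "c+")

assert negate("VPc+") == "VPc-"          # -hx = h(-x): the hedge string is untouched
assert negate(negate("LMc-")) == "LMc-"  # --x = x
assert negate("W") == "W"
```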
Inverse mappings of hedges -------------------------- In fuzzy logic, knowledge is usually represented in terms of pairs consisting of a *vague sentence* and its *degree of truth*, which is also expressed in linguistic terms. A vague sentence can be represented by an expression $u(x)$, where $x$ is a variable or a constant, and $u$ is a fuzzy predicate. For example, the assertion “*It is quite true that John is studying hard*" can be represented by a pair $(study\_hard(john),Quite True)$. According to , the following assessments can be considered to be approximately semantically equivalent: “*It is very true that Lucia is young*" and “*It is true that Lucia is very young*". That means if we have $(young(lucia),VeryTrue)$, we also have $(Very\:young(lucia),True)$. Thus, the hedge “*Very*" can be moved from the truth value to the fuzzy predicate. This is generalised to the following rule: $$(R1)\;\;\;(u(x),hTrue)\Rightarrow(hu(x),True)$$ However, the rule is not complete, i.e., in some cases we cannot use it to deduce the truth value of a hedge-modified fuzzy predicate from that of the original. For instance, given $(young(lucia),VeryTrue)$, we cannot compute the truth value of $Probably\:young(lucia)$ using the above rule. The notion of an *inverse mapping of a hedge*, which is an extension of Rule $(R1)$, provides a solution to this problem. The idea behind this notion is that the truth value of a hedge-modified fuzzy predicate can be a function of that of the original. In other words, if we modify a fuzzy predicate by a hedge, its truth value will be changed by the inverse mapping of that hedge. Now, we will work out the conditions that an inverse mapping of a hedge should satisfy. We denote the inverse mapping of a hedge $h$ by $h^{-}$. First, since $h^{-}$ is an extension of Rule $(R1)$, we should have $h^{-}(hTrue) = True$. 
Second, intuitively, the more true a fuzzy predicate is, the more true is its hedge-modified one, so $h^{-}$ should be monotone, i.e., if $x\geq y$, then $h^{-}(x)\geq h^{-}(y)$. Third, it seems to be natural that by modifying a fuzzy predicate using a hedge in $H^{+}$ such as *Very* or *More*, we accentuate the fuzzy predicate, so the truth value should decrease. For example, the truth value of $Very\:young(lucia)$ should be less than that of $young(lucia)$. Similarly, by applying a hedge in $H^{-}$ such as *Probably* or *Little*, we deaccentuate the fuzzy predicate; thus, the truth value should increase. For example, the truth value of $Probably\:high\_income(tom)$ should be greater than that of $high\_income(tom)$. This is also in accordance with the fuzzy-set-based interpretation of hedges [@Zadeh72], in which hedges such as *Very* are called *accentuators* and can be defined as $Very\;x = x^{1+\alpha}$, where $x$ is a fuzzy predicate expressed by a *fuzzy set* and $\alpha>0$, and hedges such as *Probably* are called *deaccentuators* and can be defined as $Probably\;x = x^{1-\alpha}$ (note that the degree of membership of each element in $x$ is in \[0,1\]). In summary, this can be formulated as: for all $h,k \in H\cup \{I\}$ such that $h\leq_e k$ and for all $x$, we should have $h^{-}(x)\geq k^{-}(x)$. As a convention, we always assume that for all $x$, $I^{-}(x)=x$. Given a lin-HA $\underline{X}=(X,\{c^{+}, c^{-}\},H,\leq)$ and a hedge $h\in H$, a mapping $h^{-}: \overline{X} \rightarrow \overline{X}$ is called an *inverse mapping* of $h$ iff it satisfies the following conditions: $$\begin{aligned} h^{-}(h c^{+}) = c^{+} \label{eq:im1} \\ x \geq y \Rightarrow h^{-}(x) \geq h^{-}(y) \label{eq:im2}\\ h\leq_e k \Rightarrow h^{-}(x)\geq k^{-}(x) \label{eq:im3}\end{aligned}$$ where $k^{-}$ is an inverse mapping of another hedge $k\in H\cup \{I\}$. 
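To make the three conditions concrete, the sketch below (entirely ours; the tables are hand-built for a toy 1-limited algebra with $H^{+}=\{V\}$ and $H^{-}=\{P\}$, not taken from the paper) checks a candidate family of inverse mappings against Conditions (\[eq:im1\])–(\[eq:im3\]) over a finite linearly ordered domain:

```python
# Toy linguistic truth domain for H+ = {V}, H- = {P}, terms of depth at most 1,
# listed in increasing order (all names and tables are illustrative):
DOMAIN = ["0", "Vc-", "c-", "Pc-", "W", "Pc+", "c+", "Vc+", "1"]
IDX = {v: i for i, v in enumerate(DOMAIN)}
HEDGES_E = ["P", "I", "V"]           # extended order: P <_e I <_e V

# Hand-built candidate inverse mappings; I^- is the identity.
INV = {
    "I": {v: v for v in DOMAIN},
    "V": {"0": "0", "Vc-": "0", "c-": "Vc-", "Pc-": "c-", "W": "W",
          "Pc+": "W", "c+": "Pc+", "Vc+": "c+", "1": "1"},
    "P": {"0": "0", "Vc-": "c-", "c-": "Pc-", "Pc-": "W", "W": "W",
          "Pc+": "c+", "c+": "Vc+", "Vc+": "1", "1": "1"},
}

def is_inverse_family(inv):
    """Check the three defining conditions over the finite domain."""
    # (1): h^-(h c+) = c+ for every real hedge h
    cond1 = all(inv[h][h + "c+"] == "c+" for h in ("V", "P"))
    # (2): each h^- is monotone w.r.t. the linear order on DOMAIN
    cond2 = all(IDX[inv[h][DOMAIN[i]]] <= IDX[inv[h][DOMAIN[i + 1]]]
                for h in inv for i in range(len(DOMAIN) - 1))
    # (3): h <=_e k implies h^-(x) >= k^-(x) for all x
    cond3 = all(IDX[inv[h][x]] >= IDX[inv[k][x]]
                for a, h in enumerate(HEDGES_E)
                for k in HEDGES_E[a + 1:] for x in DOMAIN)
    return cond1 and cond2 and cond3

assert is_inverse_family(INV)
```

For example, `INV["P"]["c+"]` is `"Vc+"`: deaccentuating a predicate whose truth value is $c^{+}$ raises the bound to $Vc^{+}$, while `INV["V"]["c+"]` is `"Pc+"`, i.e., accentuating lowers it.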
Since 0, W, and 1 are fixed points, i.e., $\forall x\in \{0,W,1\}$ and $\forall h\in H$, $hx=x$ [@HoWech92], it is reasonable to assume that $\forall h\in H$, $h^{-}(0)=0, h^{-}(W)=W$, and $h^{-}(1)=1$. We show why we have to use lin-HAs in order to define the notion of an inverse mapping of a hedge. Consider an HA containing two incomparable hedges $P\;(Probably), A\;(Approximately)\in H^{-}$. We can see that since $Ac^{+}$ and $Pc^{+}$ are incomparable, $P^{-}(Ac^{+})$ and $P^{-}(Pc^{+})=c^{+}$ should be either incomparable or equal. The two values cannot be incomparable since every truth value is comparable to $c^{+}$ and $c^{-}$, and it might not be very meaningful to keep both $P$ and $A$ in the set of hedges if we have $P^{-}(Ac^{+})=P^{-}(Pc^{+})=c^{+}$. Inverse mappings of hedges always exist; in the following, we give an example of inverse mappings of hedges for a general lin-HA. \[ex3\] Consider a lin-HA $\underline{X}=(X,\{c^{+}, c^{-}\},H,\leq)$ with $H^{-}=\{h_{-q}, h_{-q+1}, ..., h_{-1}\}$ and $H^{+}=\{h_{1}, h_{2}, ..., h_{p}\}$, where $p,q\geq 1$. Let us denote $h_{0}=I$. Without loss of generality, we suppose that $h_{-q}> h_{-q+1}> ...> h_{-1}$ and $h_{1}< h_{2}< ...< h_{p}$. Therefore, we have $h_{-q}<_e h_{-q+1}<_e ...<_e h_{-1}<_e h_{0}<_e h_{1}<_e h_{2}$ $<_e ...<_e h_{p}$, and thus $h_{-q}c^{+}< ...< h_{-1}c^{+}< c^{+}< h_{1}c^{+}< ...< h_{p}c^{+}$. We always assume that, for all $k_1,k_2\in H$ and $c\in \{c^{+}, c^{-}\}$, $k_2k_1c\neq k_1c$, i.e., $Sign(k_2k_1c) \neq 0$. First, we build inverse mappings of hedges $h_{r}^{-}(x)$, for all $x\in H(c^{+})$, as follows: $(i)$ $x=c^{+}$. For all $r$ such that $-min(p,q)\leq r\leq min(p,q)$, we put $h_{r}^{-}(c^{+})=h_{-r}c^{+}$. In particular, $h_{0}^{-}(c^{+})=h_{0}c^{+}=c^{+}$. If $p>q$, for all $q+1\leq r\leq p$, $h_{r}^{-}(c^{+})=W$. If $p<q$, for all $-(p+1)\geq r\geq -q$, $h_{r}^{-}(c^{+})=1$. 
It can be easily verified that, for all $h\in H\cup \{I\}$, $h^{-}(c^{+})$ satisfies Condition (\[eq:im3\]). $(ii)$ $x=\sigma h_{s} c^{+}$, where $\sigma \in H^{*}$ and $h_{s}\neq I$, i.e., $s\neq 0$. If $r=s$, we put $h_{r}^{-}(\sigma h_{r} c^{+})= c^{+}$; hence, Condition (\[eq:im1\]) is satisfied. Otherwise, we have $r\neq s$. If $s-r< -q$, we put $h_{r}^{-}(\sigma h_{s}c^{+})=W$; if $s-r> p$, we put $h_{r}^{-}(\sigma h_{s}c^{+})=1$. Otherwise, we have $-q\leq s-r\leq p$. For a certain hedge $k$, $Sign(h_pkc^{+})$ can be either -1 or +1 . If $Sign(h_pkc^{+})=+1$, by Proposition \[prop99\], we have $kc^{+}< h_pkc^{+}$. Thus, it follows that $h_{-q}kc^{+}< ... < h_{-1}kc^{+}< kc^{+}<h_{1}kc^{+}< ... < h_{p}kc^{+}$. For example, we have $Sign(VPc^{+})=+1$ and $LPc^{+}< PPc^{+}< Pc^{+}< MPc^{+}< VPc^{+}$. Similarly, if $Sign(h_pkc^{+})=-1$, we have $h_{-q}kc^{+}> ... > h_{-1}kc^{+}> kc^{+}>h_{1}kc^{+}> ... > h_{p}kc^{+}$. For instance, we have $Sign(VLc^{+})=-1$ and $LLc^{+}> PLc^{+}> Lc^{+}> MLc^{+}> VLc^{+}$. In summary, the ordering of the elements in the set $\{h_{t}kc^{+}:-q\leq t\leq p\}$ can have one of the two above reverse directions. Therefore, for a pair $(s,s-r)$, there are two cases: \(a) The orderings of the elements in the sets $\{h_{t}h_{s}c^{+}:-q\leq t\leq p\}$ and $\{h_{t}h_{s-r}c^{+}:-q\leq t\leq p\}$ have the same direction, i.e., we have $h_{-q}h_{s}c^{+}< ... < h_{-1}h_{s}c^{+}< h_{s}c^{+}< h_{1}h_{s}c^{+}< ... < h_{p}h_{s}c^{+}$ and $h_{-q}h_{s-r}c^{+}< ... < h_{-1}h_{s-r}c^{+}< h_{s-r}c^{+}< h_{1}h_{s-r}c^{+}< ... < h_{p}h_{s-r}c^{+}$, or $h_{-q}h_{s}c^{+}> ... > h_{-1}h_{s}c^{+}> h_{s}c^{+}> h_{1}h_{s}c^{+}> ... > h_{p}h_{s}c^{+}$ and $h_{-q}h_{s-r}c^{+}> ... > h_{-1}h_{s-r}c^{+}> h_{s-r}c^{+}> h_{1}h_{s-r}c^{+}> ... > h_{p}h_{s-r}c^{+}$. In this case, we put $h_{r}^{-}(\sigma h_{s}c^{+})=\sigma h_{s-r} c^{+}$. \(b) The orderings have reverse directions, i.e., we have $h_{-q}h_{s}c^{+}< ... 
< h_{-1}h_{s}c^{+}< h_{s}c^{+}< h_{1}h_{s}c^{+}< ... < h_{p}h_{s}c^{+}$ and $h_{-q}h_{s-r}c^{+}> ... > h_{-1}h_{s-r}c^{+}> h_{s-r}c^{+}> h_{1}h_{s-r}c^{+}> ... > h_{p}h_{s-r}c^{+}$, or $h_{-q}h_{s}c^{+}> ... > h_{-1}h_{s}c^{+}> h_{s}c^{+}> h_{1}h_{s}c^{+}> ... > h_{p}h_{s}c^{+}$ and $h_{-q}h_{s-r}c^{+}< ... < h_{-1}h_{s-r}c^{+}< h_{s-r}c^{+}< h_{1}h_{s-r}c^{+}< ... < h_{p}h_{s-r}c^{+}$. We put $h_{r}^{-}(\sigma h_{s}c^{+})=\delta h_{s-r}c^{+}$, where $\delta$ is obtained as follows. If $\sigma$ is empty, then so is $\delta$. Otherwise, suppose that $\sigma=\sigma'h_{t}$, where $t\neq 0$. If $-q\leq -t \leq p$, we put $\delta = h_{-t}$; if $-t<-q$, then $\delta = h_{-q}$; if $-t>p$, then $\delta = h_{p}$. It can be seen that what we have done here is to make inverse mappings of hedges monotone. In particular, if $r=0$, then $s=s-r$. Thus, (b) is not the case, and by (a), we have $h_{0}^{-}(\sigma h_{s} c^{+})=\sigma h_{s} c^{+}$; this complies with the assumption $I^{-}(x)=x$, for all $x$. Second, for $x\in H(c^{-})$, we define $h^{-}_{r}(x)$ based on the above case as follows. Note that from $x\in H(c^{-})$, we have $-x\in H(c^{+}$). If $-min(p,q)\leq r\leq min(p,q)$, we put $h_{r}^{-}(x)=-h_{-r}^{-}(-x)$; if $p>q$, for all $q+1\leq r\leq p$, $h_{r}^{-}(x)=-h_{-q}^{-}(-x)$; if $p<q$, for all $-(p+1)\geq r\geq -q$, $h_{r}^{-}(x)=-h_{p}^{-}(-x)$. Finally, as usual, $h^{-}(1)=1$, $h^{-}(W)=W$, and $h^{-}(0)=0$, for all $h$. It can be easily seen that, for all $x\in H(c^{+})$ and $h\in H\cup \{I\}$, $h^{-}(x)\in H(c^{+})\cup \{W,1\}$, and, for all $x\in H(c^{-})$ and $h\in H\cup \{I\}$, $h^{-}(x)\in H(c^{-})\cup \{W,0\}$. It has been shown in the above example that the inverse mappings satisfy Condition (\[eq:im1\]). In the following, we prove that they also satisfy Conditions (\[eq:im2\]) and (\[eq:im3\]). The mappings defined above satisfy Condition (\[eq:im3\]), i.e., $h\leq_e k \Rightarrow h^{-}(x)\geq k^{-}(x)$. 
We prove that if $h <_e k$, then $h^{-}(x)\geq k^{-}(x)$. Assume that $h = h_{r_{1}}$, $k = h_{r_{2}}$, where $r_{1}<r_{2}$. First, we prove the case $x\in H(c^{+})$. The case $x=c^{+}$ has been shown to satisfy Condition (\[eq:im3\]) in Example \[ex3\]. Consider the case $x=\sigma h_{s}c^{+}$, where $s\neq 0$. From $r_{1}<r_{2}$ we have $s-r_{1}>s-r_{2}$. The case $s-r_{2}<-q$, i.e., $h_{r_{2}}^{-}(\sigma h_{s}c^{+})=W$, is trivial; so is the case $s-r_{1}>p$, i.e., $h_{r_{1}}^{-}(\sigma h_{s}c^{+})=1$. Otherwise, $-q\leq s-r_{2}<s-r_{1}\leq p$; thus, $h^{-}(x)=\delta_{1} h_{s-r_{1}}c^{+}$ and $k^{-}(x)=\delta_{2} h_{s-r_{2}}c^{+}$, for some $\delta_{1}$ and $\delta_{2}$. Since $h_{s-r_{1}}c^{+}> h_{s-r_{2}}c^{+}$, by Proposition \[prop1\], we have $h^{-}(x)> k^{-}(x)$. Second, consider the case $x\in H(c^{-})$. Since $-x\in H(c^{+})$, from the above case, we have, for all $t$, $h_{p}^{-}(-x)\leq h_{t}^{-}(-x)\leq h_{-q}^{-}(-x)$, and by Proposition \[prop3\], $-h_{p}^{-}(-x)\geq -h_{t}^{-}(-x)\geq -h_{-q}^{-}(-x)$. If $-r_{1}>p$, then $h_{r_{1}}^{-}(x)=-h_{p}^{-}(-x)$; if $-r_{2}<-q$, then $h_{r_{2}}^{-}(x)=-h_{-q}^{-}(-x)$. Thus, we always have $h_{r_{1}}^{-}(x)\geq h_{r_{2}}^{-}(x)$. Otherwise, $p\geq -r_{1}>-r_{2}\geq -q$; thus, $h_{r_{1}}^{-}(x)=-h_{-r_{1}}^{-}(-x)$ and $h_{r_{2}}^{-}(x)=-h_{-r_{2}}^{-}(-x)$. We have $h_{-r_{1}}^{-}(-x) \leq h_{-r_{2}}^{-}(-x)$; thus, $-h_{-r_{1}}^{-}(-x) \geq -h_{-r_{2}}^{-}(-x)$, i.e., $h_{r_{1}}^{-}(x)\geq h_{r_{2}}^{-}(x)$. Finally, for $x\in \{0,W,1\}$, we have $h^{-}(x)=k^{-}(x)=x$. The mappings defined above satisfy Condition (\[eq:im2\]), i.e., $x \geq y \Rightarrow h^{-}(x) \geq h^{-}(y)$. Suppose $x>y$. Consider $h_{r}^{-}(x)$ and $h_{r}^{-}(y)$, for some $r$. First, we prove the case $x,y\in H(c^{+})$. There are three possible cases: \(1) $x=c^{+}$ and $y=\sigma h_{t}c^{+}$, where $t<0$. If $t-r<-q$, then $h_{r}^{-}(y)=W\leq h_{r}^{-}(x)$; if $-r>p$, then $h_{r}^{-}(x)=1\geq h_{r}^{-}(y)$. 
Otherwise, $-q\leq t-r<-r\leq p$, thus $h_r^{-}(x)=h_{-r}c^{+}$ and $h_r^{-}(y)=\delta h_{t-r}c^{+}$. Since $h_{-r}c^{+}>h_{t-r}c^{+}$, we have $h_{r}^{-}(x)> h_{r}^{-}(y)$. \(2) $y=c^{+}$ and $x=\sigma h_{t}c^{+}$, where $t>0$. The proof is similar to that of (1). \(3) $x=\sigma h_{t}c^{+}$ and $y=\delta h_{s}c^{+}$, where $t\geq s$. If $s-r<-q$, then $h_{r}^{-}(y)=W\leq h_{r}^{-}(x)$, and if $t-r>p$, then $h_{r}^{-}(x)=1\geq h_{r}^{-}(y)$. Otherwise, $-q\leq s-r\leq t-r\leq p$; thus, $h_r^{-}(x)=\sigma' h_{t-r}c^{+}$ and $h_r^{-}(y)=\delta' h_{s-r}c^{+}$. There are two cases: (3.1) $t-r>s-r$. Since $h_{t-r}c^{+}>h_{s-r}c^{+}$, by Proposition \[prop1\], $h_r^{-}(x)> h_r^{-}(y)$. (3.2) $t=s$. Suppose $x=\sigma_{1}h_{m}h_{s}c^{+}$ and $y=\delta_{1}h_{n}h_{s}c^{+}$, where if $m = 0$, then $\sigma_{1}$ is empty, and if $n = 0$, then $\delta_{1}$ is empty. There are two cases: (3.2.1) $m\neq n$. Since $x>y$, by Proposition \[prop1\], $h_{m}h_{s}c^{+}>h_{n}h_{s}c^{+}$. If $h_{m}h_{s-r}c^{+}>h_{n}h_{s-r}c^{+}$, by (a), $h_r^{-}(x)=\sigma_{1}h_{m}h_{s-r}c^{+}$ and $h_r^{-}(y)=\delta_{1}h_{n} h_{s-r}c^{+}$. By Proposition \[prop1\], $h_{r}^{-}(x)>h_{r}^{-}(y)$. Otherwise, $h_{m}h_{s-r}c^{+}<h_{n}h_{s-r}c^{+}$. We prove the case $m>n$, and the case $m<n$ can be proved similarly. Since $m>n$ and $h_{m}h_{s-r}c^{+}<h_{n}h_{s-r}c^{+}$, we can see that the values $h_{z}h_{s-r}c^{+}$, where $z=p, p-1, ..., -q$, are increasing while the index $z$ is decreasing. Thus, for all $z$, $h_{p}h_{s-r}c^{+}\leq h_{z}h_{s-r}c^{+}\leq h_{-q}h_{s-r}c^{+}$. If $-m<-q$, then by (b), $h_r^{-}(x)=h_{-q}h_{s-r}c^{+}$. In any case, $h_r^{-}(y)=h_{z}h_{s-r}c^{+}$, for some $z$. Therefore, $h_r^{-}(x)\geq h_r^{-}(y)$. Similarly, if $-n>p$, then by (b), $h_r^{-}(y)=h_{p}h_{s-r}c^{+}$; thus, $h_r^{-}(x)\geq h_r^{-}(y)$. Otherwise, $-q\leq -m<-n\leq p$. By (b), $h_r^{-}(x)= h_{-m}h_{s-r}c^{+}$ and $h_r^{-}(y)=h_{-n}h_{s-r}c^{+}$. 
Since $-m<-n$, we have $h_{-m}h_{s-r}c^{+}>h_{-n}h_{s-r}c^{+}$, i.e., $h_r^{-}(x)>h_r^{-}(y)$. (3.2.2) $m=n$. Since $x>y$, by Proposition \[prop1\], there exist $k_1,k_2\in H\cup \{I\}$ and $k_1\neq k_2$, and $\sigma_2, \delta_2, \gamma \in H^{*}$ such that $x=\sigma_{2}k_1\gamma h_{m}h_{s}c^{+}, y=\delta_{2}k_2\gamma h_{m}h_{s}c^{+}$, and $k_1\gamma h_{m}h_{s}c^{+}>k_2\gamma h_{m}h_{s}c^{+}$. Also, since $x>y$, we have $m=n\neq 0$ (as a convention, all hedges appearing before $h_0=I$ in a representation of a value have no effect). There are two cases: either $h_{m}h_{s}c^{+}>h_{s}c^{+}$ or $h_{m}h_{s}c^{+}<h_{s}c^{+}$. We prove the case $h_{m}h_{s}c^{+}>h_{s}c^{+}$, and the other can be proved similarly. Since $h_{m}h_{s}c^{+}>h_{s}c^{+}$, by Proposition \[prop99\], $Sign(h_{m}h_{s}c^{+})=+1$. There are two cases: (3.2.2.1) $h_{m}h_{s-r}c^{+}<h_{s-r}c^{+}$. By (b), in any case, $h^{-}_{r}(x)=h^{-}_{r}(y)$. (3.2.2.2) $h_{m}h_{s-r}c^{+}>h_{s-r}c^{+}$. By (a), $h_{r}^{-}(x)=\sigma_{2}k_1\gamma h_{m}h_{s-r}c^{+}$ and $h_{r}^{-}(y)=\delta_{2}k_2\gamma h_{m}h_{s-r}c^{+}$. Since $h_{m}h_{s-r}c^{+}>h_{s-r}c^{+}$, $Sign(h_{m}h_{s-r}c^{+})=+1=Sign(h_{m}h_{s}c^{+})$. By Definition \[def3\], $Sign(k_1\gamma h_{m}h_{s-r}c^{+})=Sign(k_1\gamma h_{m}h_{s}c^{+})$ and $Sign(k_2\gamma h_{m}h_{s-r}c^{+})=Sign(k_2\gamma h_{m}h_{s}c^{+})$. Since $k_1\gamma h_{m}h_{s}c^{+}>k_2\gamma h_{m}h_{s}c^{+}$, there are three cases: (3.2.2.2.1) $k_1\gamma h_{m}h_{s}c^{+}>k_2\gamma h_{m}h_{s}c^{+}\geq \gamma h_{m}h_{s}c^{+}$. Thus, by definition, $k_1>k_2$. Moreover, by Proposition \[prop99\], $Sign(k_1\gamma h_{m}h_{s}c^{+})=+1$ and $Sign(k_2\gamma h_{m}h_{s}c^{+})\in \{0,+1\}$. Thus, $Sign(k_1\gamma h_{m}h_{s-r}c^{+})=+1$, i.e., $k_1\gamma h_{m}h_{s-r}c^{+}>\gamma h_{m}h_{s-r}c^{+}$. Since $k_1>k_2$, $k_1\gamma h_{m}h_{s-r}c^{+}\geq k_2\gamma h_{m}h_{s-r}c^{+}\geq \gamma h_{m}h_{s-r}c^{+}$; thus, $h^{-}_{r}(x)\geq h^{-}_{r}(y)$. 
(3.2.2.2.2) $\gamma h_{m}h_{s}c^{+}\geq k_1\gamma h_{m}h_{s}c^{+}>k_2\gamma h_{m}h_{s}c^{+}$. The proof is similar to that of (3.2.2.2.1). (3.2.2.2.3) $k_1\gamma h_{m}h_{s}c^{+}\geq \gamma h_{m}h_{s}c^{+}\geq k_2\gamma h_{m}h_{s}c^{+}$. By Proposition \[prop99\], $Sign(k_1\gamma h_{m}h_{s}c^{+})=Sign(k_1\gamma h_{m}h_{s-r}c^{+})\in \{0,+1\}$ and $Sign(k_2\gamma h_{m}h_{s}c^{+}) = Sign(k_2\gamma h_{m}h_{s-r}c^{+})\in \{0,-1\}$. Thus, $k_1\gamma h_{m}h_{s-r}c^{+}\geq \gamma h_{m}h_{s-r}c^{+}$ and $k_2\gamma h_{m}h_{s-r}c^{+}\leq \gamma h_{m}h_{s-r}c^{+}$. Since $k_1\gamma h_{m}h_{s}c^{+}>k_2\gamma h_{m}h_{s}c^{+}$, one of $Sign(k_1\gamma h_{m}h_{s-r}c^{+})$ and $Sign(k_2\gamma h_{m}h_{s-r}c^{+})$ must differ from 0; thus, $k_1\gamma h_{m}h_{s-r}c^{+}> k_2\gamma h_{m}h_{s-r}c^{+}$. Therefore, $h^{-}_{r}(x)> h^{-}_{r}(y)$.

Second, consider the case $x,y\in H(c^{-})$. In any case, $h_{r}^{-}(x)=-h_{z}^{-}(-x)$ and $h_{r}^{-}(y)=-h_{z}^{-}(-y)$, for some $z$. Since $x,y\in H(c^{-})$, we have $-x,-y\in H(c^{+})$. By the above case, $x>y\Rightarrow -x<-y \Rightarrow h_{z}^{-}(-x)\leq h_{z}^{-}(-y)\Rightarrow h_{r}^{-}(x)\geq h_{r}^{-}(y)$.

Finally, if $x\in H(c^{+})\cup \{W,1\}$ and $y\in H(c^{-})\cup \{0,W\}$, then $h^{-}(x)\geq W\geq h^{-}(y)$; and if $x=1$, then $h^{-}(x)\geq h^{-}(y)$.

Limited hedge algebras
----------------------

In the present work, we only deal with finite linguistic truth domains. The rationale for this is as follows. First, in daily life, humans only use linguistic terms of a limited length, since it is difficult to distinguish the meanings of terms with many hedges, such as *Very Little Probably True* and *More Little Probably True*. Hence, we can assume that applying any hedge to truth values that already carry a certain number $l$ of hedges will not change their meaning. In other words, canonical representations of all terms w.r.t. primary terms have a length of at most $l+1$.
Second, according to , in most applications to approximate reasoning, a small finite set of fuzzy truth values would, in general, be sufficient since each fuzzy truth value represents a fuzzy set rather than a single element of \[0,1\]. Third, more importantly, it is reasonable for us to consider only finitely many truth values in order to provide a logical system that can be implemented for computers. In fact, we later show that with a finite truth domain, we can obtain the Least Herbrand model for a finite program after a finite number of iterations of an immediate consequences operator. An *l-limited* HA, where $l$ is a positive integer, is a lin-HA in which canonical representations of all terms w.r.t. primary terms have a length of at most $l+1$. For an *l*-limited HA $\underline{X} = (X,G,H,\leq)$, since the set of hedges $H$ is finite, so is the linguistic truth domain $\overline{X}$. In the following, we give a particular example of inverse mappings of hedges for a 2-limited HA. \[ex99\] Consider a $2$-limited HA $\underline{X}=(X,\{c^{+}, c^{-}\},\{V,M,P,L\},\leq)$ with $L<_e P<_e I<_e M<_e V$. We have a linguistic truth domain $\overline{X}=\{ v_0=0, v_1=VVc^{-},v_2=MVc^{-}, v_3=Vc^{-}, v_4=PVc^{-}, v_5=LVc^{-}, v_6=VMc^{-}, v_7=MMc^{-}, v_8=Mc^{-}, v_9=PMc^{-}, v_{10}=LMc^{-}, v_{11}=c^{-}, v_{12}=VPc^{-}, v_{13}=MPc^{-}, v_{14}=Pc^{-}, v_{15}=PPc^{-}, v_{16}=LPc^{-}, v_{17}=LLc^{-}, v_{18}=PLc^{-}, v_{19}=Lc^{-}, v_{20}=MLc^{-}, v_{21}=VLc^{-}, v_{22}=W, v_{23}=VLc^{+}, v_{24}=MLc^{+}, v_{25}=Lc^{+}, v_{26}=PLc^{+}, v_{27}=LLc^{+}, v_{28}=LPc^{+}, v_{29}=PPc^{+}, v_{30}=Pc^{+}, v_{31}=MPc^{+}, v_{32}=VPc^{+}, v_{33}=c^{+}, v_{34}=LMc^{+}, v_{35}=PMc^{+}, v_{36}=Mc^{+}, v_{37}=MMc^{+}, v_{38}=VMc^{+}, v_{39}=LVc^{+}, v_{40}=PVc^{+}, v_{41}=Vc^{+}, v_{42}=MVc^{+}, v_{43}=VVc^{+}, v_{44}=1\}$. Based on the inverse mappings defined in Example \[ex3\], we can build the inverse mappings for this 2-limited HA with some modifications. 
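For quick experimentation, the 45 values of this truth domain can be kept in an index-ordered list, so that the linear order $v_i\leq v_j$ reduces to integer comparison of indices. A minimal Python sketch (the plain-text labels such as `'VVc-'` are our own shorthand for the terms above):

```python
# Truth domain of the 2-limited HA in Example ex99, listed in ascending
# order: the position i in this list is exactly the subscript i in v_i.
X = ["0", "VVc-", "MVc-", "Vc-", "PVc-", "LVc-", "VMc-", "MMc-", "Mc-",
     "PMc-", "LMc-", "c-", "VPc-", "MPc-", "Pc-", "PPc-", "LPc-", "LLc-",
     "PLc-", "Lc-", "MLc-", "VLc-", "W", "VLc+", "MLc+", "Lc+", "PLc+",
     "LLc+", "LPc+", "PPc+", "Pc+", "MPc+", "VPc+", "c+", "LMc+", "PMc+",
     "Mc+", "MMc+", "VMc+", "LVc+", "PVc+", "Vc+", "MVc+", "VVc+", "1"]

def leq(u, v):
    """v_i <= v_j in the linguistic truth domain, via index comparison."""
    return X.index(u) <= X.index(v)
```

With this encoding, the index of a value is exactly its subscript in $v_i$; for instance, `X.index('W')` is 22 and `X.index('c+')` is 33, as in the enumeration above.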
Since we are working with the 2-limited HA, if $h^{-}(x)=W$, for $x\in H(c^{+})$, we can put $h^{-}(x)=VLc^{+}$, the minimum value of $H(c^{+})$; if $h^{-}(x)=1$, for $x\in H(c^{+})$, we can put $h^{-}(x)=VVc^{+}$, the maximum value of $H(c^{+})$; if $h^{-}(x)=W$, for $x\in H(c^{-})$, we can put $h^{-}(x)=VLc^{-}$, the maximum value of $H(c^{-})$; and if $h^{-}(x)=0$, for $x\in H(c^{-})$, we can put $h^{-}(x)=VVc^{-}$, the minimum value of $H(c^{-})$. Changes are also made to the inverse mappings of hedges with a value in $\{c^{-},c^{+}\}$. This means that inverse mappings of hedges are not unique. This is acceptable since reasoning based on fuzzy logic is approximate, and inverse mappings of hedges should be built according to applications. Inverse mappings of hedges for the 2-limited HA are shown in Table \[tab3\], in which the value of an inverse mapping of a hedge $h^{-}$, appearing in the first row, of a value $x$, appearing in the first column, is in the corresponding cell. For example, $M^{-}(PPc^{+})=MLc^{+}$. Note that the values of $x$ appear in an ascending order. 
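Since the domain is finite, each inverse mapping is just a finite table lookup. A small fragment of Table \[tab3\] as a Python dictionary (only two rows of the table are shown; the string labels are our shorthand):

```python
# A fragment of Table tab3: keys are (hedge, value), entries are h^-(value).
INV = {
    ("V", "PPc+"): "VLc+", ("M", "PPc+"): "MLc+",
    ("P", "PPc+"): "VPc+", ("L", "PPc+"): "PMc+",
    ("V", "c+"): "Lc+", ("M", "c+"): "Pc+",
    ("P", "c+"): "Mc+", ("L", "c+"): "Vc+",
}

def h_inv(hedge, value):
    """Look up the inverse mapping h^-(value) in the (partial) table."""
    return INV[(hedge, value)]
```

For instance, `h_inv('M', 'PPc+')` returns `'MLc+'`, matching the example $M^{-}(PPc^{+})=MLc^{+}$ in the text.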
| $x$ | $V^{-}$ | $M^{-}$ | $P^{-}$ | $L^{-}$ |
|-----------|-----------|-----------|-----------|-----------|
| $0$ | $0$ | $0$ | $0$ | $0$ |
| $kVc^{-}$ | $VVc^{-}$ | $VVc^{-}$ | $kMc^{-}$ | $c^{-}$ $^{a}$ |
| $kMc^{-}$ | $VVc^{-}$ | $kVc^{-}$ | $c^{-}$ | $kPc^{-}$ $^{a}$ |
| $c^{-}$ | $Vc^{-}$ | $Mc^{-}$ | $Pc^{-}$ | $Lc^{-}$ |
| $VPc^{-}$ | $VMc^{-}$ | $PMc^{-}$ | $LLc^{-}$ | $VLc^{-}$ |
| $MPc^{-}$ | $MMc^{-}$ | $LMc^{-}$ | $PLc^{-}$ | $VLc^{-}$ |
| $Pc^{-}$ | $Mc^{-}$ | $c^{-}$ | $Lc^{-}$ | $VLc^{-}$ |
| $PPc^{-}$ | $PMc^{-}$ | $VPc^{-}$ | $MLc^{-}$ | $VLc^{-}$ |
| $LPc^{-}$ | $LMc^{-}$ | $VPc^{-}$ | $VLc^{-}$ | $VLc^{-}$ |
| $LLc^{-}$ | $LMc^{-}$ | $VPc^{-}$ | $VLc^{-}$ | $VLc^{-}$ |
| $PLc^{-}$ | $LMc^{-}$ | $MPc^{-}$ | $VLc^{-}$ | $VLc^{-}$ |
| $Lc^{-}$ | $c^{-}$ | $Pc^{-}$ | $VLc^{-}$ | $VLc^{-}$ |
| $MLc^{-}$ | $VPc^{-}$ | $PPc^{-}$ | $VLc^{-}$ | $VLc^{-}$ |
| $VLc^{-}$ | $PPc^{-}$ | $LPc^{-}$ | $VLc^{-}$ | $VLc^{-}$ |
| $W$ | $W$ | $W$ | $W$ | $W$ |
| $VLc^{+}$ | $VLc^{+}$ | $VLc^{+}$ | $LPc^{+}$ | $PPc^{+}$ |
| $MLc^{+}$ | $VLc^{+}$ | $VLc^{+}$ | $PPc^{+}$ | $VPc^{+}$ |
| $Lc^{+}$ | $VLc^{+}$ | $VLc^{+}$ | $Pc^{+}$ | $c^{+}$ |
| $PLc^{+}$ | $VLc^{+}$ | $VLc^{+}$ | $MPc^{+}$ | $LMc^{+}$ |
| $LLc^{+}$ | $VLc^{+}$ | $VLc^{+}$ | $VPc^{+}$ | $LMc^{+}$ |
| $LPc^{+}$ | $VLc^{+}$ | $VLc^{+}$ | $VPc^{+}$ | $LMc^{+}$ |
| $PPc^{+}$ | $VLc^{+}$ | $MLc^{+}$ | $VPc^{+}$ | $PMc^{+}$ |
| $Pc^{+}$ | $VLc^{+}$ | $Lc^{+}$ | $c^{+}$ | $Mc^{+}$ |
| $MPc^{+}$ | $VLc^{+}$ | $PLc^{+}$ | $LMc^{+}$ | $MMc^{+}$ |
| $VPc^{+}$ | $VLc^{+}$ | $LLc^{+}$ | $PMc^{+}$ | $VMc^{+}$ |
| $c^{+}$ | $Lc^{+}$ | $Pc^{+}$ | $Mc^{+}$ | $Vc^{+}$ |
| $kMc^{+}$ | $kPc^{+}$ | $c^{+}$ | $kVc^{+}$ | $VVc^{+}$ $^{a}$ |
| $kVc^{+}$ | $c^{+}$ | $kMc^{+}$ | $VVc^{+}$ | $VVc^{+}$ [^1] |
| $1$ | $1$ | $1$ | $1$ | $1$ |

: Inverse mappings of hedges[]{data-label="tab3"}

Many-valued modus ponens
------------------------

Our logic is truth-functional, i.e., the truth value of a compound formula, built from its components using a logical connective, is a function of the truth values of the components; this function is called the *truth function* of the connective. Our procedural semantics is developed based on many-valued modus ponens.
In order to guarantee the soundness of many-valued modus ponens, the truth function of an implication, called an *implicator*, must be *residual* to the *t-norm*, a commutative and associative binary operation on the truth domain, evaluating many-valued modus ponens [@Ha98]. Syntactically, many-valued modus ponens looks like:

$$\frac{(B,b), (A\leftarrow B,r)}{(A,\mathcal{C}(b,r))}$$

Its soundness semantically states that whenever $f$ is an interpretation such that $f(B)\geq b$, i.e., $f$ is a model of $(B,b)$, and $f(A\leftarrow B)=\leftarrow^{\bullet}(f(A),f(B))\geq r$, i.e., $f$ is a model of $(A\leftarrow B,r)$, then $f(A)\geq \mathcal{C}(b,r)$, where $\leftarrow^{\bullet}$ is an implicator, and $\mathcal{C}$ is a t-norm. This means that the truth value of $A$ under any model of $(B,b)$ and $(A\leftarrow B,r)$ is at least $\mathcal{C}(b,r)$. More precisely, let $r$ be a lower bound to the truth value of the implication $h\leftarrow b$, let $\mathcal{C}$ be a t-norm, and let $\leftarrow^\bullet$ be its residual implicator; we have:

$$\mathcal{C}(b,r)\leq h \mbox{ iff } r\leq \leftarrow^\bullet(h,b) \label{ti01}$$

According to , from (\[ti01\]), we have:

$$\begin{aligned} (\forall b)(\forall h)\; \mathcal{C}(b,\leftarrow^\bullet(h,b))\leq h \label{ti02} \\ (\forall b)(\forall r)\; \leftarrow^\bullet(\mathcal{C}(b,r),b)\geq r \label{ti03}\end{aligned}$$

Note that a t-norm need not be the truth function of any conjunction in our language. Recall that in many-valued logics, there are several prominent sets of connectives, called the Łukasiewicz, Gödel, and product logic ones. Each of these sets includes a pair consisting of a t-norm and its residual implicator. Since our truth values are linguistic, we cannot use the product logic connectives. Given a linguistic truth domain $\overline{X}$, since all the values in $\overline{X}$ are linearly ordered, we assume that they are $v_0\leq v_1\leq ...\leq v_n$, where $v_0=0$ and $v_n=1$.
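The residuation condition (\[ti01\]) can be verified exhaustively on such a finite chain. The sketch below restates, on indices $0,\dots,n$, the Łukasiewicz and Gödel pairs defined just below, and checks (\[ti01\]) by brute force (the chain length is an arbitrary choice of ours):

```python
def c_luk(i, j, n):
    """Lukasiewicz t-norm on indices: C_L(v_i, v_j) = v_max(0, i+j-n)."""
    return max(0, i + j - n)

def impl_luk(j, i, n):
    """Lukasiewicz implicator <-_L(v_j, v_i) on indices."""
    return n if i <= j else n + j - i

def c_goedel(i, j, n):
    """Goedel t-norm: C_G(v_i, v_j) = min(v_i, v_j)."""
    return min(i, j)

def impl_goedel(j, i, n):
    """Goedel implicator <-_G(v_j, v_i) on indices."""
    return n if i <= j else j

# Exhaustive check of residuation: C(b, r) <= h  iff  r <= impl(h, b).
n = 8
for C, impl in [(c_luk, impl_luk), (c_goedel, impl_goedel)]:
    for b in range(n + 1):
        for r in range(n + 1):
            for h in range(n + 1):
                assert (C(b, r, n) <= h) == (r <= impl(h, b, n))
```

The same loop also confirms the derived inequalities (\[ti02\]) and (\[ti03\]), since both follow from (\[ti01\]) by instantiation.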
The Łukasiewicz t-norm and implicator can be defined on $\overline{X}$ as follows:

$$\mathcal{C}_{L}(v_i,v_j)= \left\{\begin{array}{ll} v_{i+j-n} & \mbox{if $i+j-n> 0$} \\ v_0 & \mbox{otherwise} \end{array} \right.$$

$$\leftarrow_{L}^\bullet (v_j,v_i)= \left\{\begin{array}{ll} v_n & \mbox{if $i\leq j$} \\ v_{n+j-i} & \mbox{otherwise} \end{array} \right.$$

and those of Gödel can be:

$$\mathcal{C}_{G} (v_i,v_j)= min(v_i,v_j)$$

$$\leftarrow_{G}^\bullet (v_j,v_i)= \left\{\begin{array}{ll} v_n & \mbox{if $i\leq j$} \\ v_j & \mbox{otherwise} \end{array} \right.$$

Clearly, each of the implicators is the residuum of the corresponding t-norm. It can also be seen that t-norms are monotone in all arguments, and implicators are non-decreasing in the first argument and non-increasing in the second.

Fuzzy linguistic logic programming
==================================

Language
--------

Like , our language is a many-sorted (typed) predicate language. Let $\mathcal{A}$ denote the set of all attributes. For each sort of variables $A\in\mathcal{A}$, there is a set $\mathcal{C}^A$ of constant symbols, which are names of elements of the domain of $A$. In order to achieve the Least Herbrand model after a finite number of iterations of an immediate consequences operator, we do not allow any function symbols. This is not a severe restriction since in many database applications, there are no function symbols involved. Connectives can be: conjunctions $\wedge$ (also called Gödel) and $\wedge_{L}$ (Łukasiewicz); the disjunction $\vee$; implications $\leftarrow_L$ (Łukasiewicz) and $\leftarrow_G$ (Gödel); and linguistic hedges as unary connectives. For any connective $c$ different from hedges, its truth function is denoted by $c^\bullet$, and for a hedge connective $h$, its truth function is its inverse mapping $h^{-}$. The only quantifier allowed is the universal quantifier $\forall$. A *term* is either a constant or a variable.
An *atom* or *atomic formula* is of the form $p(t_1,...,t_n)$, where $p$ is an n-ary predicate symbol, and $t_1,...,t_n$ are terms of corresponding attributes $A_1,...,A_n$. A *body formula* is defined inductively as follows: ($i$) An atom is a body formula. ($ii$) If $B_1$ and $B_2$ are body formulae, then so are $\wedge(B_1,B_2)$, $\vee(B_1,B_2)$, and $hB_1$, where $h$ is a hedge. Here, we use the prefix notation for connectives in body formulae. A *rule* is a graded implication $(A\leftarrow B.r)$, where $A$ is an atom called *rule head*, $B$ is a body formula called *rule body*, and $r$ is a truth value different from 0. $(A\leftarrow B)$ is called the *logical part* of the rule. A *fact* is a graded atom ($A.b$), where $A$ is an atom called the logical part of the fact, and $b$ is a truth value different from 0. A *fuzzy linguistic logic program* (program, for short) is a finite set of rules and facts, where truth values are from the linguistic truth domain of an $l$-limited HA, hedges used in body formulae (if any) belong to the set of hedges of the HA, and there are no two rules (facts) having the same logical part, but different truth values. We follow Prolog conventions where predicate symbols and constants begin with a lower-case letter, and variables begin with a capital letter. \[ex101\] Assume we use the truth domain from the 2-limited HA in Example \[ex99\], that is, $\underline{X} = (X,\{False,True\},\{V, M, P,L\},\leq)$, and we have the following knowledge base: ($i$) The sentence “*If a student studies very hard, and his/her university is probably high-ranking, then he/she will be a good employee*" is *Very More True*. ($ii$) The sentence “*The university where Ann is studying is high-ranking*" is *Very True*. ($iii$) The sentence “*Ann is studying hard*" is *More True*. Let *gd\_em, st\_hd, hira\_un*, and *T* stand for “*good employee*", “*study hard*", “*high-ranking university*", and “*True*", respectively. 
Then, the knowledge base can be represented by the following program:

$$\begin{aligned} (gd\_em(X)\leftarrow_G \wedge(V\;st\_hd(X),P\;hira\_un(X)).VMT)\\ (hira\_un(ann).VT)\\ (st\_hd(ann).MT)\end{aligned}$$

Note that the predicates $st\_hd(X)$ and $hira\_un(X)$ in the only rule are modified by the hedges $V$ and $P$, respectively. We assume as usual that the underlying language of a program $P$ is defined by constants (if no such constant exists, we add some constant such as $a$ to form ground terms) and predicate symbols appearing in $P$. With this understanding, we can now refer to the *Herbrand universe* of sort $A$, which consists of all ground terms of $A$, by $U^A_P$, and to the *Herbrand base* of $P$, which consists of all ground atoms, by $B_P$ [@Ll87]. A program $P$ can be represented as a partial mapping:

$$P:Formulae\rightarrow \overline{X} \setminus \{0\}$$

where the domain of $P$, denoted by $dom(P)$, is finite and consists only of logical parts of rules and facts, and $\overline{X}$ is a linguistic truth domain. The truth value of a rule ($A\leftarrow B.r$) is $r=P(A\leftarrow B)$, and that of a fact ($A.b$) is $b=P(A)$. Since in our logical system we only want to obtain the computed answers for queries, we do not look for 1-tautologies to extend the capabilities of the system, although we can have some due to the fact that our connectives are classical many-valued ones (see ).

Declarative semantics
---------------------

Since we are working with logic programs without negation, it is reasonable to consider only fuzzy Herbrand interpretations and models. Given a program $P$, let $\overline{X}$ be the linguistic truth domain; a *fuzzy linguistic Herbrand interpretation* (interpretation, for short) $f$ is a mapping $f:B_P\rightarrow \overline{X}$. The ordering $\leq$ in $\overline{X}$ can be extended to the set of interpretations as follows. We say $f_1\sqsubseteq f_2$ iff $f_1(A)\leq f_2(A)$ for all ground atoms $A$.
Clearly, the set of all interpretations of a program is a complete lattice under $\sqsubseteq$. The least interpretation, called the *bottom interpretation* and denoted by $\bot$, maps every ground atom to 0. An interpretation $f$ can be extended to all ground formulae, denoted by $\overline{f}$, using the unique homomorphic extension as follows: ($i$) $\overline{f}(A)=f(A)$, if $A$ is a ground atom; ($ii$) $\overline{f}(c(B_1,B_2))=c^\bullet(\overline{f}(B_1),\overline{f}(B_2))$, where $B_1,B_2$ are ground formulae, and $c$ is a binary connective; ($iii$) $\overline{f}(hB)=h^{-}(\overline{f}(B))$, where $B$ is a ground body formula, and $h$ is a hedge. For non-ground formulae, since all the formulae in the language are considered universally quantified, the interpretation $\overline{f}$ is defined as

$$\overline{f}(\varphi)=\overline{f}(\forall\varphi)=inf_{\vartheta}\{\overline{f}(\varphi\vartheta)|\varphi\vartheta \mbox{ is a ground instance of } \varphi\}$$

where $\forall\varphi$ means universal quantification of all variables with free occurrence in $\varphi$. An interpretation $f$ is a *model* of a program $P$ if for all formulae $\varphi\in dom(P)$, we have $\overline{f}(\varphi)\geq P(\varphi)$. Therefore, $P(\varphi)$ is understood as a lower bound to the truth value of $\varphi$. A *query* is an atom used as a question $?A$ prompting the system. Given a program $P$, let $\overline{X}$ be the linguistic truth domain. A pair $(x;\theta)$, where $x\in \overline{X}$, and $\theta$ is a substitution, is called a *correct answer* for $P$ and a query $?A$ if for any model $f$ of $P$, we have $\overline{f}(A\theta)\geq x$.

Procedural semantics
--------------------

Given a program $P$ and a query $?A$, we want to compute a lower bound for the truth value of $A$ under any model of $P$. Recall that in the theory of many-valued modus ponens [@Ha98], given $(A\leftarrow B.r)$ and $(B.b)$, we have $(A.\mathcal{C}(b,r))$.
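The homomorphic extension $\overline{f}$ is a direct recursion over ground body formulae. A Python sketch over truth-value indices (the nested-tuple encoding of prefix notation, the choice of $\wedge^{\bullet}=min$ and $\vee^{\bullet}=max$, and the toy hedge mapping in the usage note are our own assumptions):

```python
def extend(f, h_inv, formula):
    """Evaluate a ground body formula under interpretation f (truth indices).

    Atoms are strings; compound formulae are ('and', B1, B2),
    ('or', B1, B2), or ('h', hedge_name, B), mirroring clauses (i)-(iii).
    """
    if isinstance(formula, str):                      # (i) ground atom
        return f[formula]
    tag = formula[0]
    if tag == "and":                                  # conjunction as min
        return min(extend(f, h_inv, formula[1]),
                   extend(f, h_inv, formula[2]))
    if tag == "or":                                   # disjunction as max
        return max(extend(f, h_inv, formula[1]),
                   extend(f, h_inv, formula[2]))
    if tag == "h":                                    # (iii) hedge connective
        return h_inv[formula[1]](extend(f, h_inv, formula[2]))
    raise ValueError(f"unknown connective: {tag}")
```

For example, with `f = {'a': 3, 'b': 7}` and a toy inverse mapping `h_inv = {'V': lambda v: max(0, v - 1)}`, the formula `('and', 'a', ('h', 'V', 'b'))` evaluates to `min(3, 6) = 3`.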
As in , our procedural semantics utilises *admissible rules*. Admissible rules act on tuples of words in the alphabet, denoted by $L^{e}_P$, which is the disjoint union of the alphabet of the language of $dom(P)$ augmented by the truth functions of the connectives (except $\leftarrow_i$ and $\leftarrow^{\bullet}_i$) and symbols $\mathcal{C}_i$, and the linguistic truth domain. Admissible rules are defined as follows:

**Rule 1.** From $((X A_m Y);\vartheta)$ infer $((X \mathcal{C}(B,r) Y)\theta;\vartheta \theta)$ if

1. $A_m$ is an atom (called *the selected atom*)
2. $\theta$ is an mgu of $A_m$ and $A$
3. $(A\leftarrow B.r)$ is a rule in the program.

**Rule 2.** From $(X A_m Y)$ infer $(X 0 Y)$. This rule is usually used for situations where $A_m$ does not unify with any rule head or logical part of facts in the program.

**Rule 3.** From $(X hB Y)$ infer $(X h^{-}(B) Y)$ if $B$ is a non-empty body formula, and $h$ is a hedge.

**Rule 4.** From $((X A_m Y);\vartheta)$ infer $((X r Y)\theta;\vartheta\theta)$ if

1. $A_m$ is an atom (also called the selected atom)
2. $\theta$ is an mgu of $A_m$ and $A$
3. $(A.r)$ is a fact in the program.

**Rule 5.** If there are no more predicate symbols in the word, replace all connectives $\wedge$’s and $\vee$’s with $\wedge^{\bullet}$ and $\vee^{\bullet}$, respectively. Then, since this word contains only some additional $\mathcal{C}$’s, $h^{-}$’s, and truth values, evaluate it. The substitution remains unchanged.

Note that our rules except Rule 3 are the same as those in . Let $P$ be a program and $?A$ a query. A pair $(r;\theta)$, where $r$ is a truth value, and $\theta$ is a substitution, is said to be a *computed answer* for $P$ and $?A$ if there is a sequence $G_0,...,G_n$ such that 1. every $G_i$ is a pair consisting of a word in $L^e_P$ and a substitution 2. $G_0=(A;id)$ 3.
every $G_{i+1}$ is inferred from $G_i$ by one of the admissible rules (here we also utilise the usual Prolog renaming of variables along derivation) 4. $G_n=(r;\theta')$ and $\theta=\theta'$ restricted to variables of $A$, and we say that the *computation* has a length of $n$. Let us give an example of a computation. \[ex4\] We take the program in Example \[ex101\], that is: $$\begin{aligned} (gd\_em(X)\leftarrow_G \wedge(Vst\_hd(X),Phira\_un(X)).VMT)\\ (hira\_un(ann).VT)\\ (st\_hd(ann).MT)\end{aligned}$$ Given a query $?gd\_em(ann)$, we can have the following computation (since the query is ground, the substitution in the computed answer is the identity): $$\begin{aligned} ?gd\_em(ann)\\ \mathcal{C}_G(\wedge(V\;st\_hd(ann), P\;hira\_un(ann)),V M T)\\ \mathcal{C}_G(\wedge(V^{-}(st\_hd(ann)), P\;hira\_un(ann)),V M T)\\ \mathcal{C}_G(\wedge(V^{-}(st\_hd(ann)), P^{-}(hira\_un(ann))),V M T)\\ \mathcal{C}_G(\wedge(V^{-}(M T), P^{-}(hira\_un(ann))),V M T)\\ \mathcal{C}_G(\wedge(V^{-}(M T), P^{-}(V T)),V M T)\\ \mathcal{C}_G(\wedge^\bullet(V^{-}(M T), P^{-}(V T)),V M T)\end{aligned}$$ Using the inverse mappings of hedges in Table \[tab3\], we have $\mathcal{C}_G(\wedge^\bullet(V^{-}(M T),P^{-}(V T))$, $V M T)=\mathcal{C}_G(min(P T, VV T),$ $V M T)=\mathcal{C}_G(P T,V M T)=PT$. Hence, the sentence “*Ann will be a good employee*" is at least *Probably True*. This result is reasonable as follows: one of the conditions constituting the result is the one saying that “*The student studies very hard*"; since “*Ann is studying hard*" is *MT* (*More True*), the truth value of “*Ann is studying very hard*" is $V^{-}(M T)$; and since $MT<VT$, we have $V^{-}(M T)<V^{-}(V T)=T$, and $V^{-}(M T)=PT$ is acceptable. If we use the Łukasiewicz implication instead of the Gödel implication in the rule, then in the computation, the Gödel t-norm will be replaced by the Łukasiewicz t-norm, and, finally, we have an answer $(gd\_em(ann).MLT)$. 
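The computed answers above can be double-checked numerically with the value indices of Example \[ex99\] ($MT=v_{36}$, $PT=v_{30}$, $VT=v_{41}$, $VMT=v_{38}$, $VVT=v_{43}$, $MLT=v_{24}$, and the top index $n=44$); a short Python sketch:

```python
N = 44                                        # index of the top value 1
MT, PT, VT, VMT, VVT, MLT = 36, 30, 41, 38, 43, 24

# From Table tab3: V^-(MT) = PT and P^-(VT) = VVT.
body = min(PT, VVT)                           # wedge-bullet of the subresults

goedel_answer = min(body, VMT)                # Goedel t-norm C_G
lukasiewicz_answer = max(0, body + VMT - N)   # Lukasiewicz t-norm C_L

assert goedel_answer == PT                    # the answer PT in the text
assert lukasiewicz_answer == MLT              # the answer MLT in the text
```

This reproduces both computed answers of the example: $PT$ with the Gödel implication and $MLT$ with the Łukasiewicz one.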
From the definition of the procedural semantics, we can see that in order to increase the chances of finding a good computed answer, one with a better truth value, along a computation, we should do the following: $(i)$ If there is more than one rule or fact whose rule heads or logical parts can be unified with the selected atom, and among such rules or facts there is only one to which the highest truth value is assigned, then we choose it for the next step. $(ii)$ If there is a fact among such rules or facts that are associated with the highest truth value, then we choose the fact for the next step, since the t-norm evaluating such a rule never yields a higher truth value than that of the fact. $(iii)$ If there is more than one such rule, but no fact, with the highest truth value, then we choose the one with the Gödel implication for the next step since in this case, the Gödel t-norm usually, but not always (since it also depends on the bodies of the rules), yields a better truth value than the Łukasiewicz t-norm. In Example \[ex4\], it has been shown that with the same body formula, the rule with the Gödel implication yields a better result ($PT$) than the rule with the Łukasiewicz implication ($MLT$).

Soundness of the procedural semantics
-------------------------------------

\[th04\] Every computed answer for a program $P$ and a query $?A$ is a correct answer for $P$ and $?A$. Assume that a pair $(r;\theta)$ is a computed answer for $P$ and $?A$. Let $f$ be any model of $P$; we will prove that $\overline{f}(A\theta)\geq r$. The proof is by induction on the length $n$ of computations. First, suppose that $n=1$. Hence, either Rule 2 or Rule 4 has been applied. The case of Rule 2 is obvious since $r=0$. The case of Rule 4 implies that $P$ has a fact $(C.r)$ such that $A\theta=C\theta$. Therefore, $\overline{f}(A\theta)=\overline{f}(C\theta)\geq \overline{f}(C)\geq P(C)=r$.
Next, suppose that the result holds for computed answers coming from computations of length $\leq k-1$, where $k>1$. We prove that it also holds for a computation of length $k$. Assume that the sequence of the substitutions in the computation is $\theta_1,...,\theta_k$ (some of them are the identity), where $\theta=\theta_1...\theta_k$ restricted to variables of $A$. Since the length of the computation is $k>1$, the first admissible rule to be applied is Rule 1. This means there exists a rule $(C\leftarrow_i B.c)$ in $P$ such that $A\theta_1=C\theta_1$. For each atom $D$ in the rule body $B\theta_1$, there exists a computation of length $\leq k-1$ for it. Suppose $d$ is the computed truth value for $D$ in that computation; by the induction hypothesis, we have $d\leq \overline{f}(D\theta_2...\theta_k)$. Furthermore, since the truth functions of the conjunctions, the disjunction, and inverse mappings of hedges are non-decreasing in all their arguments, if $b$ is the computed truth value for the whole rule body $B\theta_1$, which is calculated from all the $d$ for each atom $D$ using the truth functions of the connectives, then $b\leq \overline{f}(B\theta_1\theta_2...\theta_k)$. Therefore, we have: $r=\mathcal{C}_i(b,c)\leq \mathcal{C}_i(\overline{f}(B\theta_1...\theta_k),c)\leq^{(*)} \mathcal{C}_i(\overline{f}(B\theta_1...\theta_k),\overline{f}(C\theta_1...\theta_k\leftarrow_i B\theta_1...\theta_k))= \mathcal{C}_i(\overline{f}(B\theta_1...\theta_k),\leftarrow_i^\bullet(\overline{f}(C\theta_1...\theta_k), \overline{f}(B\theta_1...\theta_k))) \leq^{(**)} \overline{f}(C\theta_1...\theta_k)=\overline{f}(A\theta_1...\theta_k)=\overline{f}(A\theta)$, where (\*) holds since $f$ is a model of $P$, and (\*\*) follows from (\[ti02\]).

Fixpoint semantics
------------------

Similar to , the immediate consequences operator, introduced by van Emden and Kowalski, can be generalised to the case of fuzzy linguistic logic programming as follows. \[ico\] Let $P$ be a program.
The operator $T_P$ mapping from interpretations to interpretations is defined as follows. For every interpretation $f$ and every ground atom $A\in B_P$, $T_P(f)(A)=max\{sup\{\mathcal{C}_i(\overline{f}(B),r):(A\leftarrow_i B.r)$ is a ground instance of a rule in $P\}$, $sup\{b:(A.b)$ is a ground instance of a fact in $P\}\}$. Since $P$ is function-free, each Herbrand universe $U_P^A$ of a sort $A$ is finite, and so is its Herbrand base $B_P$. Hence, for each $A\in B_P$, there are a finite number of ground instances of rule heads and logical parts of facts which match $A$. Therefore, the suprema in the definition of $T_P$ are in fact maxima. Similar to , we have the following results. \[th01\] The operator $T_P$ is monotone. Let $f_1$ and $f_2$ be two interpretations such that $f_1\sqsubseteq f_2$; we prove that $T_P(f_1)\sqsubseteq T_P(f_2)$. First, let us prove $\overline{f_1}(B)\leq \overline{f_2}(B)$ for all ground body formulae $B$ by induction on the structure of the formulae. In the base case where $B$ is a ground atom, we have $\overline{f_1}(B)=f_1(B)\leq f_2(B)=\overline{f_2}(B)$. For the inductive case, consider a ground body formula $B$. By case analysis and the induction hypothesis, we have $B=\wedge(B_1, B_2)$, or $B=\vee(B_1, B_2)$, or $B=hB_1$ such that $\overline{f_1}(B_1)\leq \overline{f_2}(B_1)$ and $\overline{f_1}(B_2)\leq \overline{f_2}(B_2)$. By definition, we have $\overline{f_1}(B)=\wedge^{\bullet}(\overline{f_1}(B_1),\overline{f_1}(B_2))\leq \wedge^{\bullet}(\overline{f_2}(B_1),\overline{f_2}(B_2))=\overline{f_2}(B)$, or $\overline{f_1}(B)=\vee^{\bullet}(\overline{f_1}(B_1),\overline{f_1}(B_2))\leq \vee^{\bullet}(\overline{f_2}(B_1),\overline{f_2}(B_2))=\overline{f_2}(B)$, or $\overline{f_1}(B)$ = $h^{-}(\overline{f_1}(B_1))\leq h^{-}(\overline{f_2}(B_1))=\overline{f_2}(B)$, respectively. Thus, $\overline{f_1}(B)\leq \overline{f_2}(B)$ for all ground body formulae $B$. Now, let $A$ be any ground atom. 
If $A$ does not unify with any rule head or logical part of facts in $P$, then $T_P(f_1)(A)=T_P(f_2)(A)=0$. Otherwise, since the value of the second $sup$ in Definition \[ico\] does not depend on the interpretations, what we need to consider now is the first $sup$. For any ground instance $(A\leftarrow_i B.r)$ of a rule in $P$, since $B$ is ground, we have $\mathcal{C}_i(\overline{f_1}(B),r)\leq \mathcal{C}_i(\overline{f_2}(B),r)$. By taking suprema for all ground instances $(A\leftarrow_i B.r)$ on both sides, we have $sup\{\mathcal{C}_i(\overline{f_1}(B),r)\}\leq sup\{\mathcal{C}_i(\overline{f_2}(B),r)\}$. Therefore, $T_P(f_1)(A)\leq T_P(f_2)(A)$ for all ground atoms $A$. \[th02\] The operator $T_P$ is continuous. Recall that a mapping $f:L\rightarrow L$, where $L$ is a complete lattice, is said to be *continuous* if for every directed subset $X$ of $L$, $f(sup(X))=sup\{f(x)|x\in X\}$. Let us prove that for each directed set $X$ of interpretations, $T_P(sup (X))=sup\{T_P(f)|f\in X\}$. Since $T_P$ is monotone, we have $sup\{T_P(f)|f\in X\}\sqsubseteq T_P(sup (X))$. On the other hand, since the Herbrand base $B_P$ and the truth domain are finite, the set of all Herbrand interpretations of $P$ is finite. Therefore, for each finite directed set $X$ of interpretations, we have an upper bound of $X$ in $X$. This, together with the monotonicity of $T_P$, leads to $T_P(sup(X))\sqsubseteq sup\{T_P(f):f\in X\}$. \[th03\] An interpretation $f$ is a model of a program $P$ iff $T_P(f)\sqsubseteq f$. First, assume that $f$ is a model of $P$; we prove that $T_P(f)\sqsubseteq f$. Let $A$ be any ground atom. Consider the following cases: ($i$) If $A$ is neither a ground instance of a logical part of facts nor a ground instance of a rule head in $P$, then $T_P(f)(A)=0\leq f(A)$. ($ii$) For each ground instance $(A.b)$ of a fact, say $(C.b)$, in $P$, since $f$ is a model of $P$, and $A$ is a ground instance of $C$, we have $b=P(C)\leq \overline{f}(C)\leq f(A)$. 
Hence, $f(A)\geq sup\{b|(A.b)$ is a ground instance of a fact in $P\}$. ($iii$) For each ground instance $(A\leftarrow_i B.r)$ of a rule, say $(C.r)$, in $P$, we have: $\mathcal{C}_i(\overline{f}(B),r)=\mathcal{C}_i(\overline{f}(B),P(C))\leq^{(*)} \mathcal{C}_i(\overline{f}(B),\overline{f}(A\leftarrow_i B))=\mathcal{C}_i(\overline{f}(B),\leftarrow_i^\bullet(f(A),\overline{f}(B)))\leq^{(**)}f(A)$, where (\*) holds since $(A\leftarrow_i B)$ is a ground instance of $C$, and (\*\*) follows from (\[ti02\]). Therefore, $f(A)\geq sup\{\mathcal{C}_i(\overline{f}(B),r)|(A\leftarrow_i B.r)$ is a ground instance of a rule in $P\}$. Thus, by definition, $T_P(f)(A)\leq f(A)$ for all $A\in B_P$. Finally, let us show that if $T_P(f)\sqsubseteq f$, then $f$ is a model of $P$. Let $C$ be any formula in $dom(P)$. There are two cases: ($i$) $(C.c)$, where $c$ is a truth value, is a fact in $P$. For each ground instance $A$ of $C$, by hypothesis and definition, we have $f(A)\geq T_P(f)(A)\geq sup\{b|(A.b)$ is a ground instance of a fact in $P\}\geq c=P(C)$. Therefore, $\overline{f}(C)=inf\{f(A)|A$ is a ground instance of $C\}\geq P(C)$. ($ii$) $(C.c)$ is a rule in $P$. For each ground instance $A\leftarrow_j D$ of $C$, by hypothesis and definition, we have $f(A)\geq T_P(f)(A)\geq sup\{\mathcal{C}_i(\overline{f}(B),r)|(A\leftarrow_i B.r)$ is a ground instance of a rule in $P\}\geq \mathcal{C}_j(\overline{f}(D),c) =\mathcal{C}_j(\overline{f}(D),P(C))$. Hence, $\overline{f}(A\leftarrow_j D)=\leftarrow_j^\bullet(f(A),\overline{f}(D))\geq^{(*)}\leftarrow_j^\bullet(\mathcal{C}_j(\overline{f}(D),P(C)),\overline{f}(D))\geq^{(**)} P(C)$, where (\*) holds since $\leftarrow_i^\bullet$ is non-decreasing in the first argument, and (\*\*) follows from (\[ti03\]). Consequently, $\overline{f}(C)=inf\{\overline{f}(A\leftarrow_j D)|(A\leftarrow_j D)$ is a ground instance of $C\}\geq P(C)$. 
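For ground, function-free programs, the operator $T_P$ of Definition \[ico\] is directly computable, and its iteration from the bottom interpretation $\bot$ can be watched converge. A minimal Python sketch over truth-value indices (the two-clause recursive program, the restriction to Gödel rules with conjunctive bodies, and the index values are our own toy assumptions):

```python
# A toy ground program over truth indices 0..8: facts map atoms to truth
# values; a rule (head, body_atoms, r) reads (head <-_G /\(body). r).
facts = {"q": 5}
rules = [("p", ["q"], 7), ("q", ["p"], 6)]   # mutually recursive p and q

def t_p(f):
    """One application of the immediate consequences operator T_P."""
    g = {atom: 0 for atom in f}
    for atom, b in facts.items():
        g[atom] = max(g[atom], b)            # sup over ground facts
    for head, body, r in rules:
        val = min(min(f[a] for a in body), r)   # Goedel t-norm C_G
        g[head] = max(g[head], val)          # sup over ground rules
    return g

# Iterate from the bottom interpretation; finiteness guarantees termination.
f = {"p": 0, "q": 0}
while (g := t_p(f)) != f:
    f = g
assert f == {"p": 5, "q": 5}                 # least fixpoint = least model
```

Despite the recursion between `p` and `q`, the iteration stabilises after a few steps, in line with the finite bound on the number of iterations noted in the text.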
Since the given immediate consequences operator $T_P$ satisfies Theorem \[th02\] and Theorem \[th03\], and the set of Herbrand interpretations of the program $P$ is a complete lattice under the relation $\sqsubseteq$, due to Knaster and Tarski [@Tarski55], the Least Herbrand model of the program $P$ is exactly the least fixpoint of $T_P$ and can be obtained by iterating $T_P$ from the bottom interpretation $\bot$ after $\omega$ iterations, where $\omega$ is the smallest limit ordinal (apart from 0). Furthermore, since the truth domain $\overline{X}$ and the Herbrand base $B_P$ are finite, the least model of $P$ can be obtained after at most $\mathcal{O}(|P||\overline{X}|)$ steps, where $|A|$ denotes the cardinality of the set $A$. This is an important tool for dealing with recursive programs, for which computations can be infinite.

Completeness of the procedural semantics
----------------------------------------

The following theorem shows that $T_P^n(\bot)$ in fact builds computed answers for ground atoms. \[th05\] Let $P$ be a program and $A$ a ground atom. For all $n$, there exists a computation for $P$ and the query $?A$ such that the computed answer is $(T_P^n(\bot)(A);id)$. Note that since $A$ is ground, the substitutions in all computed answers are always the identity. We prove the result by induction on $n$. Suppose first that $n=0$. Since $T_P^0(\bot)(A)=0$, there is a computation for $P$ and $?A$ in which only Rule 2 is applied with the computed answer $(0;id)$. Now suppose that the result holds for $n-1$, where $n\geq 1$; we prove that it also holds for $n$. There are two cases: ($i$) $A$ does not unify with any rule head or logical part of facts in $P$. Then, $T_P^n(\bot)(A)=0$, and the computation is the same as the case $n=0$.
($ii$) Otherwise, since the suprema in the definition of $T_P$ are in fact maxima, there exists either a ground instance $(A.b)$ of a fact in $P$ such that $T_P^n(\bot)(A)=b$ or a ground instance $(A\leftarrow_i B.r)$ of a rule in $P$ such that $T_P^n(\bot)(A)=\mathcal{C}_i(T_P^{n-1}(\bot)(B),r)$. For the former case, there is a computation for $P$ and $?A$ in which only Rule 4 is applied, and the computed answer is $(b;id)$. For the latter, by the induction hypothesis, for each ground atom $B_j$ in $B$, there exists a computation such that $T_P^{n-1}(\bot)(B_j)$ is the computed truth value for $B_j$. Therefore, the computed truth value of the whole body $B$ is $T_P^{n-1}(\bot)(B)$, calculated from all $T_P^{n-1}(\bot)(B_j)$ along the complexity of $B$ using the truth functions of the connectives. Clearly, there is a computation for $P$ and $?A$ in which the first rule to be applied is Rule 1 carried out on the rule in $P$ which has $(A\leftarrow_i B.r)$ as its ground instance, and the rest is a combination of the computations of each $B_j$ in $B$. It is clear that the computed truth value for $?A$ in this computation is $T_P^n(\bot)(A)$. The completeness result for the case of ground queries is shown as follows. \[th06\] For every correct answer $(x;id)$ of a program $P$ and a ground query $?A$, there exists a computed answer $(r;id)$ for $P$ and $?A$ such that $r\geq x$. Since $(x;id)$ is a correct answer of $P$ and $?A$, for every model $f$ of $P$, we have $f(A)\geq x$. In particular, let $M_P$ be the Least Herbrand model of $P$; $M_P(A)=T_P^\omega(\bot)(A)\geq x$. Recall that $T_P^\omega(\bot)(A)=sup\{T_P^n(\bot)(A): n<\omega\}$. Since the set of interpretations is finite, the increasing sequence $T_P^n(\bot)$ becomes stationary after finitely many steps; hence, the $sup$ operator is in fact a maximum. Therefore, there exists $n<\omega$ such that $T_P^n(\bot)(A)=T_P^\omega(\bot)(A)$. By Theorem \[th05\], there exists a computation for $P$ and $?A$ such that the computed answer is $(T_P^n(\bot)(A);id)$; thus, the theorem is proved.
The completeness for the case of non-ground queries can be obtained by employing extended versions of the Mgu lemma and the Lifting lemma [@Ll87] as follows. We define several more notions. Consider a computation of length $n$ for a program $P$ and a query $?A$; we call each $G_i, i=0...(n-1)$, in the sequence of the computation an *intermediate query*, and the part of the computation from $G_i$ to $G_n$ an *intermediate computation* of length $n-i$. Thus, a computation is a special intermediate computation with $i=0$. Similar to , we define an *unrestricted computation* (an *unrestricted intermediate computation*) as a computation (an intermediate computation) in which the substitutions $\theta_i$ in each step are not necessarily most general unifiers (mgu's), but only unifiers. In the following proofs, since it is clear which program a computed answer refers to, we may omit the program and state that the computed answer is for the (intermediate) query, or that the query has the computed answer. The same convention is applied to (unrestricted) (intermediate) computations and correct answers. \[lemma1\] Let $P$ be a program and $G_i$ an intermediate query. Suppose that there is an unrestricted intermediate computation for $P$ and $G_i$. Then, there exists an intermediate computation for $P$ and $G_i$ with the same computed truth value and length such that, if $\theta_{i+1},...,\theta_n$ are the unifiers from the unrestricted intermediate computation, and $\theta_{i+1}',...,\theta_n'$ are the mgu’s from the intermediate computation, then there exists a substitution $\gamma$ such that $\theta_{i+1}...\theta_n=\theta_{i+1}'...\theta_n'\gamma$. The proof is by induction on the length of the unrestricted intermediate computation. Suppose first that the length is 1, i.e., $n=i+1$. 
Since if either Rule 2 or Rule 5 is applied, the unifier is the identity (an mgu), and Rule 1 and Rule 3 cannot be the last rule to be applied in an unrestricted intermediate computation, the rule to be applied here is Rule 4. Since Rule 4 is the last rule to be applied in the unrestricted intermediate computation, it can be shown that the unrestricted intermediate computation is also an unrestricted computation of length 1. This means $i=0$. Suppose that $G_0=(A_m;id)$, where $A_m$ is an atom. Then, there exists a fact $(A.b)$ in $P$ such that $\theta_1$ is a unifier of $A_m$ and $A$, and $b$ is the computed truth value. Assume that $\theta_1'$ is an mgu of $A_m$ and $A$. Then, $\theta_1=\theta_1'\gamma$ for some $\gamma$. Clearly, there is a computation for $P$ and $?A_m$ carried out on the same fact $(A.b)$ with length 1, the computed truth value $b$, and the mgu $\theta_1'$. Now suppose that the result holds for length $\leq k-1$, where $k\geq 2$; we prove that it also holds for length $k$. Assume that there is an unrestricted intermediate computation for $P$ and $G_i$ of length $k$ with the sequence of unifiers $\theta_{i+1},...,\theta_n$, where $n=i+k$. Consider the transition from $G_i$ to $G_{i+1}$. Since $k\geq 2$, it cannot be an application of Rule 5 and thus is one of the following cases: ($i$) Either Rule 2 or Rule 3 is applied. Then, $\theta_{i+1}=id$. By the induction hypothesis, there exists an intermediate computation for $P$ and $G_{i+1}$ of length $k-1$ with mgu’s $\theta_{i+2}',...,\theta_n'$ such that $\theta_{i+2}...\theta_n=\theta_{i+2}'...\theta_n'\gamma$ for some $\gamma$. Thus, there is an intermediate computation for $P$ and $G_{i}$ of length $k$ with mgu’s $\theta_{i+1}'=id,\theta_{i+2}',...,\theta_n'$ and $\theta_{i+1}...\theta_n=\theta_{i+1}'...\theta_n'\gamma$. ($ii$) Either Rule 1 or Rule 4 is applied. 
Hence, $\theta_{i+1}$ is a unifier for the selected atom $A$ in $G_i$ and an atom $A'$, which is either a rule head (if Rule 1 is applied) or a logical part of a fact (if Rule 4 is applied) in $P$. There exists an mgu $\theta_{i+1}'$ for $A$ and $A'$ such that $\theta_{i+1}=\theta_{i+1}'\vartheta$ for some $\vartheta$. Therefore, if we use $\theta_{i+1}'$ instead of $\theta_{i+1}$ in the transition, we will obtain an intermediate query $G_{i+1}'$ such that $G_{i+1}=G_{i+1}'\vartheta$ since $G_{i+1}$ and $G_{i+1}'$ are all obtained from $G_{i}$ by replacing $A$ with the same expression, then applying $\theta_{i+1}$ or $\theta_{i+1}'$, respectively. Now consider the transitions from $G_{i+1}$ to $G_{n-1}$. Since they cannot be an application of Rule 5, there are two possible cases: ($a$) All the transitions use only Rule 2 or Rule 3. Thus, all the unifiers are the identity. If we apply the same rule on the corresponding atom (for the case of Rule 2) or on the corresponding body formula (for the case of Rule 3) for each transition from the intermediate query $G_{i+1}'$, we will obtain a sequence $G_{i+1}',...,G_{n-1}'$, and it can be shown that for all $i+1\leq l\leq n-1$, $G_{l}=G_l'\vartheta$. Since the last transition from $G_{n-1}$ to $G_n$ uses Rule 5, $G_{n-1}$ does not have any predicate symbols, and neither does $G_{n-1}'$. Thus, they are identical. As a result, $G_i$ has an intermediate computation $G_i, G'_{i+1},..., G'_{n-1},G_n$ with mgu’s $\theta_{i+1}'$ and the identities. ($b$) There exists the smallest $m$ such that $i+1\leq m\leq n-2$, and the transition from $G_{m}$ to $G_{m+1}$ uses either Rule 1 or Rule 4. Hence, all the transitions from $G_{i+1}$ to $G_{m}$ use only Rule 2 or Rule 3. As above, we can have a sequence $G_{i+1}',...,G_{m}'$ such that for all $i+1\leq l\leq m$, $G_{l}=G_l'\vartheta$. 
Now we will prove the result for the case that Rule 1 is applied in the transition from $G_{m}$ to $G_{m+1}$, and the case for Rule 4 can be proved similarly. The application of Rule 1 in the transition implies that there exists a rule $(A''\leftarrow_j B.r)$ in $P$ such that $\theta_{m+1}$ is a unifier of the selected atom $A_m$ in $G_m$ and $A''$. Since we utilise the usual Prolog renaming of variables along the derivation, we can assume that $\vartheta$ does not act on any variables of $A''$ or $B$. Suppose that $A_m'$ is the corresponding selected atom in $G_m'$; then $A_m=A_m'\vartheta$. Therefore, $\vartheta\theta_{m+1}$ is a unifier for $A_m'$ and $A''$ since $A_m'\vartheta\theta_{m+1}=A_m\theta_{m+1}=A''\theta_{m+1}=A''\vartheta\theta_{m+1}$. Now applying Rule 1 to $G_m'$ on the selected atom $A_m'$ and the rule $(A''\leftarrow_j B.r)$ with the unifier $\vartheta\theta_{m+1}$, we obtain an intermediate query $G_{m+1}'$. Since $(\mathcal{C}_j(B,r))\theta_{m+1}=(\mathcal{C}_j(B,r))\vartheta\theta_{m+1}$ and $G_m=G_m'\vartheta$, we have $G_{m+1}'=G_{m+1}$. Thus, $G_i$ has an unrestricted intermediate computation with the sequence $G_i, G_{i+1}',..., G_m', G_{m+1},..., G_n$ and the unifiers $\theta_{i+1}',\theta_{i+2},...,\theta_{m},\vartheta\theta_{m+1},\theta_{m+2},...,\theta_n$. By the induction hypothesis, $G_m'$ has an intermediate computation with the sequence $G_m',G_{m+1}',...,G_n'$, the mgu’s $\theta_{m+1}',...,\theta_n'$, and the same computed truth value such that $\vartheta\theta_{m+1}\theta_{m+2}...\theta_n=\theta_{m+1}'...\theta_n'\gamma$ for some $\gamma$. 
Since $\theta_{i+2},...,\theta_{m}$ are the identity, $G_i$ has an intermediate computation with the sequence $G_i, G_{i+1}',...,G_m',G_{m+1}',...,G_n'$ and the mgu’s $\theta_{i+1}',\theta_{i+2},...,\theta_{m},\theta_{m+1}'...,\theta_n'$, and we have $\theta_{i+1}...\theta_m\theta_{m+1}\theta_{m+2}...\theta_n=\theta_{i+1}'\theta_{i+2}...\theta_m\vartheta\theta_{m+1}\theta_{m+2}...\theta_n=\theta_{i+1}'\theta_{i+2}...\theta_m\theta_{m+1}'...\theta_n'\gamma$. \[lemma2\] Let $P$ be a program, $?A$ a query, and $\theta$ a substitution. Suppose there exists a computation for $P$ and the query $?A\theta$. Then there exists a computation for $P$ and $?A$ of the same length and the same computed truth value such that, if $\theta_1,...,\theta_n$ are mgu’s from the computation for $P$ and $?A\theta$, and $\theta_1',...,\theta_n'$ are mgu’s from the computation for $P$ and $?A$, then there exists a substitution $\gamma$ such that $\theta\theta_1...\theta_n=\theta_1'...\theta_n'\gamma$. The proof is similar to that in . Suppose that the computation for $P$ and $?A\theta$ has a sequence $G_0=(A\theta;id),G_1, ...,G_n$. Consider the admissible rule to be applied in the transition from $G_0$ to $G_1$. We will prove the result for the case of Rule 1, and it can be proved similarly for the others. The application of Rule 1 implies that there exists a rule $(A'\leftarrow_j B.r)$ in $P$ such that $\theta_1$ is an mgu of $A\theta$ and $A'$. We assume that $\theta$ does not act on any variables of $A'$ or $B$; thus, $\theta\theta_1$ is a unifier for $A$ and $A'$. Now applying Rule 1 to $G_0'=(A;id)$ on the rule $(A'\leftarrow_j B.r)$ with the unifier $\theta\theta_1$, we have $G_1'=G_1$. Therefore, we obtain an unrestricted computation for $P$ and $?A$, which looks like the given computation for $P$ and $?A\theta$, except that the first intermediate query $G_0'$ is different, and the first unifier is $\theta\theta_1$. Now applying the mgu lemma, we obtain the result. 
We also have a lemma which is an extension of Lemma 8.5 in . \[lemma3\] Let $P$ be a program and $?A$ a query. Suppose that $(x;\theta)$ is a correct answer for $P$ and $?A$. Then there exists a computation for $P$ and the query $?A\theta$ with a computed answer $(r;id)$ such that $r\geq x$. The proof is similar to that in . Suppose that $A\theta$ has variables $x_1,...,x_n$. Let $a_1,...,a_n$ be distinct constants not appearing in $P$ or $A$, and let $\theta_1$ be the substitution $\{x_1/a_1,...,x_n/a_n\}$. Since for any model $f$ of $P$, $\overline{f}(A\theta\theta_1)\geq\overline{f}(A\theta)\geq x$, and $A\theta\theta_1$ is ground, $(x;id)$ is a correct answer for $P$ and $?A\theta\theta_1$. By Theorem \[th06\], there exists a computation for $P$ and $?A\theta\theta_1$ with a computed answer $(r;id)$ such that $r\geq x$. Since the $a_i$ do not appear in $P$ or $A$, by replacing $a_i$ with $x_i$ $(i=1,...,n)$ in this computation, we obtain a computation for $P$ and $?A\theta$ with the computed answer $(r;id)$. The completeness of the procedural semantics is stated as follows. Let $P$ be a program, and $?A$ a query. For every correct answer $(x;\theta)$ for $P$ and $?A$, there exists a computed answer $(r;\sigma)$ for $P$ and $?A$, and a substitution $\gamma$ such that $r\geq x$ and $\theta=\sigma\gamma$. Since $(x;\theta)$ is a correct answer for $P$ and $?A$, by Lemma \[lemma3\], there exists a computation for $P$ and the query $?A\theta$ with a computed answer $(r;id)$ such that $r\geq x$. Suppose the sequence of mgu’s in the computation is $\theta_1,...,\theta_n$. Then $A\theta\theta_1...\theta_n=A\theta$. By the lifting lemma, there exists a computation for $P$ and $?A$ with the same computed truth value $r$ and mgu’s $\theta_1',...,\theta_n'$ such that $\theta\theta_1...\theta_n=\theta_1'...\theta_n'\gamma'$, for some substitution $\gamma'$. Let $\sigma$ be $\theta_1'...\theta_n'$ restricted to the variables in $A$. 
Then $\theta=\sigma\gamma$, where $\gamma$ is an appropriate restriction of $\gamma'$. Clearly, the proofs of Mgu and Lifting lemmas here can be similarly applied to fuzzy logic programming and the frameworks of logic programming developed based on it such as *multi-adjoint logic programming* (see, e.g., ). More examples ------------- \[ex5\] Assume that we use the truth domain from the 2-limited HA in Example \[ex99\], that is, $\underline{X} = (X,\{False,True\},\{V, M, P,L\},\leq)$, and have the following knowledge base: $(i)$ The sentence “*A hotel is convenient for a business trip if it is **very** near to the business location, has a reasonable cost at the time, and is a fine building*" is *Very True*. $(ii)$ The sentence “*A hotel has a reasonable cost if either its dinner cost or its hotel rate at the time is reasonable*" is *Very True*. $(iii)$ The sentence “*Causeway hotel is near Midtown Plaza*" is *Little More True*. $(iv)$ The sentence “*Causeway hotel is a fine building*" is *Probably More True*. $(v)$ The sentence “*Causeway hotel has a reasonable dinner cost in November*" is *Very More True*. $(vi)$ The sentence “*Causeway hotel has a reasonable hotel rate in November*" is *Little Probably True*. Let *cn\_ht, ne\_to, re\_co, fn\_bd, re\_di, re\_rt, Bu\_lo, mt, cw* and *T* stand for “convenient hotel", “near to", “reasonable cost", “fine building", “reasonable dinner cost", “reasonable hotel rate", “business location", “Midtown Plaza", “Causeway hotel", and “True", respectively. 
Then, the knowledge base can be represented by the following program: $$\begin{aligned} (cn\_ht(Bu\_lo,Time, Hotel)\leftarrow_G \\ \wedge(V\;ne\_to(Bu\_lo,Hotel), re\_co(Hotel,Time), fn\_bd(Hotel)).VT)\\ (re\_co(Hotel,Time)\leftarrow_L \vee(re\_di(Hotel,Time), re\_rt(Hotel,Time)).VT)\\ (ne\_to(mt,cw).LMT)\\ (fn\_bd(cw).PMT)\\ (re\_di(cw,nov).VMT)\\ (re\_rt(cw,nov).LPT)\end{aligned}$$ Note that although the conjunctions and disjunction are binary connectives, they can be easily extended to have any arity greater than 2. Given a query $?cn\_ht(mt,nov,cw)$, we can have the following computation (the substitution in the computed answer is the identity): $$\begin{aligned} ?cn\_ht(mt,nov,cw)\\ \mathcal{C}_G(\wedge(V\;ne\_to(mt,cw), re\_co(cw,nov), fn\_bd(cw)),VT)\\ \mathcal{C}_G(\wedge(V^{-}(ne\_to(mt,cw)), re\_co(cw,nov), fn\_bd(cw)),VT)\\ \mathcal{C}_G(\wedge(V^{-}(LMT), re\_co(cw,nov), fn\_bd(cw)),VT)\\ \mathcal{C}_G(\wedge(V^{-}(LMT), re\_co(cw,nov), PMT),VT)\\ \mathcal{C}_G(\wedge(V^{-}(LMT),\mathcal{C}_L(\vee(re\_di(cw,nov), re\_rt(cw,nov)),VT), PMT),VT)\\ \mathcal{C}_G(\wedge(V^{-}(LMT),\mathcal{C}_L(\vee(VMT, re\_rt(cw,nov)),VT), PMT),VT)\\ \mathcal{C}_G(\wedge(V^{-}(LMT),\mathcal{C}_L(\vee(VMT,LPT),VT), PMT),VT)\\ \mathcal{C}_G(\wedge^{\bullet}(V^{-}(LMT),\mathcal{C}_L(\vee^{\bullet}(VMT,LPT),VT), PMT),VT)\end{aligned}$$ Using the inverse mappings of hedges in Table \[tab3\], we have $\mathcal{C}_G(\wedge^{\bullet}(V^{-}(LMT), \mathcal{C}_L(\vee^{\bullet}($ $VMT,LPT),VT), PMT),VT)=\mathcal{C}_G(\wedge^{\bullet}(V^{-}(LMT), \mathcal{C}_L(VMT,VT), PMT),VT)=\mathcal{C}_G(\wedge^{\bullet}(LPT,PMT,PMT),VT) = \mathcal{C}_G(LPT,VT)=LPT$. Thus, the computed answer is $(LPT;id)$, and the sentence “*Causeway hotel is convenient for a business trip to Midtown Plaza in November*" is at least *Little Probably True*. Now, if we want to relax the first condition in the sentence $(i)$, we can replace the phrase “*very near to*" by a phrase “*probably near to*". 
Then we obtain a similar program and the following computation: $$\begin{aligned} ?cn\_ht(mt,nov,cw)\\ \mathcal{C}_G(\wedge(P\;ne\_to(mt,cw), re\_co(cw,nov), fn\_bd(cw)),VT)\\ ...\\ \mathcal{C}_G(\wedge^{\bullet}(P^{-}(LMT),\mathcal{C}_L(\vee^{\bullet}(VMT,LPT),VT), PMT),VT)\end{aligned}$$ Using the inverse mappings in Table \[tab3\], we have a computed answer $(PMT;id)$. Similarly, if we remove the hedge for the first condition in the sentence $(i)$, we obtain a similar program and the following computation: $$\begin{aligned} ?cn\_ht(mt,nov,cw)\\ \mathcal{C}_G(\wedge(ne\_to(mt,cw), re\_co(cw,nov), fn\_bd(cw)),VT)\\ ...\\ \mathcal{C}_G(\wedge^{\bullet}(LMT,\mathcal{C}_L(\vee^{\bullet}(VMT,LPT),VT), PMT),VT)\end{aligned}$$ Thus, we have a computed answer $(LMT;id)$. It can be seen that with the same hotel (*Causeway*), the time (*November*), and the business location (*Midtown Plaza*), by similar computations, if we put a higher requirement on the condition “*near to*", we obtain a lower truth value. More precisely, with the conditions “*very near to*", “*near to*", and “*probably near to*", we obtain the truth values $LPT$, $LMT$, and $PMT$, respectively, and $LPT<LMT<PMT$. This is reasonable and in accordance with common sense. Applications ============ A data model for fuzzy linguistic databases with flexible querying ------------------------------------------------------------------ Information stored in databases is not always precise. Two important issues in this field are the representation of uncertain information in a database and the provision of more flexibility in the information retrieval process, notably via the inclusion of linguistic terms in queries. Also, the relationship between deductive databases and logic programming has been well established. Therefore, fuzzy linguistic logic programming (FLLP) can provide a tool for constructing fuzzy linguistic databases equipped with flexible querying. 
The model is an extension of Datalog [@Ul88] without negation and possibly with recursion, which is similar to that in , called *fuzzy linguistic Datalog* (FLDL). It allows one to find answers to queries over a *fuzzy linguistic database* (FLDB) using a *fuzzy linguistic knowledge base* (FLKB). An FLDB is a (crisp) relational database in which an additional attribute is added to every relation to store a linguistic truth value for each tuple, and an FLKB is a *fuzzy linguistic Datalog program* (FLDL program). Here, we also work with *safe* rules, i.e., every variable appearing in the head of a rule also appears in its body. An FLDL program consists of finitely many safe rules and facts. Moreover, in an FLDL program, a fuzzy predicate is either an *extensional database* (EDB) predicate, the logical part of a fact, whose relation is stored in the database, or an *intensional database* (IDB) predicate which is defined by rules, but not both. We can extend the *monotone subset* of relational algebra [@Ul88], consisting of selection, Cartesian product, equijoin, projection, and union, to the case of our relations and add a new operation called *hedge modification*. We call this collection of operations *fuzzy linguistic relational algebra* (FLRA). Based on the operations, we can convert rules with the same IDB predicate in their heads to an expression of FLRA; the expression yields a relation for the predicate. Furthermore, it can be observed that the way the expression calculates the truth value of a tuple in the relation for the IDB predicate is the same as the way the immediate consequences operator $T_P$ does for the corresponding ground atom [@PV01]. 
Thus, similar to the classical case, the FLRA augmented by the immediate consequences operator is sufficient to evaluate recursive FLDL programs, and every query over an FLKB represented by an FLDL program can be exactly evaluated by finitely iterating the operations of FLRA from a set of relations for the EDB predicates. Threshold computation --------------------- This is the case when one is interested in looking for a computed answer to a query with a truth value not less than some threshold $t$. Assume that at a certain point in a computation we need to find an answer to the selected atom $A_m$ with a threshold $t_m$. Since $\mathcal{C}_{c}(x,y)\leq min(x,y)$, for $c\in \{L,G\}$, the selected rule or fact which will be used in the next step must have a truth value not less than $t_m$. If there is no such rule or fact, we can cut the computation branch. For the case that $A_m$ will be unified with the rule head of such a rule, the truth value of the whole body of the rule must not be less than $t_{m+1}=inf\{b|\mathcal{C}(b,r)\geq t_m\}$, where $r$ is the truth value of the rule and $r\geq t_m$. If the implication used in the rule is the Gödel implication, then $t_{m+1}=t_m$; if it is the Łukasiewicz implication, then $t_{m+1}=v_{n+k-j}$, where $r=v_j, t_m=v_k$ are two values in the truth domain $\overline{X}$, and $v_n=1$. Since $n\geq j\geq k$, we have $t_{m+1}\geq t_m$, and if $r<1$, we have $t_{m+1}>t_m$. Recall that a rule body can be built from its components using the conjunctions, the disjunction, and hedge connectives. Therefore, we have: ($i$) For the case of Gödel conjunction, $t_{m+1}$ is the next threshold for each of its components, and if $t_{m+1}>t_m$, for all $m$ (this will happen if all the implications are Łukasiewicz, and all the truth values of rules are less than 1), we can estimate the depth of the search tree according to the threshold $t$ and the highest truth value of rules. 
($ii$) For the case of Łukasiewicz conjunction, if all the truth values of the facts in the program are less than 1 (thus the computed truth value of any component in any body formula is less than 1), the next threshold for each of the components is greater than $t_{m+1}$. Hence, similar to the above case, we can also work out the depth of the search tree. ($iii$) For the case of disjunction, one of the components of the rule body must have a computed truth value of at least $t_{m+1}$. ($iv$) Finally, the problem of finding a computed truth value for a hedge-modified formula $hB$ with a threshold $u$ can be reduced to that of $B$ with a new threshold $u'=inf\{v|h^{-}(v)\geq u\}$. Fuzzy control ------------- Control theory is aimed at determining a function $\underline{f} :X\rightarrow Y$ whose intended meaning is that given an input value $x$, $\underline{f}(x)$ is the correct value of the control signal. A fuzzy approach to control employs an approximation of such an (ideal) function by a system of fuzzy IF-THEN rules of the form “IF $x$ is $A$ THEN $y$ is $B$”, where $A$ and $B$ are labels of fuzzy subsets. In the literature, there are several attempts to reduce fuzzy control to fuzzy logic in the narrow sense. proposed an interesting reduction in which a fuzzy IF-THEN rule “IF $x$ is $A$ THEN $y$ is $B$” is translated into a fuzzy logic programming rule $(good(x,y)\leftarrow A(x)\wedge B(y).\lambda)$, where $A$ and $B$ are now considered as fuzzy predicates. The truth value $\lambda$ is understood as the *degree of confidence* of the experts in such a rule, and by default, $\lambda =1$. The intended meaning of the new predicate $good(x, y)$ is that given an input value $x$, $y$ is a *good* value for the control variable. Therefore, the information carried by a system of fuzzy IF-THEN rules can be represented by a fuzzy logic program. More precisely, a system of fuzzy IF-THEN rules: $$\begin{aligned} \mbox{IF }x\mbox{ is }A_1\mbox{ THEN }y\mbox{ is }B_1 \nonumber\\ ... 
\label{sys01}\\ \mbox{IF }x\mbox{ is }A_n\mbox{ THEN }y\mbox{ is }B_n \nonumber\end{aligned}$$ can be associated with the following program $P$: $$\begin{aligned} (good(x,y)\leftarrow A_1(x)\wedge B_1(y).1) \nonumber\\ ... \nonumber\\ (good(x,y)\leftarrow A_n(x)\wedge B_n(y).1) \label{sys02} \\ (A_i(r).r_{A_i}) \mbox{, for } r\in X, i=1...n \nonumber\\ (B_j(t).t_{B_j}) \mbox{, for } t\in Y, j=1...n \nonumber\end{aligned}$$ where $r_{A_i}$ is the degree of truth to which an input value $r$ satisfies a predicate $A_i$, and $t_{B_j}$ is the degree of truth to which an output value $t$ satisfies a predicate $B_j$. Each element $r\in X$ or $t\in Y$ is considered as a constant. Thus, the language of $P$ is a two-sorted predicate one, and we have two Herbrand universes $U^{X}_P=X$ and $U^{Y}_P=Y$. Since the truth values of the rules are all equal to 1, Łukasiewicz and Gödel t-norms yield the same results in computations; therefore, without loss of generality we can use the same notation for the implications. By iterating the $T_P$ operator from the bottom interpretation $\bot$, we obtain the Least Herbrand model $M_P$ of $P$. In fact, it can be shown that $M_P=T^{2}_P(\bot)$. Let us put $\mathcal{G}(r,t)=M_P(good(r,t))$. Indeed, $\mathcal{G}(r,t)$ can be interpreted as the degree of preference on the output value $t\in Y$, given the input value $r\in X$. Therefore, the purpose of the program $P$ is not to compute the ideal function $\underline{f}: X\rightarrow Y$, but to define a fuzzy predicate *good* expressing a graded opinion on a possible control value $t$ w.r.t. a given input value $r$. Clearly, given an input value $r$, it is best to take a value $t$ that maximises $\mathcal{G}(r,t)$. Note that the value $\mathcal{G}(r,t)$ is not the exact truth value of $good(r,t)$, but a lower bound to it. In other words, we can say that given $r$, $t$ can be proved to be good at least at the level $\mathcal{G}(r,t)$. 
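Under the translation above, with all rule truth values equal to 1 and the Gödel conjunction in rule bodies, the preference degree reduces to $\mathcal{G}(r,t)=max_i\,min(r_{A_i}, t_{B_i})$. The following Python sketch illustrates this; the membership tables are made-up illustrative data, and numeric degrees on $[0,1]$ are used instead of linguistic indexes purely for brevity:

```python
# Sketch: the preference degree G(r, t) of the control program.
# With rule truth values 1 and Goedel (min) conjunction in rule bodies,
# G(r, t) = max_i min(r_{A_i}, t_{B_i}).
# The membership tables below are made-up illustration data.

A = [  # degree to which each input value satisfies A_1, A_2
    {"low": 0.8, "high": 0.1},
    {"low": 0.2, "high": 0.9},
]
B = [  # degree to which each output value satisfies B_1, B_2
    {"slow": 0.7, "fast": 0.2},
    {"slow": 0.3, "fast": 1.0},
]

def preference(r, t):
    """Lower bound G(r, t) on the truth value of good(r, t)."""
    return max(min(A_i[r], B_i[t]) for A_i, B_i in zip(A, B))

def best_output(r, outputs=("slow", "fast")):
    """Pick the output value that maximises the preference degree."""
    return max(outputs, key=lambda t: preference(r, t))
```

For instance, for the input value "high" the sketch prefers the output "fast", whose preference degree $max(min(0.1,0.2),min(0.9,1.0))=0.9$ dominates that of "slow" ($0.3$).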
It is worth noticing that in fuzzy control, it is quite often that the labels of fuzzy subsets in a system of fuzzy IF-THEN rules, i.e., $A_i$ and $B_i$ in the system (\[sys01\]), are hedge-modified ones, e.g., *Verylarge* and *Veryfast*. Thus, our language can be used to represent the associated program in a very natural way since we allow using linguistic hedges to modify fuzzy predicates. Clearly, in such a program, all the facts $(A_i(r).r_{A_i})$ and $(B_j(t).t_{B_j})$ we need are only for primary predicates (predicates without hedge modification) such as *large* or *fast*, but not for all predicates as in the case of fuzzy logic programming. Implementation ============== In the literature, there has been research on *multi-adjoint logic programming* (MALP) (see, e.g., ), which is an extension of fuzzy logic programming in which truth values can be elements of any complete bounded lattice instead of the unit interval. Also, there have been several attempts to implement systems where multi-adjoint logic programs can be executed. Due to the similarity between MALP and FLLP, the implementation of a system for executing fuzzy linguistic logic programs can be carried out based on the systems built for multi-adjoint ones. In the sequel, we sketch an idea for implementing such a system, which is inspired by the FLOPER (Fuzzy LOgic Programming Environment for Research) system described in . The main objective is to translate fuzzy linguistic logic programs into Prolog ones which can be safely executed inside any standard Prolog interpreter in a completely transparent way. We take the following program as an illustrative example: $$\begin{aligned} (gd\_em(X)\leftarrow_G \wedge_L(V\;st\_hd(X),P\;hira\_un(X)).VMT)\\ (hira\_un(ann).VT)\\ (st\_hd(ann).MT)\end{aligned}$$ For simplicity, instead of computing with the truth values, we can compute with their indexes in the truth domain. 
Thus, the program can be coded as: $$\begin{aligned} gd\_em(X)\;<godel\;\; \&luka(hedge\_v(st\_hd(X)),hedge\_p(hira\_un(X))) \;with\; 38.\\ hira\_un(ann)\; with\; 41.\\ st\_hd(ann)\; with\; 36.\end{aligned}$$ where 38, 41, and 36 are respectively the indexes of the truth values $VMT$, $VT$, and $MT$ in the truth domain in Example \[ex99\]. During the parsing process, the system produces Prolog code as follows: $(i)$ Each atom appearing in a fuzzy rule is translated into a Prolog atom extended by an additional argument, a truth variable of the form $\_TV_i$, which is intended to store the truth value obtained in the subsequent evaluation of the atom. $(ii)$ The truth functions of the binary connectives and the t-norms can be easily defined by standard Prolog clauses as follows: $$\begin{aligned} and\_godel(X,Y,Z)\; :-\; (X=<Y,Z=X;X>Y,Z=Y). \\ and\_luka(X,Y,Z)\; :-\; H\; is\; X+Y-n,(H=<0,Z=0;H>0,Z=H). \\ or\_godel(X,Y,Z)\; :-\; (X=<Y,Z=Y;X>Y,Z=X). \end{aligned}$$ where $n$ is the index of the truth value 1 in the truth domain (in Example \[ex99\], $n= 44$). Note that $and\_godel$ is the t-norm $\mathcal{C}_G$ as well as the truth function of the conjunction $\wedge$ ($\wedge_G$) while $and\_luka$ is the t-norm $\mathcal{C}_L$ and also the truth function of the conjunction $\wedge_L$, and $or\_godel$ is the truth function of the disjunction $\vee$. Inverse mappings of hedges can be defined by listing all cases in the form of ground Prolog facts (except inverse mappings of 0, $W$, and 1). More precisely, the inverse mappings in Table \[tab3\] can be defined as follows: $$\begin{aligned} inv\_map(H,0,0).\\ ...\\ inv\_map(l,17,21).\\ ...\\ inv\_map(v,33,25).\\ ...\\ inv\_map(H,44,44).\end{aligned}$$ where 33, 25, 17, and 21 are indexes of the values $c^{+}$, $Lc^{+}$, $LLc^{-}$, and $VLc^{-}$, respectively; the fact $inv\_map(v,33,25).$ defines the case $V^{-}(c^{+})=Lc^{+}$ while the fact $inv\_map(l,17,21).$ defines the case $L^{-}(LLc^{-})=VLc^{-}$. 
The facts $inv\_map(H,0,0).$, $inv\_map(H,22,22).$, and $inv\_map(H,44,44).$, where $H$ is a variable of hedges, define the mappings: for all $h$, $h^{-}(0)=0$, $h^{-}(W)=W$, and $h^{-}(1)=1$. $(iii)$ Each fuzzy rule is translated into a Prolog clause in which the calls to the atoms appearing in its body must be in an appropriate order. More precisely, the call to the atom corresponding to an operation must be after the calls to the atoms corresponding to its arguments in order for the truth variables to be correctly instantiated, and the last call must be to the atom corresponding to the t-norm evaluating the rule. For example, the rule in the previous program can be translated into the following Prolog clause: $$\begin{aligned} gd\_em(X,\_TV0)\; :-\; st\_hd(X,\_TV1), inv\_map(v,\_TV1,\_TV2), \\ hira\_un(X,\_TV3), inv\_map(p,\_TV3,\_TV4),\\ and\_luka(\_TV2,\_TV4,\_TV5), and\_godel(\_TV5,38,\_TV0).\end{aligned}$$ $(iv)$ Each fuzzy fact is translated into a Prolog fact in which the additional argument is just its truth value instead of a truth variable. For the above program, the two fuzzy facts are translated into two Prolog facts $hira\_un(ann,41)$ and $st\_hd(ann,36)$. $(v)$ A query is translated into a Prolog goal that is an atom with an additional argument, a truth variable to store the computed truth value. For instance, the query $?gd\_em(X)$ is translated into the Prolog goal: $?-\;gd\_em(X,Truth\_value)$. Given the above program and the above query, a Prolog interpreter will return a computed answer $[X=ann,Truth\_value=29]$, i.e., we have $(gd\_em(ann).PPT)$. Conclusions and future work =========================== We have presented fuzzy linguistic logic programming as a result of integrating fuzzy logic programming and hedge algebras. 
The main aim of this work is to facilitate the representation and reasoning on knowledge expressed in natural languages, where vague sentences are often assessed by a degree of truth expressed in linguistic terms rather than in numbers, and linguistic hedges are usually used to indicate different levels of emphasis. It is well known that in order for a formalism to model such knowledge, it should address the twofold usage of linguistic hedges, i.e., in generating linguistic values and in modifying predicates. Hence, in this work we use linguistic truth values and allow linguistic hedges as predicate modifiers. More precisely, in a fuzzy linguistic logic program, each fact or rule is graded to a certain degree specified by a value in a linguistic truth domain taken from a hedge algebra of a truth variable, and hedges can be used as unary connectives in body formulae. Besides the declarative semantics, a sound and complete procedural semantics which directly manipulates linguistic terms is provided to compute a lower bound to the truth value of a query. Thus, it can be regarded as a method of computing with words. A fixpoint semantics of logic programs is defined and provides an important tool to handle recursive programs, for which computations can be infinite. It has been shown that knowledge bases expressed in natural languages can be represented by our language, and the theory has several applications such as a data model for fuzzy linguistic databases with flexible querying, threshold computation, and fuzzy control. Finding more applications for the theory and implementing a system where fuzzy linguistic logic programs can be executed are directions for our future work. , [Hölldobler, S.]{}, [and]{} [Tran, D. K.]{} 2006. The fuzzy linguistic description logic ${ALC}_{FL}$. In [*Proc. of the 11th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems ([IPMU]{}’2006)*]{}. 2096–2103. 2001\. 
Vox Humana

Ted Morrissey

Her brother Harry hadn’t written from Korea for more than a year, or at least none of his letters got through. Then out of the blue he called, said he was in South Carolina, he was taking a train, then a bus and he’d be in Crawford at the Trailways terminal in three days: Could Annette or Tim pick him up? Tim and the car had been gone for three weeks and two days at that point. It was too complicated to explain over the phone, especially long distance, so Annette said she would see her brother in Crawford. It required her taking a day off from school and borrowing Carl Reynolds’s Ford, but she managed it. She arrived at the bus terminal a few minutes late but suspected the bus would be even later. However, she’d only stepped from the car when she spotted Harry on a bench in front of the station, which also sat vacant and lonely. Harry was lost in his own thoughts and didn’t notice her right away, which gave Annette a few moments to look at her brother and truly see him, almost the way a stranger might take him in. He was thin, perhaps even wiry beneath clothes that looked like hand-me-downs from an older and somewhat larger brother, nonexistent. His hair was brown with—was it possible?—the beginning streaks of white at his temples. It’d been cut short but that was weeks ago and the neat haircut had been growing of its own accord. Trousers of summer-weight wool, soon to be unquestionably out of season; a cotton shirt with a button-down collar, unbuttoned, and zippered jacket, unzipped, also soon to be too light for the time of year. His shoes may have been new but definitely in need of an energetic polishing. Harry was smoking, something he must’ve picked up in the service. More accurately, he was watching the lit cigarette between his fingers, as if a complicated thing which warranted careful study. Annette was reluctant to interrupt his seemingly peaceful contemplation, but Harry must’ve sensed her presence, or someone’s, and he looked up.
Sis, he said, not so much in greeting as in practicing. He hadn’t seen his sister in nearly five years, and perhaps with all that had transpired in the intervening years he needed a sort of rehearsal to reprise his role as kid brother. Look at you, said Annette kissing him on the cheek, rough with stubble. You’re so skinny. The scent of his cigarette elicited a recollection of Tim, who would smoke after dinner, and the memory was unpleasant and unwelcome. The Army isn’t known for its haute cuisine. Quite the opposite. He exhaled a final breath of smoke and stepped on the butt with the toe of his shoe. The car’s just over here. He slung an army-green duffel over his shoulder and picked up a medium-size suitcase, old-fashioned, with leather straps—his only possessions it would seem. On the drive back Annette tried to make small talk, which in itself felt strange, to speak to her brother as she would a virtual stranger, but Harry’s responses were perfunctory at best. Annette wondered if he was upset to be in the passenger seat. Maybe he’d become one of those men who felt it was a fact of biology that the male of the species should always take the wheel. Tim believed it. After a while, Annette decided her brother’s taciturnity was more a matter of his preferring quiet. So she left him to his own thoughts, and he seemed content to watch the old familiar countryside, rushing past in its early orange hues, the fields dotted here and there with farm animals. You can smoke if you like—probably best to roll down the window. Harry took out his pack of Chesterfields. ㅑ Annette had only begun to use the spare bedroom as a what-not room—she’d set up her ironing-board, for example—so it was easy enough to prepare for Harry’s arrival when he called. He hadn’t asked to stay but she assumed, and apparently correctly. He didn’t inquire about Tim, either, or about anything for that matter. Annette imagined Tim’s absence from her life was abundantly clear. 
Harry put away his few things almost as soon as they got back, and easy as that they were roommates. Annette wondered if she should have a welcome-home party for Harry, but growing up he’d had few close friends, and fewer still remained in the village. Zane Robbins’s bone spurs kept him out of the service, as did Herbert Green’s vertigo. Zane worked the family farm. Herb lived at home, taking correspondence courses. There was Beth Ann Ferguson, too, the school librarian. Harry and Beth Ann had gone to a dance or two together, nothing serious. As far as Annette knew, Beth Ann wasn’t seeing anyone—nor had she ever indicated that she wanted to be. Harry had been a quiet, bookish boy, yet he didn’t find school especially interesting. He would have preferred to stay home, reading and studying whatever he pleased. It wasn’t surprising when the Army attached him to a unit specializing in linguistics, not as a translator but as a transcriber. Harry lugged his portable typewriter and folding table and chair from one encampment to another, usually near the front line. His typewriter’s steel case saved him from injury twice, once deflecting a wildly off-target bullet, another time a piece of shrapnel as long as his thumb. The case bore the scars of those close calls, along with the innumerable hard knocks from time in-country, each leaving its mark on the Army-issue Olympia that he returned to the quartermaster before his discharge. The carriage-return arm was bent, the carriage itself tended to slip when underlining, and the ribbon had been worn so thin it was translucent in spots, well past being re-inked. Harry had managed his final transcript and submitted it along with its carbons to the lieutenant who was his immediate superior. For months it seemed that his only words were those that belonged to the North and South Koreans who had been interrogated or interviewed, respectively. Their stories, whether mostly true or mostly false, had been his stories, their lives his life.
He felt as if he only existed when he was transcribing, and the brief interval between scripts was a time of silent incompleteness and waiting to be reanimated by the story of the next bearer of words. Meanwhile, the horrors of the conflict multiplied around him like an earthquake’s suffocating rubble. The only relief was immersing himself in the transcribed lives of others. The night-colored letters on the paper, hammered there by the Olympia’s solid strokes, formed a fortress against the bodies torn asunder, friend and foe alike, combatant and collateral kill, men and women, children. Even the images of slain animals haunted him, family pets, and livestock slaughtered in fields and left to rot. By the time Annette met him at the bus station in Crawford Harry hadn’t completed a transcript in nearly two months, weeks exposed to the world without periodic relief inside the sanctuary of others’ words. Their stories, no matter how horrific the images they related, were like invocations that protected his fragile sanity. Harry could slip behind the walls of words and breathe more easily for a few blessed minutes. ㅓ Annette had left Harry alone to go to school. She probably thought he was sleeping in on his first morning back home, but in fact he’d been awake for hours listening to the cricket chirp of night transform to the birdsong of morning. Then Annette was up rushing through her routine. He loved his sister and appreciated her giving him a place to stay, truly, but the idea of speaking to her—to anyone—was overwhelming. Conversation was exhausting. Sometimes the exhaustion came from trying to think of something to say, and sometimes it came from holding back the torrent of words wanting to break forth. For months Harry had found himself at one extreme or the other. When he was transcribing, the words of others were enough and he didn’t feel the crippling pressure to unearth or embank his own. At last Annette left for school, and the house was quiet. 
Harry dressed then went to the kitchen. He checked the icebox and breadbox. He might eat something eventually but he needed coffee and a smoke. He prepared the coffeepot and lit the burner with a kitchen match. While he waited for the water to boil, he went to the bookcase in his sister’s small dining room and perused the selection of hardcovers and paperbacks. Several of the latter were Armed Services Editions, leftovers from the previous war still circulating a decade later. Harry ran his finger along the spines, listening for the coffeepot’s boil. One title in particular arrested his attention: Carl Sandburg’s Chicago Poems. He took the well-worn paperback from the shelf and in a few minutes was seated on the back porch with the Sandburg, his coffee, and cigarettes. It was a cool morning; leaves from an elm lay on the steps, a scarlet harbinger of winter’s slow approach. The book of poetry was divided into sections, and the one titled ‘War Poems (1914-1915)’ stood out. Obviously they were inspired by the Great War, but he suspected the Great War was as terrible as his own, or any other. He lit a cigarette and began reading. Later, one image especially would stay with him—Red drips from my chin where I have been eating. Not all the blood, nowhere near all, is wiped off my mouth. He sat on the porch for a long while with the poems and the thoughts they conjured. Mrs. Holcomb next door had come out and spent some time cleaning fallen leaves from her flowerbeds and primping the mums and asters. She pretended not to see Harry, not wanting to disturb his privacy perhaps; or maybe she truly didn’t see him, half hidden by the porch itself. ㅕ On her way home from school, Annette stopped at the grocery for a piece of sirloin and some new potatoes. Normally her main meal was the hot one prepared by the school cooks; then at night she would nibble on cold cuts and crackers or some fruit.
When Tim left she wondered if it was because of her inadequate dinners, at least in part. But, no, he seemed generally unhappy with everything and everyone, inside and outside their home. It wasn’t as simple as poorly prepared meals. Still, it sometimes nagged her. Maybe because so many of the wives took such pride in their ability to cook, and several were associated with a particularly popular dish: Mrs. Reynolds’s rabbit stew, Mrs. Abernathy’s quail in white gravy, Mrs. Johnston’s minted lamb-chops, Mrs. Phillips’s fried chicken and cornbread, Mrs. Whittle’s chicken and dumplings, Mrs. Smythe’s morel soufflé, Mrs. Moreland’s honey-glazed pork roast. . . . On one level Annette knew Tim’s happiness was beyond her influence—no effort great or small seemed capable of achieving it for long—yet his discontent picked at her peace of mind. She would think of the eagle that tortured bound Prometheus, little by little without interruption. Tim’s leaving triggered a tide of complex emotions, some sharp, some subtle, but the most immediate and most profound was simple relief: The harpy had suddenly ceased plucking at her liver, and it was, in a word, glorious.

These thoughts and feelings returned to her as she was preparing the meal for Harry and her, and she examined them like exhibits in a museum, or, better, like specimens in a zoo—for they were safely sealed off but still very much alive, and their quickness lent them an element of danger, should one or two of the more predatory types get loose.

Annette scraped cubed potatoes from the cutting-board into the pan of salted boiling water. Harry was in the living room. The evening news was on, the picture more snowy than usual. It didn’t seem to matter: He sipped at his can of Falstaff and stared at the fuzzy newsman, the Zenith’s sound all but off. She considered attempting a conversation—about something, anything—but her brother appeared content so she elected to let him be and focus on making their dinner.
While she stirred the roiling potatoes she thought of all the times she wanted to talk with Tim—always in an attempt to make things better, it seemed in retrospect, either via a conversation that dealt with a problem or one that deflected from it—eventually, however, the futility of talk, of any kind, became clear, and it was simpler to keep within her own thoughts. Even though this felt different, Annette resolved that she wouldn’t let avoiding conversations with her brother become a pattern. They ate at the dining-room table. Harry turned off the television, and he poured his beer into a glass. Drinking from the can at the table must’ve seemed poor etiquette. Everything smells great, Netta. It was the first time he’d used her nickname since coming home, but rather than a sign of normality it felt like a forced attempt to mimic it. Thank you, she said. I’m afraid I’m out of practice in the kitchen. She thought that Harry must be wondering about Tim’s absence, but he hadn’t said a word. Did he wonder, did he assume, or did he not even notice? Some of Tim’s things remained. Old clothes in the closet of the spare room (now, Harry’s room), odds and ends of tools in the shed, along with various shovels and spades and the mower, an item or two in the back of this drawer or that. To Annette these stray remainders were reminders of Tim’s absence, but perhaps to others, to Harry, they were as unworthy of note as a burned-out bulb, or the unraked leaves in the corner of the yard. She wondered at times if there was something wrong with her. Shouldn’t she be devastated Tim had left and their marriage of six years was through? Shouldn’t she become weepy at the thought of it? Shouldn’t she be lonely, and desperate to fill some vacuous void? Everything told her she should be all of these things and more. In truth, though, she felt only serenity and the strange stirrings of contentment. Now her life could have meaning which she herself ascribed to it.
She was just beginning to understand Tim’s abandonment and what it may mean for her when Harry called about his discharge and his needing a ride. As they sat across the table from one another, eating the sirloin and mashed potatoes, Annette found herself wondering how long her brother would be staying with her. It was an uncharitable line of thought and she tried to dismiss it altogether. The click and scrape of Harry’s fork as he scooped up tinefuls of potato punctuated her selfish speculations. You’re almost out. Would you like another Falstaff? Harry looked at the finger of yellow liquid in his glass. Sure, sis, thanks. Annette pushed herself away from the table and her unkind thoughts. She retrieved a can from the icebox, punctured its top twice, big and small; then returned to the table and poured some of the beer into her brother’s glass. Her water glass was empty so she poured the rest of the beer for herself. Annette took her seat: I was thinking, we should have a welcome-home party for you, give everyone a chance to say hello all at once. I’m sure we could use the basement of the church. I’ll speak with Pastor Phillips. She didn’t say: Letting everyone know you’re back may prompt someone to offer you a job. She tried to imagine what Harry would’ve done if he hadn’t gone off to Korea. It was difficult to imagine. He’d always been a bookish, quiet sort of child, though not much interested in school unless it was a subject he especially liked. Mythology, for example, and the Trojan War. Homer’s Iliad was a favorite and of course Odysseus’ great wanderings. She recalled his making a cyclops out of papier-mâché. It was nearly three feet tall and required most of their immediate neighbors’ old newspapers, torn into strips. ㅗ Plans were made for a potluck in the church’s basement. Pastor Phillips insisted that the Methodist Ladies Auxiliary would supply the main course and cake. Guests could bring salads and sides. 
The pastor imagined that the whole village would turn out to welcome Harry home. The returns of World War Two boys, less than a decade earlier, were still fresh in most folks’ minds. Those were some bona fide parties, said Pastor Phillips, grinning a bit mischievously beneath his thick glasses. He would have Mrs. Holcomb put a notice about Harry’s party in the bulletin. Between that and word-of-mouth, the whole village will be in the know in no time. The pastor enjoyed his turn of phrase.

Annette thanked him before hurrying from his office to get to school on time. ㅛ Harry had spent the morning reading, thinking, smoking, drinking coffee. The sky was gray and mildly threatening as he considered it from his sister’s back porch. They were nearly out of coffee, and he was down to his last two cigarettes. They were low on bread also, and cornflakes. All the time sitting, relaxing, had given way to a goading restlessness, so he decided to walk to the grocery to restock essentials. He was already feeling a trifle guilty about staying with Annette and not pulling his own weight, especially since Tim seemed to be out of the picture. He felt that he should say something about his absent brother-in-law but he didn’t want to pry—that was one excuse he allowed himself. Really, though, it had more to do with his belief that if he broached the subject, it may lead to his sister confessing a cascade of heavy, complicated emotions, and it would be more than he could bear. There was so much heaviness and complication inside of him already, he sensed that another straw more may be his undoing. He may collapse beneath it all. Besides, he never cared much for Tim Wilson, and it was a pleasant surprise not having him around. Annette kept a sturdy wicker basket with long handles on a hook next to the backdoor for the express purpose of carrying items home from the grocery store. Harry knew, in part, because she’d had the same basket for years.
He removed it from its hook and began walking to Wilson’s Grocery (the owner was a cousin, once or twice removed, of Annette’s husband). It was a short walk along Willow Street to Main, made shorter by Harry’s hurrying: the skies looked more threatening than they had from the porch. Large, icy drops were beginning to pelt him as he reached the store. He regretted his timing as he didn’t know how long he’d be pinned down by the rain. There was a young woman working the counter whom he didn’t recognize. How could that be? It was a place where everyone knew everyone. He was Wilson’s only customer at the moment. Harry nodded hello to the blond-haired girl; she was working a crossword puzzle. He began perusing the small store’s slim inventory. Outside, the rain intensified. Harry could hear it monsooning against the roof, raindrops mixed with pellets of hail. It was warm inside the store. He could feel sweat on his neck. The girl seemed unconcerned about the weather as she filled in letters of her puzzle with a yellow pencil. Light glinted off canisters of soup. Harry blinked at the shards of white. He heard the first grumble of thunder. The girl looked up too. She smiled briefly—perhaps reassuringly—then returned to her crossword. Harry’s mouth was dry. He wanted a long drink of beer. A flash of lightning broke across the gray, instantly followed by the thunder’s crash. Harry dropped the basket and unbuttoned the top button of his shirt: He was suffocating. His t-shirt was soaked under his arms and along his ribs and spine. He was near a bin of vegetables. He fought an instinct to squat behind it, shielding himself from the flashing windows. Another bolt then bellow of thunder. He put his hand on the wooden bin for support. A head of purple cabbage shifted and fell to the floor. As he watched it topple, the bin’s wet scent of boggy earth rose around him like a closing bag. Another head fell. 
He may have kicked it as he lurched forward, or it may have been that his legs felt as stiff as crutches—but he lost his balance and groped for a handhold. He found a rack of sealed jars. One or two fell and glass shrapneled across the floor as the briny smell of pickles rippled over him. He couldn’t breathe. Harry fumbled his way out of the store, which felt as claustrophobic as a coffin. More items may have fallen and burst in his wake. Down the streets rain running in streams, calling his name, someone. Red umbrella, porch, Annette’s, blanket on his shoulders, Annette’s. Smoking. ㅜ Donald and Rita Gale (née Hopkins) were both only-children of parents who had them later in life. Thus Annette Elizabeth and Harrison Scott Gale grew up with no extended family to speak of. Their father had a distant cousin, a nun, whom they met once at the funeral of a still more distant cousin, a salesman of some sort, door-to-door. Don Gale passed (coronary) when Annette and Harry were in their early teens; Rita five years later (cancer). They’d been dutiful parents but not doting. Annette, who was a voracious reader starting in childhood with the works of Lewis Carroll, soon advancing to Jane Austen then the Brontës and beyond, came to think of her parents, especially her father, as affectionate in a British sort of way—though he was as Midwestern American as a boy and then a man could be. Her mother wasn’t quite as aloof as her father, but she seemed to prefer her husband’s company to her children’s, and she was devastated nearly to the point of petrification at his sudden passing. In Annette’s recollection her mother had died shortly after her father. She would have to remind herself there was a five-year interval, during which her mother seemed morbidly intent on joining her spouse in death.
Rita showed the preoccupation of a commuter who’d missed her train but was determined to catch the next. ㅠ Sometimes Harry saw the Koreans who were the subjects of the translations he typed into transcripts, in triplicate. Seeing them was more likely if they were South Koreans. But it usually took several days or even weeks for the translated interviews to reach him for transcription. So he rarely had any idea whose words he was reading. Over time, his mind began selecting one of a small collection of Korean faces as the subject. While working on a transcript, Harry would automatically assign a face to the voice he heard in the words. He thought of them almost as masks stored in a theatrical cabinet, except of course real, living faces. The mask—thickset or thin, bright-eyed or beady, saintly or sinister—spoke the words he read on the pages, the Koreans’ comments and confessions, rendered into English by the translators, most of whom broadcast an air of superiority. They delivered the words Mosaically, as if Jehovah’s own, not to be doubted, leave be amended. One could almost smell the acrid scent of a burned bush upon the pages. After a time the Korean faces began appearing in his dreams, whispering to him in English with their distinct accents. They floated in an ethereal sea, a space all their own, a sort of in-between but with no sense of what lay before or beyond. In the beginning, they whispered snippets of the transcripts he’d typed on the portable Olympia. One night, though, they started murmuring of other things, barely audible in the rush of ether, only partly formed, mostly pieces of images, a helter-skelter of tiles from incomplete mosaics. Harry woke in the dark tent knowing something was different—knowing he was different.

More and more the faces spoke to him when he was awake, in the background of his consciousness, like distant voices in a crowded theater before the opening curtain.
He’d catch a word or phrase, not enough to constitute a whole thought, just enough to remain frustratingly at the edge of intelligibility. He thought of the cast who occupied the fringes of his psyche as his own Greek chorus, a joke meant to lessen his fears about their worrisome presence, but it had little effect. ㅡ Annette had informed her teaching colleagues about the welcome-home party while they were in the faculty dining-room. She communicated the details over the noise of scraping forks and slicing knives. Two teachers immediately gave their regrets. The remaining dozen or so implied they would attend by not saying they couldn’t. Annette wanted to be sure about Beth Ann Ferguson’s attendance. When the end-of-the-day bell had rung and the students had rushed away as if the school was sinking, Annette went to the room which served as the library. Bookcases, tall and short, cut the space into narrow aisles. Beth Ann’s desk, constructed of some ancient wood, dark and dense, stood sentinel near one wall. Beth Ann was so diminutive, when she sat at her mammoth desk, she could at first look like one of the children. She was not at her desk, however, and Annette believed for a moment that she’d already left; then she heard a sound among the tall stacks, and she found the little librarian in the 800s. Annette startled her. Sorry, Beth Ann—I thought you probably heard me come in. It’s o.k. She peered up, above her half reading glasses, which she wore on a cord around her neck. For a second Annette doubted what she knew to be true, that Beth Ann and Harry were the same age. With her hair pinned back, and her reading glasses, and her blue dress with the lace collar, Miss Ferguson projected the image of an older woman. Earrings of pearl, like grandmotherly types in the village wore for special occasions, looked almost too weighty for Beth Ann’s delicate ears. Annette wondered if she, too, presented a similarly dowdy persona. She hoped not.
I couldn’t recall if you were at lunch when I mentioned Harry’s party, Annette fibbed. Yes, it sounds like a nice thing for Harry. She replaced a book to its spot on the shelf. Euripides. Annette knew the collection, and sometimes shared an excerpt from The Trojan Women with her class as further context for Homer. The stacks were pleasantly heavy with the musty scent of old books; there was also a floral trace of Beth Ann’s powder—subtle but it stood out because of its strangeness among the rows and rows of rarely read books. Do you think you can make it? To Harry’s party? I’m not sure . . . I may have plans. The lie lingered in the air between them. Beth Ann added, I may have a friend coming in from out of town. The lie maintained its buoyancy in spite of the added weight. Your friend would be welcome, of course. I’m sure it would mean the world to Harry to have you there. I’ll see. I’ll try to make it. She pulled a book from the shelf. Aeschylus. Thank you. . . . Thank you. Annette manufactured a smile of warmth. As she was leaving the library she thought of The Libation Bearers. ㅣ During the thirty-six-hour cross-country trip Harry discovered that reading helped to calm the chorus—not quiet their voices completely, nor drown them out, but at least the author’s voice could distract his attention from the chorus’ multilayered murmuring. On the bus Harry was reading a collection of New England ghost stories, and he found that some authorial voices were better at distracting from the Korean chorus than others. The more complex the sentence structure, the more obscure the vocabulary—the further the narrative distanced the incoherent choir. The ghost stories were sweet relief. At Annette’s home, Harry doubted that the collection of Sandburg poems would have much effect, but the poet’s voice could more than distract Harry; in fact, some of the poems seemed to get the choral voices to stop altogether, for a few minutes at least. 
It was something about their rhythm, perhaps interwoven with particularly vivid images, that provoked the chorus’ silence, ceased their steady whispering, like a wind that is harbinger to storm.

I cannot tell you now;
When the wind’s drive and whirl
Blow me along no longer

The poem was titled ‘The Great Hunt,’ but it seemed to be more about seeking than hunting, thought Harry, because he associated hunting with killing, and the poem was about seeking someone with a great soul, or searching for that great soul beyond death. In any event, the words froze him in the realization of his solitariness, in understanding the lonely state of his own soul. He wondered if his soul was great, or could be great. Harry savored the moment’s utter silence as if rays of golden light temporarily tore through a stubborn gray cover. ㄱ Most of the translations Harry typed were the quintessence of banality—subjects’ mundane descriptions of daily routines, uninspired details of local topographical features not included on the Army Corps’ survey maps, and rambling acknowledgments of connections between family members and friends. One day, however, a translation landed in Harry’s assignment box that was out of the ordinary: A South Korean, from Taejon, identified only as Im, was interviewed and claimed his profession as master storyteller. As substantiation Im told a tale that was so ancient no one knew its attribution, he said. Its title was translated as ‘The Tale of the Old Man with No Legs.’ It turned out to be a frame narrative (Harry recalled when he first learned the term in English class—Arabian Nights, Chaucer’s Tales, The Ancient Mariner). The unnamed narrator meets an old man with no legs on the wharf and inquires as to how he lost them. The old man recounts, In my youth I was a sailor, one among a crew of twenty whose ship was battered by a terrible storm and driven many leagues from our course. We came to an island unknown to us.
A strange-looking keep stood on the headland overlooking the sea. We were hungry and thirsty and exhausted from our ordeal in the storm. We had no choice but to ask for assistance. We climbed to the keep and called at the tall, queerly arched door. No one responded so we pushed the door ajar and entered the barely illumined interior. The smell of animals was strong and at a distance we heard the bleating of sheep and other pasture creatures. We were relieved, for surely the master of the keep had plenty of food and could spare some for lost, woebegone travelers in need. Our relief was fleeting, however. Just then a giant, terrible to behold, came into dim view from the dark edges of the vast room. We made our plea to the giant, who loomed above us as a very large man does tiny children. He made no reply, but went to the door and secured it with a ponderous timber. We were trapped. He disappeared into the dark only to return momentarily with a pile of wood and start a blazing bonfire. Still without a word he snatched one of our number and proceeded to roast him alive; then eat him to every last morsel. Meanwhile all that we could do was watch in horror. When the giant finished his meal, he drank greedily from an earthen vessel, which must have held strong spirits for soon the giant lay on his back snoring, a sound like felled trees tumbling together. We rushed to the door and tried to budge the timber which held it fast but it was too massive. One of our number had a small knife used mainly for carving figures. We resolved to blind the giant and slash his throat. We managed the first order of business, but the terrible injury woke him to sudden rage and frantically he grasped for us. We scrambled to the far side of the room, where we found the giant’s sheep and pigs. He followed us but every time he reached down he clutched an animal instead of a man. We each picked up a sheep and carried it on our back. 
The giant went to the door and opened it so that his stock could go to pasture, which allowed us to escape the keep. We ran for our lives to the beach and our waiting ship. The giant heard our clamoring down the rocks and called to his brethren for assistance. We boarded our ship but before we could reach open water the giants held us fast. We beat their hands with oars and clubs and stabbed them with fishing lances until they released us, and the tide carried us away. Weary from our escape, however, we soon ran upon treacherous shoals, which tore the ship asunder. All of my mates drowned. I clung to a piece of the timber, a rib of the splintered ship, and was the sole survivor. As I floated in the sea, a frightful fish bit off my legs. I washed upon a lonely island, where I remained marooned for many years—surviving against all odds. Harry recognized the Korean tale from Odysseus’s great wanderings as well as from Sinbad’s voyages—all the ancient authors drawing from some still more ancient source, a common human headwater. ㄴ As the day of his welcome-home party approached, Harry felt more and more uneasy. Perhaps it was the very idea of it: The village no longer felt like home. No place did. It seemed, then, that the idea was to welcome him to a strange place, one that offered no hope of becoming familiar.

The village had donned a disquieting mask

The line came to him—alone in the house, brewing coffee—as if spoken (whispered) by someone else. It took him a moment to attribute it to the chorus, which was whispering other words: a flood of images. As soon as the coffee was done he took a cup to the back porch, along with a notepad he found in a drawer and a pen. It was a chilly morning but Harry was warm enough in a heavy sweater, with a scarf knotted around his neck—in addition to the words burning inside of him.
He scribbled the line about the disquieting mask, then the next:

as dramatic a change as from Drama to Comedy, from Dionysus to Apollo, from Discord to Calm

He sat for a long while writing the words, working them, before he accepted that he was making a poem. He wrought images into lines and lines into stanzas. Harry discovered that the chaos of his chorus had become a more tamable torrent, one that could be checked with dikes and causeways and dams, by controlling diction, syntax and punctuation—even blank spaces provided some buffer for the rising tide of words. In the corner of the spare room there was a small desk on which sat an Underwood typewriter. Harry took the pages of handwritten poetry—with their crossouts and insertions, arrows and asterisks—and sat in front of the machine. Even his juxtaposition to the typewriter calmed his mind further. He found clean sheets of paper in one of the desk drawers and rolled a sheet into the carriage. He worked to bring meaning to the disordered ideas and images. After a time he had a draft with which he felt some level of satisfaction. He unrolled it from the carriage, folded the newborn poem in thirds, and placed it inside Sandburg’s Chicago Poems. He’d titled it ‘The Human Voice.’

ㄷ

The Ladies Auxiliary prepared most of the food, including their signature ham loaf, and decorated the church basement. Annette’s students made a banner out of an old sheet and purple fabric paint: Welcome Home, Harry!! There was some debate as to whether ‘Harry’ should be ‘Mr. Gale’ or ‘Cpl. Gale,’ but Annette decided the simplest approach best. The banner hung on the basement wall above the refreshments table. Mrs. Holcomb had made a cake, white icing with purple script repeating the message, and color scheme, of the banner. Pastor Phillips had placed a notice about the welcome-home in the bulletin, prominently anchoring the front page; and he made special mention of it during both services.
Other notices were posted here and there: the windows of Owens’ Café and Reynolds’ Barber Shop, on the bulletin-board in the library, and on the wall at the Farm Service office.

ㄹ

Harry: (To the Audience) I hold no animosity for my neighbors—but neither do they hold a special place in my heart. They strike me as ghosts, a part of my past that has returned to me unwanted and unbidden, even though it is I who has returned to them. They dully occupy a space which should be my future, as yet unformed. Are there seeds beneath the soil that shall claw their way to the surface and the enlivening light? Or have they lain dormant too long and forgotten their purpose, and will merely rot in the damp earth? Food for another’s dream.

Annette: Are you ready? The guest of honor shouldn’t be late, not even fashionably so.

Harry: I cannot of course count my sister a ghost. That would be unkind, for one, and she has always been kind to me; even when we were children. Also, there is life and living in her more so than most of her (our) neighbors. She is a seed whose brave shoot may yet break into the light—perhaps only to wither in the inhospitable climate. She may be brave enough to try, but am I brave enough to let her?

Chorus: Harry has not broached the subject but Annette has obviously been abandoned by her husband. Tim loaded the car with what he wanted—leaving enough behind to tell the tale of his departure—and he drove toward some other future. Though this place is rife with spirits, a new Asphodel, how could Harry abandon his sister too? It could be the last bit that breaks her.

Annette: Look. They’ve made you a banner: Welcome Home, Harry!!

Pastor Phillips: Here’s our war hero. Let us shake your hand.

Jim Goodpath: Welcome home!

Carl Reynolds: It’s good to see you!

Harry: Thank you, Pastor, for lending us use of the basement, and for arranging the food.

Pastor: You’re welcome. The ladies were delighted to have a project.
It’s been a slow year in the wedding and funeral departments, sadly and gladly, respectively.

Chorus: A funeral, that’s what this feels like to Harry: The village turning out to pay their respects, then have a free meal and eat a piece of white cake. He manufactures a smile for new arrivals and shakes hands and responds to the same few questions again and again. When did he get back? What’s he going to do? Implied: Will he be staying? A couple of weeks ago . . . ? . . . ? The too-sweet smell of the ham loaf reminds him of his parents’ funerals.

Pastor Phillips: (Confidentially.) Your arrival home was quite fortuitous—the timing we mean.

Harry: Timing?

Jim Goodpath: Just when she needed a man.

Chorus: It’s a well-established fact that a woman needs a man. The several women in the village and in its history who appear to have successfully ignored the fact are mere exceptions: Outliers, often nearly as literal of a term as figurative. Harry looks across the basement at his sister, whom he would describe as thoughtful, but not unhappy. It makes sense: There is much to consider, given her change of circumstance.

Pastor: I have been saying an extra little prayer for Annette—and here you are . . . and for your safe return to us, too, of course.

Carl Reynolds: Let’s get your picture.

Pastor: I see you have a new toy.

Carl: Picked up the new flash in Crawford yesterday. Been chomping at the bit to try it out.

Chorus: Harry has brought the ragged copy of Chicago Poems with him, in his side pocket—the weight of the book against his hip, slight as it is, gives him a sense of security. He takes the Sandburg from his pocket to pose for Carl Reynolds, amateur but avid photographer.

Carl: That’s great. Chin up a bit, smile, on three . . .

Chorus: The exploding bulb dazzles Harry for a second. As his vision clears he sees Beth Ann Ferguson standing next to Carl, as if she has materialized with the burst of light. At first, before his vision returns completely, he thinks she is a child at Mr.
Reynolds’s side. When he recognizes Beth Ann, he recalls that her childhood nickname was Polly, something to do with a story about a parrot in one of their primers. Harry must be imagining it, but a faint residue of the blinding light seems to linger in the space around Beth Ann, like an aura.

Harry: Hi. I didn’t notice you come in.

Chorus: His extended hand holds the book of poetry still, as if an offering. Beth Ann takes the accidental gift and looks at the spine before returning it.

Beth Ann: Sandburg.

Chorus: Eruption of laughter, if you please. . . . Thank you. It is because Mr. Abernathy has gotten into the basement storage-room where the church’s theatrical properties are kept and emerged wearing a goat mask. Mayor Whittle and Doc Higgins join in, and they are a cow and a pig. The trio begin to make the appropriate noises, much to the group’s general delight.

Pastor Phillips: What do you think, Harry? Bet you didn’t know this would be a masquerade.

High noon inside the playhouse large cool fishes glisten singing of life in the sun: Chorus.

Harry: I need a smoke.

Beth Ann: I could use some air.

A grinning skull waves of sound beat the sidewalk a new way: Chorus.

Harry: The night is clear. The stars are like comets. They should all have tails of stardust.

Beth Ann: You always noticed the oddest things, Harry Gale.

Harry: Did I? I’d nearly forgotten.

Cars whir by when high waves come closer into the hoofs on a winter night: Chorus.

Beth Ann: Is something wrong?

Harry: No. . . .

Soft echoes on a bronze pony in the cold and lonely snow and into dawn: Chorus.

Harry: Do you have a car?

Beth Ann: I can use my dad’s Mercury.

Harry: Would you drive me to Crawford? To the bus terminal?

Beth Ann: What? Now?—Do you want to let Annette know?

Harry: I can call. She can send me my things.

Alone with a picture after the dead beat their hands outside of what poets leave their skulls in the sun for: Chorus.
Decolonising research methodology must include undoing its dirty history

Author: Sabelo Ndlovu-Gatsheni, Director of Scholarship in the Change Management Unit at the Vice-Chancellor's office; Professor and Head of the Archie Mafeje Research Institute, University of South Africa

Disclosure statement: Sabelo Ndlovu-Gatsheni does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.

Hyphenating “research” into “re-search” is very useful because it reveals what is involved, what it really means, and goes beyond the naive view of “research” as an innocent pursuit of knowledge. It underscores the fact that “re-searching” involves the activity of undressing other people so as to see them naked. It is also a process of reducing some people to the level of micro-organisms: putting them under a magnifying glass to peep into their private lives, secrets, taboos, thinking, and their sacred worlds.

Building on Smith’s work, my concern here is the context in which re-search methodology is designed and deployed. In particular, it concerns the relationship of methodology to power and to the imperial/colonial project, as well as the implications for those who happened to be the re-searched. Broadly speaking, what is at issue is re-search as a terrain that pits the interests of the “re-searcher” against those of the “re-searched.” The core concern is how re-search is still steeped in the Euro-North American-centric worldview. Re-searching continues to give the “re-searcher” the power to define. The “re-searched” appear as “specimens” rather than people. I define re-search methodology as a process of seeking to know the “Other”, who becomes the object, rather than the subject, of re-search, and what it means to be known by others. That is why methodology needs to be decolonised.
The process of its decolonisation is an ethical, ontological and political exercise rather than simply one of approach and ways of producing knowledge.

Whose methodology is it anyway?

When Europeans shifted from a God-centred society to secular thinking during the Enlightenment period, they inaugurated the science of “knowability”. God was no longer the only one who could understand the world. The rational human could too. This idea is best captured in René Descartes’ famous dictum Cogito Ergo Sum (I Think, Therefore, I Am). This marks the emergence of the Cartesian philosophy of being and the Cartesian knowing subject: a human defined by her rationality. Since then, decolonial theorists have unmasked the person behind the “I” as not just any human being, but the European “Man”. Here was born the idea that “Man” stands for “Human” as well – the beginning of the shift towards the world being seen through a patriarchal lens.

It was during the “voyages of discovery”, which gave rise to colonialism, that European men began to encounter the “Other” and assume the position of a “knower” and a “re-searcher” who was thirsty to know the “Other”, who emerged as the native Indian, as Shakespeare’s Caliban, as the African, the Aborigines and the other natives. The Other had to be re-searched to establish whether they were actually human or not. Here methodology was born as a handmaiden of colonialism and imperialism.

From ‘ethnographic’ to ‘biometric’ state

In his book Define and Rule, Ugandan scholar Mahmood Mamdani argued that every colonial conqueror was preoccupied with the “native question”. This was about how a minority of white colonial conquerors were to rule over a majority of conquered black people. To resolve the “native question”, the conquered black “native” had to be known in minute detail by the white coloniser. Thus re-search became a critical part of the imperial-colonial project.
The European anthropologist became an important re-searcher, producing ethnographic data and knowledge that was desperately needed by colonialism to deal with the nagging “native question”. As a result of this desire to know the “native” for colonial administrative purposes, the colonial state emerged as an “ethnographic state”, interested and involved in ‘re-searching’ the native so as to “define” and “rule” over the “native”.

It was under the “ethnographic state” that colonial ideologues such as Thomas Babington Macaulay in India, Lord Frederick Lugard in West and East Africa, and Cecil John Rhodes in Southern Africa used this data to malevolent ends, both to invent the idea of the native and to control her. They assumed the status of experts on the colonised natives and produced treatises such as The Dual Mandate in Africa, which assumed the status of manuals on how to rule over “natives”.

Today, with the rise of “global terrorism,” drug-trafficking and the problem of migration, new forms of surveillance and state control have emerged. Keith Breckenridge in his award-winning book reflected on the concepts of the “biometric state” and the “documentary state”. These use machines to extract, capture, and store information about all people, but particularly “Muslims” and “blacks”, whose ways of worship, living and actions do not fit into the European template. What is the role of re-search in this, and what methodologies are used in trying to know the “Other”, that is, the unwanted migrant and the feared Muslim?

Unmasking, rebelling, re-positioning and recasting

Decolonising methodology must begin with unmasking the modern world system and the global order as the broader context from which re-search and methodology cascade and by which they are influenced. It also means acknowledging and recognising its dirtiness. Our present crisis is that we continue to use re-search methods that are not fundamentally different from before.
The critique of methodology is interpreted as being anti-re-search itself. Fearing this label, we (modern scholars and intellectuals) have been responsible for forcing students to adhere religiously to existing ways of knowing and understanding the world. No research proposal can pass without agreement on methodology. No thesis can pass without recognisable methodology. There is a mandatory demand: how did you go about getting to know what you have put together as your thesis?

Consequently, methodology has become the straitjacket that every new researcher has to wear if they are to discover knowledge. This blocks all attempts to know differently. It has become a disciplinary tool that makes it difficult for new knowledge to be discovered and generated. In the knowledge domain, those who try to exercise what the leading Argentinian semiotician and decolonial theorist Walter D. Mignolo termed “epistemic disobedience” are disciplined into an existing methodology, in the process draining it of its profundity.

Decolonising methodology, therefore, entails unmasking its role and purpose in re-search. It is also about rebelling against it: shifting the identity of its object so as to re-position those who have been objects of research into questioners, critics, theorists, knowers, and communicators. And, finally, it means recasting research into what Europe has done to humanity and nature rather than following Europe as a teacher to the rest of the world.
Q: SyntaxError: missing ) after argument list in Firebug

I am getting the syntax error:

SyntaxError: missing ) after argument list

from this jQuery code:

$("#createEnquiry").text(${noEnqMsg});

What kinds of mistakes produce this JavaScript syntax error?

A: I think you meant to use string quotes:

$("#createEnquiry").text("${noEnqMsg}");

Or maybe you thought you were using template strings:

$("#createEnquiry").text(`${noEnqMsg}`);
Acacia incognita

Acacia incognita is a shrub belonging to the genus Acacia and the subgenus Juliflorae. It is native to an area in the Mid West region of Western Australia.

See also: List of Acacia species
# -*- coding: utf-8 -*-
# Kodi (xbmc) video addon helpers. Names such as addon_handle, content,
# force_view, view_id, icon, fanart and build_url come from common.
from common import *


class Items:
    def __init__(self):
        self.cache = True
        self.video = False

    def list_items(self, focus=False, upd=False):
        # Finish the directory listing and optionally restore focus.
        if self.video:
            xbmcplugin.setContent(addon_handle, content)
        xbmcplugin.endOfDirectory(addon_handle, cacheToDisc=self.cache, updateListing=upd)
        if force_view:
            xbmc.executebuiltin('Container.SetViewMode(%s)' % view_id)
        if focus:
            try:
                wnd = xbmcgui.Window(xbmcgui.getCurrentWindowId())
                wnd.getControl(wnd.getFocusId()).selectItem(focus)
            except Exception:
                pass

    def add_item(self, item):
        # Build one directory entry (folder or playable video) from an item dict.
        data = {
            'mode': item['mode'],
            'title': item['title'],
            'id': item.get('id', ''),
            'params': item.get('params', '')
        }
        art = {
            'thumb': item.get('thumb', icon),
            'poster': item.get('thumb', icon),
            'fanart': item.get('fanart', fanart)
        }
        labels = {
            'title': item['title'],
            'plot': item.get('plot', ''),
            'premiered': item.get('date', ''),
            'episode': item.get('episode', 0)
        }
        listitem = xbmcgui.ListItem(item['title'])
        listitem.setArt(art)
        listitem.setInfo(type='Video', infoLabels=labels)
        if 'play' in item['mode']:
            # Playable entries must not be cached and switch the view to video.
            self.cache = False
            self.video = True
            folder = False
            listitem.addStreamInfo('video', {'duration': item.get('duration', 0)})
            listitem.setProperty('IsPlayable', 'true')
        else:
            folder = True
        if item.get('cm', None):
            listitem.addContextMenuItems(item['cm'])
        xbmcplugin.addDirectoryItem(addon_handle, build_url(data), listitem, folder)

    def play_item(self, path):
        # Resolve a playable URL back to Kodi.
        listitem = xbmcgui.ListItem(path=path)
        listitem.setContentLookup(False)
        listitem.setMimeType('video/mp4')
        xbmcplugin.setResolvedUrl(addon_handle, True, listitem)
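add_item hands its data dict to a build_url helper that arrives via from common import * and is not shown above. A minimal sketch of what such a helper typically does in Kodi addons (the base URL and function body here are assumptions, not the addon's actual code) is to query-encode the dict onto the plugin's callback URL:

```python
from urllib.parse import urlencode

def build_url(query, base='plugin://plugin.video.example/'):
    """Hypothetical stand-in for the build_url imported from common:
    serialize an item's data dict into a plugin:// callback URL."""
    return base + '?' + urlencode(query)

url = build_url({'mode': 'play', 'title': 'Clip', 'id': '42'})
print(url)  # plugin://plugin.video.example/?mode=play&title=Clip&id=42
```

Kodi then invokes the addon again with this URL, and the query string is parsed back out (e.g. with urllib.parse.parse_qs) to decide which mode to run.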
What are good methods of explaining to my four-year-old son why "color" is not how he should spell "colour", even though it is spelt that way in many of the books he's reading? (Also "realize", "favorite", etc). Similarly, although less importantly, that he should pronounce "z" as "zed" ("dance", "tomato", etc), but the characters on TV that say "zee" are right too.

He understands that there are multiple words for the same thing in other languages (e.g. counting in Māori or Spanish), but US English is so close to English that it's confusing that there are these minor differences. I'm not saying that US English is "wrong", but at school he will be expected to spell in NZ English, and there using US spelling will be "wrong" in that it will be corrected. When he is older, he'll be able to learn why regional variants of languages exist, but for someone just learning to read/spell, that's a bit complicated. In maths, we first learn that the square root of 4 is 2, and that is the "correct" answer in school - once we understand more we learn that it's ±2. For now, what matters is knowing that if he is asked to spell "colour" that the answer that is expected has a "u"; learning how to write in US English can be done when he's older.

I'd like suggestions as to how I can explain what's expected, while acknowledging that the variants are (for the author) correct, without overwhelming his four-year-old brain.

@torbengb Not as far as I can tell. It isn't really an English question (I know why there are differences and what there are), it's a parenting question (how to explain this to a pre-schooler). I'd like advice from parents, not language experts (for this particular question). – Tony Meyer, May 23 '11 at 12:58

+1 to this. We live in The Philippines, the stuff on the shelves might have come from any given country. AGH!
– Tim Post♦, May 23 '11 at 18:14

I dunno, I've always considered "Mom" and "Mum" to be different words - they're not alternate spellings, but different things you can call your mother, just like "Ma" or "Mama". – Martha, May 28 '11 at 0:06

For the mom example, mom and mum are different words with different pronunciations, although with the same meaning and IIRC a parallel or closely branched etymology. You could have both in a sentence and they would both be spelled correctly. Color and colour are better examples, and ironically enough the latter 'ou' spelling is taken from Old French, while the former matches both a more modern Anglicization and the original Latin. – Peter DeWeese, Jul 1 '11 at 19:00

9 Answers

You can use it as an introduction to World Cultures. English, while spoken in places like America, Britain, Australia and Canada, has different spellings and pronunciations in each. Even in individual countries you get regional dialects that make words sound different; that might even be a starting point. Wrong and right may not be the best way to phrase it, but maybe expectations are. The book your son is reading, because it's from a different place, has spellings that are different, but your son will still be expected to spell the words in the way he is taught. Language is a wonderful and changing thing; lots of these complexities might be too much for a 4 year old, but it doesn't hurt to lay the groundwork for the future.

Thanks. His understanding of geography is significantly behind his reading, so that doesn't really help. Using this as a reason to try and improve that is a good suggestion. – Tony Meyer, May 23 '11 at 12:59

I like the point about wrong and right vs. expectations. It reminds me of some excellent (if humorous) advice I once heard: "There are two answers to every question in school. The right answer, and the answer your teacher wants to hear. I expect you to know them both."
– Bill Clark, May 24 '11 at 15:53

Before you read a book, put a pin on a world map for where the author is from and/or the setting of the story. Then explain to your child that different people from different places speak and write differently. Without distracting their reading too much, mention phrases/idioms, animals, spelling, architecture, etc. that are unique to that author or story setting. You may have to scan the book beforehand to be prepared.

A nice suggestion, although a lot of work :) We read four books each night, so that would add quite a lot of prep and execution time, although most books are from NZ/UK/US, so there would be a lot of overlap I guess. – Tony Meyer, May 24 '11 at 0:34

You don't have to do it with every book. – Javid Jamae, May 24 '11 at 1:22

+1 for the idea of having a world map. I grew up with a huge world map on the wall, and I'm providing the same to my son. I believe it's important to understand that you're just a small part of something much much bigger. – Torben Gundtofte-Bruun, May 24 '11 at 6:15

I agree torben; we have a huge map in the eldest's bedroom and she pins it in each country we've visited with green pins and red pins for any country she wants to go to. We also have a lot of maps as jigsaws. I come from a mixed background, and want the kids to know they too shouldn't limit themselves by geographical bounds; maps are great. – Hairy, May 24 '11 at 6:44

At my grammar school you were allowed to spell either way: US English or British English - you just had to choose one and stick with it. You need to explain that there exist different written dialects of English much like there are different accents. A Scotsman speaking English sounds much different from a Cockney speaking English (if they actually do speak English, that is. I'm still not convinced.) But neither of them is pronouncing anything 'wrong' per se, just in their own dialect.
If he uses the US spelling, gently correct him with the comment, "Remember, we spell 'X' differently than they do. Their spelling isn't WRONG, it's just not how WE spell it here." If he gets familiar with both forms of spelling, it's just a bonus IMO. That way there isn't any agonising over colors or neighbourhoods or how to check your cheque.

Cockney uses rhyming slang, which is a bastardisation of English, so isn't really the same at all. In Scotland, like in the US, they actually use different words for the same things, so the same rules can be applied. I'd also say, for us, it is wrong, so I will always teach my children there is a wrong and right way to spell things. We call the English Americans speak American English and approach it as a different language, as such. – Hairy, May 23 '11 at 13:19

1: I'd just like to add I'm female. 2: Even American English has different dialects. Bostonian vs. Georgia Peach vs. Wisconsinite. What's a Tonic to me, is a Coke to the other and a Pop to the third, even if it's a Pepsi we're all drinking. Does it make it wrong? No. Both a Scotsman and a Cockney (or a Welshman) can speak English to each other and be understood (most of the time, accent joking aside). The same way a Dane from Copenhagen can understand a Dane from Sønderjylland. Different dialects, with some different regional expressions and spellings, but both 'right' in their own sense. – Darwy, May 23 '11 at 15:17

@Darwy Apologies on the gender! I'd edit if I could.... – Beofett, May 23 '11 at 15:20

...and finally, the discussion was about spelling. In England, if you spell Colour as color, you will be marked as incorrect, at ALL academic institutions; teaching a child it isn't wrong will not only confuse them, especially at an early age, it will be misleading them, for it is wrong. – Hairy, May 23 '11 at 15:59

As an Englishman living in the US, this is a never-ending topic of interest, intrigue and amusement for my 2 young daughters.
Conceptually, there are quite a few different angles that one can take this from (probably depending on age) that can make pretty interesting explanations & discussions.

Historical - it's pretty fun to explain how over the last several hundred years people have moved around the world, and in this case how the expansion (and subsequent contraction) of the British Empire has led to dispersal of the language.

Evolutionary - Given the dispersal, explain how things can diverge over time (spelling, pronunciation and so forth). This is a nice way to explain the beauty of language and how it can be different and yet the same. Also a nice lead-in to other evolutionary concepts.

Differences - How any differences across cultures, geographies, countries etc. present these interesting and fun situations.

Spelling was very unstandardized at one point and phonetic encodings differed considerably. English seemed to hit the perfect storm for un-standardization as compared to most other European languages. Very interesting cultural study. – Peter DeWeese, Jul 1 '11 at 18:52

Children at the age of four are not and should not generally be expected to sit still or spell complicated words correctly, and certainly not to understand why spelling it "color" is wrong when Kermit spells it like that. So it's a mistake to use the terms "right" and "wrong" in this case. In fact, it's a mistake in any case, as one spelling isn't right or wrong. English spelling is just a mostly arbitrary selection between options; there aren't any spelling rules per se, just tradition, and tradition differs in different countries. Hence, instead of explaining that "color" is wrong, you should explain that "color" is alright, but in NZ we usually have a u before the r, making it "colour". This will be an excellent opportunity for your kid to learn that there may not always be a right and wrong answer. If "here in New Zealand" is too complicated, just say "color" is correct, but "colour" is even better.
Personally I'd wait with that until he starts school, though. With regard to the claims that NZ and UK schools do insist that there is a right and wrong spelling, this is hardly done with 4-year-olds, and if it is, a word with the teacher to tone down the "you are doing it wrong!" with 4-year-olds is probably appropriate. It's obviously different once they are a bit older, but telling a 4-year-old they are doing it wrong when he/she is writing "I have a calico colored cat" is perhaps less pedagogical than could be desired.

Perhaps you should start by changing your stance that one way is "right" and the other "wrong". Even if the schools operate by this expectation, it is difficult to explain to a 4-year-old (as you no doubt have witnessed, else you wouldn't be asking the question). Instead, simply explain that the US spellings and pronunciations are considered "unusual" in your area, and that "people who live far away" sometimes do things a little differently. There's nothing wrong with an American who spells "color" as "colour". At worst, it might seem a trifle eccentric. It certainly isn't "wrong". Any teacher who would mark such a thing "wrong" is more concerned with teaching a lesson about following instructions than proper spelling, and you may or may not agree with teaching children that doing what is expected of them is more important than understanding why it is expected of them.

If you're having trouble explaining these concepts to a 4 year old, then you might simply have to wait until he's a bit older. In the meantime, you can just make an effort to expose him to more examples of the English variations, so that the US versions appear to be more of a minority. When he is actually in school, all of the examples presented will be the English variations, which should make it easier for him to decide which to use. If there is still any confusion, you can simply guide him to spell or pronounce it the way the teachers do.
I am still not clear from the conflicting comments if children in NZ are expected to be writing words like "colour" by the time they are 4, but it seems that they probably are not. You do have time for your son to develop a bit more, and in as little as a year it may be much easier for him to grasp the concept.

@everybody, I've said this before, so I'll be a bit less polite today: Please do not use comments for long back-and-forth sessions; this is a Q&A site, not a chat. If you don't have access to the parenting chat from work, then chat from home. If you feel the need to declare your perspective in detail, do so in your own answer. I have saved the comments. If anyone is interested, ask me in the chat! – Torben Gundtofte-Bruun, May 23 '11 at 15:49

@everybody, I will continue to delete comments as long as you're just using them to bicker. Please everyone, contribute your thoughts in separate answers. Demonstrate your good parenting skills by showing tolerance and respect toward each other. Remember the FAQ says be nice. – Torben Gundtofte-Bruun, May 23 '11 at 19:16

I would have thought simplest would be to say it is wrong in NZ English or British English. Treat it as a different language that has most words the same as proper English (by proper I mean for your region) - as I would certainly hope the school would mark answers as incorrect if they use spelling from a different language.

The issue is that he has trouble understanding how they are different languages when 90%+ is exactly the same (unlike, e.g. Spanish or Māori). – Tony Meyer, May 24 '11 at 0:36

There are other groups of languages that are also similar-yet-different. The German group of German/Austrian/Swiss German; the Scandinavian group of Danish/Swedish/Norwegian. Or even (less helpful to a U.S. 4yo) Japanese that uses many Chinese kanji characters that sometimes have different meaning but same reading (pronunciation), and sometimes different reading but same meaning.
I think US/UK/AU/NZ English is closer than all of these though, so it's no wonder that it's difficult to learn, and probably also hard to teach! – Torben Gundtofte-Bruun, May 24 '11 at 6:41

Which is probably why it is a very good idea to make it very clear to a child, who is learning a language, that there is a right and wrong way to spell, speak it. Any variance can be introduced at a later date when they are ready, intellectually, for it. – Hairy, May 24 '11 at 7:19

When I was a child, I read books from the UK, and my mom just made a little arch over the "u" in "Mum" to make it "Mom." Sometimes little corrections like that may be necessary. By the time I was 12, I read books my dad brought back from his trips to the UK, and loved learning a different way of spelling. To this day, however, I spell some things "wrong" for the US, and don't realize it until spellcheck corrects me.

At the same time, the internet has no borders and many jobs nowadays require knowledge of, and sensitivity to, other countries' means of spelling. It's a fine line to walk, and 4 years old is definitely an age to "keep it simple, stupid." I'd try to avoid conflicting English for now (even if you have to edit your child's books), and whenever you can't avoid it, try to turn it into a very short, very simple geography or history lesson. Bear in mind that sometimes adults overly complicate things; a simple "people spell things differently in other countries, but we should get used to the way we do it here first" might suffice. You can always expand this into teaching more about the grey (or gray) areas of life later.
Q: 'System.ArgumentException' when trying to use SetPixel(x, y, Color)

I am trying to create a white picture in a greyscale format. Here is the code I use:

    Bitmap bmp = new Bitmap(1, 1, System.Drawing.Imaging.PixelFormat.Format16bppGrayScale);
    bmp.SetPixel(0, 0, Color.FromArgb(255, 255, 255));
    bmp = new Bitmap(bmp, i.Width, i.Height);

"i" is an existing Bitmap image whose channels I am playing with. The idea is to create a 1-pixel image in greyscale, give that pixel the white color, and then enlarge it to the required size. But instead I get:

    "An unhandled exception of type 'System.ArgumentException' occurred in System.Drawing.dll"

I tried Color.White, but it isn't allowed for greyscale or indexed formats. What are my other options to fix this?

A: Why not do it without converting to greyscale?

    Bitmap A = new Bitmap(i.Width, i.Height);
    for (int x = 0; x < i.Width; x++)
        for (int y = 0; y < i.Height; y++)
        {
            Color C = i.GetPixel(x, y);
            // Color.FromArgb(alpha, red, green, blue): keep the original
            // alpha, then place the colour channels in the desired order.
            Color WD = Color.FromArgb(C.A, 255, C.G, 0);
            A.SetPixel(x, y, WD);
        }

Simply done by placing the colour channels in the desired order.
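For completeness, the exception itself comes from the pixel format: GDI+ does not support drawing into Format16bppGrayScale bitmaps, so SetPixel throws ArgumentException no matter which colour is passed. Below is a minimal sketch of one workaround, assuming System.Drawing is available (Windows, or libgdiplus elsewhere); `MakeWhite` is a hypothetical helper name, not part of any API. The bitmap is built in a supported format, and "grey" is simply a pixel whose R, G and B channels are equal.

```csharp
using System;
using System.Drawing;
using System.Drawing.Imaging;

class WhiteBitmapSketch
{
    // Create a solid-white bitmap in a format GDI+ can actually draw into.
    // Filling via Graphics.Clear is also far faster than a SetPixel loop.
    static Bitmap MakeWhite(int width, int height)
    {
        var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb);
        using (var g = Graphics.FromImage(bmp))
        {
            g.Clear(Color.White);
        }
        return bmp;
    }

    static void Main()
    {
        using (var bmp = MakeWhite(4, 4))
        {
            // Every pixel is opaque white: A=255, R=255, G=255, B=255.
            Color c = bmp.GetPixel(2, 2);
            Console.WriteLine(c.R == 255 && c.G == 255 && c.B == 255);
        }
    }
}
```

If a true 16-bit greyscale buffer is required, it has to be produced outside GDI+ drawing calls (for example via LockBits and manual byte manipulation), since SetPixel and Graphics will keep rejecting that format.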
307 F.3d 1127 DEWITT CONSTRUCTION INC., Plaintiff-Appellant, v. CHARTER OAK FIRE INSURANCE COMPANY; Travelers Property Casualty Corporation; Travelers Indemnity Company of America, Defendants-Appellees, v. Opus Northwest LLC, Third-party-plaintiff. DeWitt Construction Inc., Plaintiff-Appellee, v. Charter Oak Fire Insurance Company; Travelers Property Casualty Corporation; Travelers Indemnity Company of America, Defendants-Appellants, v. Opus Northwest LLC, Third-party-plaintiff. No. 01-36013. No. 01-36014. United States Court of Appeals, Ninth Circuit. Argued and Submitted September 9, 2002. Filed October 9, 2002. As Amended on Denial of Rehearing and Rehearing En Banc December 4, 2002. COPYRIGHT MATERIAL OMITTED Todd C. Hayes, Stanislaw Ashbaugh, LLP, Seattle, WA, for plaintiff-appellant-appellee. Russell C. Love, Thorsrud Cane & Paulich, Seattle, WA, for defendants-appellees-appellants. Appeal from the United States District Court for the Western District of Washington; Barbara J. Rothstein, Chief Judge, Presiding. D.C. No. CV-00-01150-BJR. Before HILL*, GOULD, and BERZON, Circuit Judges. Opinion by Judge GOULD; Concurrence by Judge HILL. OPINION GOULD, Circuit Judge. 1 This appeal, in a case with jurisdiction based on diversity, follows the district court's grant of summary judgment in an insurance contract dispute about two commercial liability insurance policies purchased by DeWitt Construction, Inc. ("DeWitt") from Travelers Property Casualty Co.1 ("Travelers"). DeWitt, a sub-contractor on a major development project, negligently installed cement piles, and thereafter had to install new piles that were satisfactory.
The initial substandard performance by DeWitt gave rise to damages claims by the general contractor.2 The scope of the insurance policies' coverage, Travelers' duty to defend DeWitt against the asserted liability on DeWitt's subcontract, and bad faith and Consumer Protection Act claims resulting after Travelers declined the tender of defense are the subject of this dispute. 2 The district court granted DeWitt's partial summary judgment motion on duty to defend, granted Travelers' partial summary judgment motion on coverage, and thereafter dismissed DeWitt's claims for bad faith insurance claims handling and for violation of the Washington Consumer Protection Act. The district court awarded DeWitt $17,043 in defense costs and $43,043.40 in attorneys' fees to be paid by Travelers because it had breached its duty to defend. DeWitt appeals, and Travelers cross-appeals. We have jurisdiction, and we affirm in part and reverse in part. Factual Background 3 DeWitt was a subcontractor on a large-scale commercial construction project in Issaquah, Washington. DeWitt subcontracted with the general contractor, Opus Northwest LLC ("Opus"), to drill and place concrete piles into the ground to serve as a primary component of a building's foundation. At the heart of DeWitt's subcontract was DeWitt's promise to achieve a minimum strength in the concrete piles that were to support the building. Before commencing operations, DeWitt purchased a commercial general liability policy and a commercial excess liability policy (collectively "policies") from Travelers. 4 In performing the work, DeWitt at first failed to construct the concrete piles so that they achieved the required strength. The cement in the piles did not harden properly. As a result, the original holes and pile assemblies were unusable. DeWitt had to install about 300 more piles to the site in other locations.
This also resulted in delays in the overall project pace, abandonment of defective piles, re-engineering of the site's foundation, and the removal and reinstallation of other subcontractors' work. In addition, when DeWitt was moving heavy equipment to install remedial piles, DeWitt damaged buried mechanical and site work completed by other subcontractors. DeWitt's unsatisfactory work required Opus to accelerate the work of other subcontractors to meet its original construction deadline. 5 On January 6, 2000, Opus informed DeWitt that it was asserting a $3.5 million claim against DeWitt for damages arising from DeWitt's alleged negligence in the design and installation of the defective piles. DeWitt tendered Opus's claim to Travelers. Opus filed an arbitration demand against DeWitt on March 24, 2000. DeWitt also tendered the arbitration demand to Travelers. Between April and May, 2000, Travelers and Opus exchanged correspondence in which Opus provided Travelers additional itemization and detail regarding its claimed loss. Travelers made no decision on indemnification and did not provide counsel for DeWitt's defense during its investigation. After DeWitt filed suit in this case for a declaratory judgment, Travelers informed DeWitt that it was denying both defense and indemnification benefits under the policies. Discussion 6 On appeal we address: whether the district court erred (1) in finding there is no coverage, (2) in finding that Travelers breached its duty to defend DeWitt, (3) in calculating the damages awarded to DeWitt, and (4) in dismissing DeWitt's bad faith and Consumer Protection Act claims. We review these issues de novo. Delta Sav. Bank v. United States, 265 F.3d 1017, 1021 (9th Cir.2001); Neptune Orient Lines, Ltd. v. Burlington N. and Santa Fe Ry. Co., 213 F.3d 1118, 1119 (9th Cir.2000). I. 
Coverage 7 To determine whether any of DeWitt's claims are covered under the policies, we must consider three questions of contract interpretation: (1) whether there was an "occurrence" giving rise to the alleged damages; (2) whether any of the alleged damages are "property damage"; and (3) whether the property damages are nevertheless barred from coverage by a specific exclusion under the policies. A. Occurrence 8 To be covered under the policies, any alleged property damage must be caused by an "occurrence," which is defined in part as "an accident." DeWitt argues that the defective manufacture of the concrete piles, such that they failed to meet the proper break-strength requirements, constituted an "occurrence" within the meaning of the policies. We agree. As the Washington Supreme Court decided in Yakima Cement Products Co. v. Great American Ins. Co., 93 Wash.2d 210, 608 P.2d 254, 257 (1980), a subcontractor's unintentional mismanufacture of a product constitutes an "occurrence." See also Baugh Constr. Co. v. Mission Ins. Co., 836 F.2d 1164, 1169 (9th Cir.1988) (finding that "negligent construction and negligent design claims fall within the definition of a fortuitous event"). 9 In addition, the inadvertent act of driving over the buried mechanical and site work fits squarely within the policies' definition of "occurrence," as there is no indication in the record that the damage was caused intentionally. B. Property Damage 10 The policies at issue in this case provide DeWitt with coverage for "those sums that the insured becomes legally obligated to pay as damages because of `bodily injury' or `property damage' to which this insurance applies." 
"Property damage" means: (a) "physical injury to tangible property, including all resulting loss of use of that property" or (b) "loss of use of tangible property that is not physically injured."3 11 DeWitt alleges three types of property damage in this case: (1) damage to the construction site by impaling it with unremovable obstacles, (2) damage to the work of other subcontractors that had to be removed and reconstructed due to DeWitt's negligence, and (3) damage to buried mechanical piping and site work while moving equipment to replace the under-strength piles. 12 We conclude that the alleged damage to the construction site caused by DeWitt impaling it with unremovable piles is not "property damage" under the policies. For faulty workmanship to give rise to property damage, there must be property damage separate from the defective product itself. Yakima Cement, 608 P.2d at 258-59 (no property damage occurred due to the incorporation of defective concrete panels where record was devoid of evidence that the building value was diminished).4 See also Marley Orchard Corp. v. Travelers Indem. Co., 50 Wash.App. 801, 750 P.2d 1294, 1297 (1988) (stress to trees was property damage caused by the installation of a defective sprinkler system, unlike Yakima Cement where there was no damage separate from the defect). 13 DeWitt argues that the site was impaled with useless concrete piles and had to be redesigned to accommodate the remedial piles. DeWitt does not argue, however, that the remedial design was qualitatively worse than the original. Because DeWitt does not allege physical injury apart from the defective piles themselves, there is no issue of material fact in dispute. We affirm the district court's grant of summary judgment on coverage for the alleged property damage to the construction site by the "impaling." 
14 We turn next to whether the alleged damage to the work of other subcontractors, which had to be removed and destroyed as a result of DeWitt's installation of defective piles, is property damage within the scope of the policies. We find that it is. In Baugh, we applied Washington law and found property damage to tenant improvements when those improvements had to be removed as a result of the installation of defective concrete panels in a building. 836 F.2d at 1170. Similarly, Opus had to hire a demolition subcontractor to tear out pile-caps that had been installed over the defective piles because they were no longer useful. Baugh controls our conclusion that there was property damage to the extent subcontractors' work had to be removed and destroyed. 15 We also find that the alleged damage to the buried mechanical and site work caused by DeWitt's movement of heavy equipment was "physical injury to tangible property" and thus constituted property damage within the scope of the policies. C. Applicability of Exclusions 16 Because we have found that DeWitt has proven property damages within the scope of the policies to the other subcontractors' work that was torn out or otherwise destroyed and to the other subcontractors' work that was damaged by operation of DeWitt's equipment, we next analyze whether any exclusion under the policies nevertheless bars coverage. Travelers bears the burden of proving that property damages that fall within the scope of the policy are excluded from coverage under the two policies purchased by DeWitt. See, e.g., Am. Star Ins. Co. v. Grice, 121 Wash.2d 869, 854 P.2d 622, 625-26 (1993) (insurer bears the burden of proving that a loss is not covered because of an exclusionary provision). Exclusions are strictly construed against the insurer because they are contrary to the protective purpose of insurance. See Diamaco, Inc. v. Aetna Cas. & Sur. Co., 97 Wash.App. 335, 983 P.2d 707, 711 (1999). 1. 
Damage to Other Subcontractors' Work Performed on Defective Piles 17 The "impaired property" exclusion does not bar coverage for property damage to the destroyed work that other subcontractors had performed on the defective piles. The impaired property exclusion, as stated in the policies, only applies "if [the impaired property] can be restored to use by: a) the repair, replacement, adjustment or removal of `[the insured's] product' or `[the insured's] work'; or b) [the insured] fulfilling the terms of the contract or agreement." DeWitt's installation of additional piles did not "restore to use" the work of other subcontractors. The other subcontractors' work (e.g., the pile caps) was removed from the defective piles, destroyed in the removal process, and remained destroyed notwithstanding the subsequent remedial work by DeWitt. The destroyed work of other subcontractors was not merely impaired, nor was it restored to use. 18 The "course of operations" exclusion in the general liability policy bars coverage for damage to "that particular part of any property" on which DeWitt is "performing operations, if the `property damage' arises out of those operations." In addition, the exclusion bars coverage for damage to "that particular part of any property" that must be repaired or replaced because DeWitt's work "was incorrectly performed on it." 19 The sequence of the work performed by other subcontractors in relation to DeWitt's work precludes the applicability of the course of operations exclusion. DeWitt installed piles by drilling holes, filling them with concrete, and then inserting rebar cages into the concrete. After DeWitt completed these operations, DeWitt began work in another area. Only then did the other subcontractors perform work on the defective piles. 
Neither component of the course of operations exclusion bars coverage: DeWitt was not performing operations on the work of other subcontractors when the damage occurred, nor did DeWitt incorrectly perform operations on the work of other subcontractors because that work (e.g., the pile caps) did not even exist when DeWitt performed its operations. 20 The "care, custody, and control" exclusion in the umbrella policy bars coverage for property damage to "property in [DeWitt's] care, custody, or control." This exclusion does not bar coverage for damage to the work of other subcontractors that performed work on the defective piles. DeWitt was in control of the areas in which piles were being installed only while operations were being performed in those areas. Once DeWitt finished installing a pile, DeWitt did not retain control over those site areas.5 To find otherwise would incorrectly impute control for a particular piece of property for the duration of the construction project as soon as a subcontractor performs any operation on that area, even if only for a limited time. Such a broad reading of the care, custody, and control exclusion would be inconsistent with the controlling principle of Washington law that exclusions should be narrowly construed and read in favor of the insured. See Diamaco, Inc., 983 P.2d at 711. 21 Because there is no policy exclusion that specifically bars coverage for the property damage to the work that other subcontractors performed on the defective piles, we reverse the district court's grant of summary judgment to Travelers on coverage of this claim.6 2. Damage to Buried Mechanical and Site Work 22 The impaired property exclusion does not bar coverage for the damage to buried mechanical and site work that was crushed by the movement of heavy equipment. That work was not restored to use through remedial steps taken by DeWitt; on the contrary, this property damage occurred when DeWitt was attempting to redress the initial mistake. 
23 Whether the course of operations exclusion applies to damage to buried mechanical and site work cannot be decided on summary judgment at this time because there is a factual dispute as to whether the damaged work was on "that particular part" of the property on which DeWitt was performing operations. DeWitt argues that the damage occurred while driving heavy equipment en route to the particular part of the site where the remedial piles were being installed. Travelers, however, argues that the damage occurred when DeWitt moved heavy equipment in the particular area on which DeWitt was performing operations. The record is not instructive. Because there is a genuine issue of material fact, and because on summary judgment we view the facts in the light most favorable to the nonmoving party, we conclude that the course of operations exclusion does not bar coverage for the damage to buried site and mechanical work as a matter of law based on the current record. We therefore reverse in part the district court's grant of summary judgment on coverage and remand to the district court for further factual determinations and proceedings on the applicability of the course of operations exclusion to the damage to buried site and mechanical work. 24 As with the course of operations exclusion, there is also a factual dispute involving the applicability of the care, custody, and control exclusion. DeWitt contends that it only had control over the specific locations where it was actively installing piles. There is a question of fact regarding the proximity of the areas where the buried mechanical and site work was damaged to the areas where DeWitt was actively installing additional piles. Because there remains a genuine issue of material fact as to the applicability of the care, custody, and control exclusion, we reverse the district court's grant of summary judgment on coverage of this claim and remand for further factual determination. D. 
Consequential Damages 25 The insurance policies at issue here provide for indemnification of the insured for "those sums that the insured becomes legally obligated to pay as damages because of ... `property damage' to which the insurance applies." In construing similar language, the Washington Court of Appeals in Marley Orchard determined that the policy allowed for consequential damages. 750 P.2d at 1297. The plaintiff in Marley was allowed to recover for expenditures reasonably made in an effort to avoid or minimize damages. Id. See also Gen. Ins. Co. of Am. v. Gauger, 13 Wash.App. 928, 538 P.2d 563, 566 (1975) (finding that once the definition of property damage is satisfied, "any and all damages flowing therefrom and not expressly excluded from the policy are covered"). 26 Because the policies cover consequential damages, the district court correctly noted that even though intangible economic injury does not constitute property damage under the policy, "intangible economic injuries may result from physical injury to tangible property." We remand to the district court to determine the consequential damages (e.g., delay costs), if any, that flowed from property damage to the work of other subcontractors. Also, if the factfinder concludes that the property damage to buried mechanical and site work is not barred by any of the exclusions, then DeWitt is entitled to recover for delay costs that flowed directly from those damages. We express no view on these issues, which are properly within the domain of the district court in its further proceedings. II. Duty to Defend 27 Under Washington law, the duty to defend and the duty to indemnify are separate obligations, and the duty to defend is broader than the duty to indemnify. Baugh, 836 F.2d at 1168. "The duty to defend arises whenever a lawsuit is filed against the insured alleging facts and circumstances arguably covered by the policy. The duty to defend is one of the main benefits of the insurance contract." Kirk v. 
Mt. Airy Ins. Co., 134 Wash.2d 558, 951 P.2d 1124, 1126 (1998). To determine whether a duty to defend exists, we examine whether the allegations for coverage are conceivably within the terms of the policy. Hayden v. Mut. of Enumclaw Ins. Co., 141 Wash.2d 55, 1 P.3d 1167, 1172 (2000). Then we determine whether an exclusion clearly and unambiguously bars coverage. Id. 28 Because we have concluded that at least some of the claims tendered to Travelers by DeWitt involve property damage within the scope of the policies that is not clearly excluded from coverage, Travelers did have a duty to defend. As explained below, that duty was triggered by the filing of the arbitration demand. We affirm the district court's grant of summary judgment on the duty to defend.7 29 When, as here, an insurer breaches its duty to defend, recoverable damages for the insured include: "(1) the amount of expenses, including reasonable attorney fees the insured incurred defending the underlying action, and (2) the amount of the judgment entered against the insured." Kirk, 951 P.2d at 1126. See also Waite v. Aetna Cas. and Sur. Co., 77 Wash.2d 850, 467 P.2d 847, 851 (1970) (an insurer who wrongfully refuses to defend "will be required to pay the judgment or settlement to the extent of its policy limits" and reimburse the defense costs) (emphasis added). 30 Because we have determined that the policies do cover property damage to the subcontractors' work on the defective piles, and because the factfinder may conclude that the buried mechanical and site work is also covered by the policies, on remand the district court should consider (1) the portion of a reasonable settlement, if any, that can fairly be said to be related to the covered property damage; and (2) whether any such portion is recoverable as damages for breach of duty to defend.8 31 The district court erred by calculating attorneys' fees and costs from the date that DeWitt tendered Opus's claim to Travelers, February 16, 2000. 
The duty to defend is triggered by a "suit." See, e.g., Griffin v. Allstate Ins. Co., 108 Wash.App. 133, 29 P.3d 777, 781 (2001) (noting that in Washington the duty to defend arises upon the filing of a complaint). Because the policies include arbitration proceedings within the definition of "suit," the duty to defend was triggered on the date the arbitration demand was filed, March 24, 2000. Therefore, the award of attorneys' fees and costs should be calculated from March 24, 2000. We reverse and remand to the district court to properly determine the attorneys' fees and costs. III. Extra-contractual Claims 32 To establish the tort of bad faith in the insurance context, the insured must show that the insurer's actions were "unreasonable, frivolous, or unfounded." Kirk, 951 P.2d at 1126. "Bad faith will not be found where a denial of coverage or failure to provide a defense is based upon a reasonable interpretation of the insurance policy." Id. Here, Travelers's duty to defend was not unambiguous. The arbitration demand was vague as to the nature of the damages giving rise to the claims, referencing "additional material damages" instead of noting specific property damage. The policy coverage was unclear in light of legitimate factual and legal issues pertinent to contract interpretation and application. We affirm the district court's grant of summary judgment dismissing DeWitt's bad faith claim. 33 DeWitt also appeals the district court's grant of summary judgment of its claim under the Washington Consumer Protection Act (CPA), RCW 19.86.010 et seq. Under the CPA, DeWitt must demonstrate (1) an unfair or deceptive act or practice (2) occurring in trade or commerce (3) that impacts the public interest (4) causing an injury to the plaintiff's business or property with (5) a causal link between the unfair or deceptive act and the injury suffered. Indus. Indem. Co. of the Northwest, Inc. v. Kallevig, 114 Wash.2d 907, 792 P.2d 520, 528 (1990). 
The first part of this analysis is closely related to the bad faith standard that we have already held was not satisfied by DeWitt. For essentially the same reasons that we conclude the district court appropriately dismissed the claim for bad faith, the district court appropriately dismissed the CPA claims against the insurer. We affirm the district court's grant of summary judgment for the CPA claims. Conclusion 34 We affirm in part and reverse in part the district court's grant of summary judgment on coverage. Specifically, and as explained above, we affirm denial of coverage on the alleged damage to site from defective piles; we reverse denial of coverage on the subcontractors' work that was destroyed because of the defective piles (and direct the district court to enter a partial summary judgment to DeWitt on this issue); and we remand for further proceedings and factual determinations pertinent to application of the course of operations and of the care, custody, and control exclusions, as they may relate to the damage to buried subcontractors' work caused by DeWitt's movement of equipment to install new piles. We affirm the grant of summary judgment to the insured and against the insurer for breach of the duty to defend. On that issue, we remand for a recalculation of attorneys' fees and costs subsequent to the arbitration demand, and for consideration whether there are other recoverable damages for breach of the duty to defend as to any portion of the settlement between DeWitt and Opus that reflects covered property damage. We finally affirm the grant of summary judgment to the insurer rejecting the bad faith and CPA claims because the insurer's position was not unreasonable, frivolous or unfounded. Both parties shall bear their own costs for appeal. 35 AFFIRMED IN PART AND REVERSED IN PART, AND REMANDED for further proceedings consistent with this disposition. Notes: * The Honorable James C.
Hill, Senior United States Circuit Judge for the Eleventh Circuit Court of Appeals, sitting by designation 1 Because Charter Oak Fire Insurance Company and Travelers Indemnity Company of America are subsidiaries of Travelers Property Casualty Company, the defendants are hereby collectively referred to as "Travelers" 2 DeWitt has entered into a settlement agreement with the general contractor, Opus Northwest LLC ("Opus"). Opus has agreed not to file a judgment implementing the settlement pending the final outcome of this litigation between DeWitt and Travelers 3 DeWitt did not argue that any of Opus's claims fall within the "loss of use" definition of property damage, and we do not address that issue on appeal 4 In this case, even a showing of diminished value of the site would be insufficient to show property damage. Property damage under this policy requires "physical injury," whereas the policy in Yakima Cement only required "injury." 5 There is no indication in the record that DeWitt had supervisory control over the subcontractors who performed work on the piles after DeWitt had concluded its own operations 6 Because there are no fact issues pertinent to coverage for the destroyed work of other subcontractors that attached to the defective piles, we direct the district court on remand to give partial summary judgment to DeWitt on this issue. Cf. Bird v. Glacier Elec. Coop., Inc., 255 F.3d 1136, 1152 (9th Cir.2001) (courts may sua sponte grant summary judgment to a nonmovant when there has been a summary judgment motion by one party and no cross-motion). 7 Because we have determined that the arbitration demand alleged claims covered by the policies, we do not need to reach, and therefore do not decide, the issue of whether, under any theory including that relied upon by the district court, Travelers would have had a duty to defend even if coverage ultimately had been barred on summary judgment.
More specifically, we need not decide and therefore express no view whether the district court's finding of a duty to defend absent a coverage determination for an interim period, before the coverage decision was made by the insurer, was correct 8 We reject DeWitt's argument that Washington law permits no allocation of settlement if the insurer breaches the duty to defend. First, the Washington Supreme Court, in Kirk v. Mt. Airy Ins. Co., 951 P.2d 1124, 1128 (Wash.1998), held that when an insurer breaches the duty to defend in bad faith, the insurer is estopped from asserting that alleged claims are outside the scope of coverage. Absent bad faith, the insurer is "liable for the judgment entered provided that the act creating liability is a covered event and provided the amount of the judgment is within the limits of the policy." Id. at 1126 (emphasis added). Because there was no bad faith here, see infra Section III, allocation is appropriate. To conclude otherwise would be to afford the same remedy in cases where the insurer has breached the duty to defend in good faith as in cases where such breach was in bad faith. Second, DeWitt's partial reliance on Nautilus v. Transamerica Title Ins. Co., 534 P.2d 1388, 1393 (Wash.App.1975), is misplaced because there the court rejected allocation where there was one claim and there were several legal theories of recovery. Here we have several claims; coverage of one claim does not automatically bring the others into the scope of the policy absent bad faith. HILL, Circuit Judge, concurring: 36 Were we writing on the proverbial clean slate, I should be in serious doubt that the failure of the insured, DeWitt, to have performed its contracted work properly constituted an occurrence under the commercial liability policies. It seems to me that this goes far towards substituting general liability coverage for a performance guarantee underwritten by an insurance company.
37 However, evaluation of such doubt is not necessary in this case. We are dealing with a state law case, and the Yakima Cement Products Co. case, cited in the opinion, is a clear statement by the highest court of the state that, in Washington, such a failure is an occurrence. I, therefore, concur.
Hotels and Resorts
HODELPA all-inclusive
The Hodelpa chain has four hotels in the Dominican Republic: one is located in the Colonial Zone of Santo Domingo, and the other three are in Santiago de los Caballeros. These are city hotels aimed at business travelers and businesspeople, but they also welcome tourists visiting the Dominican Republic. Hodelpa Garden Court enjoys a very strategic location close to the business areas, the free trade zone and attractions, and is only minutes away from the historical downtown and cultural hub of the city of Santiago.
Enzymatic mechanism of an RNA ligase from wheat germ. We have characterized the mechanism of action of a wheat germ RNA ligase which has been partially purified on the basis of its ability to participate in in vitro splicing of yeast tRNA precursors (Gegenheimer, P., Gabius, H-J., Peebles, C.L., and Abelson, J. (1983) J. Biol. Chem. 258, 8365-8373). The preparation catalyzes the ligation of oligoribonucleotide substrates forming a 2'-phosphomonoester, 3',5'-phosphodiester linkage. The 5' terminus of an RNA substrate can have either a 5'-hydroxyl or a 5'-phosphate. The 5'-phosphate, which for a 5'-hydroxyl substrate can be introduced by a polynucleotide kinase activity in the preparation, is incorporated into the ligated junction. The 3' terminus can have either a 2',3'-cyclic phosphate or a 2'-phosphate. 2',3'-Cyclic phosphates can be converted into 2'-phosphates by a 2',3'-cyclic phosphate, 3'-phosphodiesterase activity in the preparation. The 2'-phosphate of the ligated product is derived from the phosphate at the 3' terminus of the substrate. Ligation proceeds with the adenylylation of the 5'-phosphorylated terminus to form an intermediate with a 5',5'-phosphoanhydride bond.
A tariff on guitars bundled with the video game Rocksmith is striking the wrong chord with Ubisoft Canada Inc. The software developer, which has studios in Montreal, Toronto, and other cities worldwide, is appealing a decision by the Canada Border Services Agency (CBSA) to classify guitars bundled with copies of its instructional video game Rocksmith as musical instruments. Still following? While the guitars — Les Paul Junior models, manufactured by Epiphone — are indeed real musical instruments, Ubisoft's argument is that because the guitars are bundled and sold with video game software, they are eligible for alternate classification under "tariff item No. 9948." That tariff allows "articles for use in video games used with a television receiver, and other electronic games" to be sold duty free. Imported musical instruments and accessories, however, are typically subject to a 6% fee under tariff rules.
Ellen DeGeneres' reaction to the country star's "tiny snake" is priceless. Enjoy Blake Shelton keeping the audience entertained while on 'Ellen' here! Blake Shelton is now a children’s book author. The country star and The Voice coach debuted his book titled “Blake and His Tiny, Little Snake” while visiting The Ellen DeGeneres Show. “Chapter one. There once was a man named Blake. Blake was a very tall and handsome man. But Blake had a very tiny little snake,” Shelton read aloud to the audience. Watch Ellen’s priceless reaction to Blake’s new project below. Okay, thankfully the book is a total fake — but the gifts Ellen and Blake gave out to the excited audience are totally real! Later in the episode, Blake and Ellen invited two sisters to play a new game called “Wet Head.” Watch the hilarity ensue… As can be expected, when Blake and Ellen get together, laughter is abundant. Share these silly segments with other Blake Shelton fans!
DECEMBER 11, 2008

IN THIS ISSUE

1. Graduation and Baccalaureate Mass are this weekend

More than 450 Marquette University graduates will be recognized at the mid-year commencement on Sunday, Dec. 14, at the U.S. Cellular Arena. The program will begin at 9:30 a.m. and will feature a keynote address by Dr. Steven Goldzwig, professor of communication studies in the J. William and Mary Diederich College of Communication, as well as remarks from Marquette President Robert A. Wild, S.J., and student speaker Emily Rodier, Helen Way Klingler College of Arts and Sciences. Graduation events will also include a Baccalaureate Mass at 7:30 p.m. Saturday, Dec. 13, at Gesu Church, celebrated by Father Wild and other members of the Marquette Jesuit community.

2. Board of Trustees approves new biology major

The Marquette University Board of Trustees Wednesday approved a new major, Biology for the Professions, specifically designed for middle and secondary education majors in the College of Education. The new major, which becomes available in fall 2009, addresses the need for middle and high school science teachers locally and nationally and meets the content standards required for teacher licensure by the Wisconsin Department of Public Instruction. The Biology for the Professions major will require six three-credit biological sciences courses, plus an additional biology lab course and nine credits of electives in biological sciences. Students will also be required to take chemistry, mathematics and physics courses. The trustees also approved the elimination of majors in human biology, social sciences, broad field social sciences, broad field social sciences major in history, broad field social sciences major in political science, broad field social sciences major in sociology and interdisciplinary major in social sciences. Most of the majors eliminated have not been offered in recent years.
At the graduate level, the trustees approved the termination of the Master of Arts degree in Teaching (Spanish). The board also received information about new specializations in computational science at both the Ph.D. and master’s levels. The new specializations will replace discrete specializations in algebra, biomathematics, statistics and logic and foundation at the Ph.D. level and in computer science and mathematics at the master’s level. In addition, two graduate certificates in nursing — adult clinical nurse specialist and gerontologic nurse specialist — have previously been offered and are now formally recognized.

3. Freshman experience task forces start work in January

Eight task force groups are being formed to review the results of the freshman experience survey in which employees and students recently participated. The Office of the Provost and the Division of Student Affairs are working with the Foundations of Excellence project of the Policy Center on the First Year of College, Asheville, N.C., to develop an action plan for campus-wide improvement. Marquette’s faculty/staff survey response rate was 64.4 percent and the response rate for students was 56.3 percent. The task forces will begin their work in January 2009.

Marquette is one of 20 four-year institutions participating in a comprehensive, campus-wide self-assessment of the first-year college experience during the 2008-09 academic year. Task force chairs will work together to identify a comprehensive first-year vision. Deans, directors and other supervisors have recommended task force members. Others interested in serving on a task force, including students, should contact Tina Rodriguez, office associate, Office of the Provost, at 8-8030.

4. US Bank customers targeted for phishing scam

The US Bank on campus has received reports of cell phone users receiving text messages purporting to be from US Bank, asking the user to contact a number and give account information to reactivate an account.
These messages are a fraudulent attempt to access customer accounts and are being sent randomly to US Bank customers and non-US Bank customers to phish for account information. Do not use the number given by the e-mail or text message. Phishing is a common method used to commit fraud. According to US Bank:

- There has been no breach in security.
- US Bank will never request information by e-mail or by text message.
- Any request for financial or personal information should be verified by contacting the financial institution directly.

US Bank customers who have provided information as a result of these text messages should see a representative at the US Bank branch on campus for more information and to limit damage.

5. Public Safety offers vacant house watch over break

The Department of Public Safety encourages students residing in the nearby off-campus neighborhood to take advantage of its Vacant House Watch. Students can register their residences with Public Safety prior to leaving campus for Christmas break, and DPS officers will monitor the vacant residences during routine patrols. The information provided to Public Safety remains strictly confidential. Students must complete a form, which is available online, and return it to Public Safety by Monday, Dec. 22. Students concerned about leaving their vehicles unattended can also obtain free on-campus parking during the break. Students can register vehicles and obtain a parking pass with Parking Services after their last final exam. For more information, contact Parking Services at 8-6911.

6. Annex hosting men’s basketball viewing party

The Annex will host a viewing party for the men’s basketball game against Tennessee on Tuesday, Dec. 16, at 8:30 p.m. The first 100 fans will receive a commemorative T-shirt. Doors open at 5 p.m. All fans will have a chance to enter the “Best Seat in the House” contest to watch the game from a La-Z-Boy chair with a free Annex pizza and pitcher of soda.
There will also be opportunities to win Marquette gear at half time. Check the Annex’s Web site or Facebook site for more information.

News Briefs is published for Marquette students, faculty and staff every Monday and Thursday, except during summer and academic breaks when only the Monday edition is published. The deadline for the Monday edition is noon Friday. The deadline for the Thursday edition is noon Wednesday. Highest priority notices as determined by university leadership are also sent periodically.
The long-term goal of this project is the investigation of mechanisms of photosensitivity in patients who are abnormally sensitive to sunlight. A major aspect of the investigation is based on determining the action spectra; that is, identification of ultraviolet, visible or infrared radiation and the minimal dose of the radiation that will provoke changes in the skin. This is determined by irradiating the skin with narrow wavebands of measured intensity, obtained from high intensity xenon or xenon mercury lamps. Monochromatic light has been obtained with interference filters, but more recently a high intensity tandem prism-diffraction grating monochromator has been constructed which has been available for testing since January 1969. A knowledge of the action spectrum of a disease may lead to an understanding of its pathogenesis, or even biochemical identification of the photosensitizing substance. This hypothesis is based on the principle that radiant energy is only active after absorption, so that if the active wavelength is known, a search can be made among substances which are known to absorb it. If the photosensitizing substance can be excited to produce autofluorescence, additional information can be obtained by adapting the diffraction grating monochromator to form a microfluorospectrophotometer. The biochemical and biological research into the cause of photosensitivity includes derangements of porphyrin synthesis in human and experimental porphyrias and the effect of ultraviolet radiation on DNA synthesis; more recently the investigator has observed that T lymphocytes are more ultraviolet sensitive than B lymphocytes. It is proposed to continue to study the practical application of T lymphocyte killing in various experimental models including inhibition of delayed hypersensitivity and graft versus host reactions.
FIG. 1A illustrates a prior art apparatus to feed livestock. Apparatus 100 comprises cab portion 102 and trailer assembly 105. In certain embodiments, cab portion 102 and trailer assembly 105 comprise an integral manufacture. Trailer assembly 105 comprises feed container 110 and delivery assembly 120 disposed therein. Referring now to FIGS. 1A and 1B, feed 150 is disposed in feed container 110 and is gravity fed into delivery assembly 120. In the illustrated embodiment of FIGS. 1A and 1B, delivery assembly 120 comprises a first auger 130 and a second auger 140. In other embodiments, delivery assembly 120 may comprise a single auger. In still other embodiments, delivery assembly 120 comprises more than two augers. In certain embodiments, multiple augers may operate in a counter-rotating fashion. Referring now to FIGS. 1C and 1D, apparatus 100 is disposed adjacent to a livestock feeding site and positioned such that trailer assembly 105 is disposed adjacent to feed bunk 170. Side 190 of trailer assembly 105 is formed to include aperture 180. Delivery assembly 120 is energized, and feed 150 is transferred from feed container 110, through aperture 180, across chute 160 and into feed bunk 170. In some instances, the size of aperture 180 is adjustable by means of an operable door to regulate the flow of feed 150. The prior art apparatus of FIGS. 1A, 1B, 1C, and 1D can deliver the same feed formulation to a plurality of feeding locations. However, different formulations cannot be delivered to different locations with the same load of feed 150.
The Flint water crisis and its health consequences The water crisis that gripped the city of Flint, Michigan, in 2014 and 2015—and which is still felt to the present day—became one of the most notorious and scandalous public health disasters in recent United States history. The immediate cause was the contamination of the municipal water supply with toxic lead and dangerous bacteria, but the true cause is widely considered to be colossal mismanagement and unsound cost-cutting measures imposed on the city. Compounding the scandal is the fact that the population of Flint is disproportionately poor and African-American, suggesting to many critics that such mismanagement might not have occurred in a place with a wealthier, whiter population. Although the most acute health consequences of the crisis may be over, the long-term effects, particularly from the lead exposure, may take years to emerge. See also: Environmental toxicology; Lead; Water supply engineering Water can be contaminated by chemicals or other foreign substances that are detrimental to human health. The water crisis in Flint, Michigan, involved the contamination of the municipal water supply with toxic lead and dangerous bacteria. (Credit: National Institute of Environmental Health Sciences, National Institutes of Health) According to the U.S. Census Bureau, Flint is a city of almost 100,000 people where more than 40 percent live below the poverty line. The General Motors automobile factory located in Flint was for many years the company’s largest, but the weakness of the American auto industry caused a decline in the city’s fortunes starting in the 1980s. Because of Flint’s poor finances, it was put into receivership and managed by state officials between 2011 and 2015. One of the decisions made during this time was to reduce the city’s water bill by switching suppliers from the Detroit Water and Sewerage Department (DWSD) to the Karegnondi Water Authority, via a new tunnel to be bored to Lake Huron. 
Flint stopped obtaining water from the DWSD in April 2014, but because the new Lake Huron supply was not yet available, the city adopted an interim measure of drawing water from the nearby Flint River. Although the Flint River had served as the city’s backup water source for many years and had been the primary source of water early in the 20th century, it had historically suffered from poor quality, especially in more recent decades. Runoff in the Flint River watershed contained a variety of pollutants, including industrial compounds, pesticides, fertilizers, and fecal bacteria. Those problems, which officials seem to have addressed inadequately, soon translated into a variety of health threats for Flint’s citizens. See also: Lake; River; Water-borne disease; Water resources Shortly after the switchover to Flint River water, city residents began to complain that their water had an unpleasant smell, taste, and orange discoloration. In August 2014, tests detected fecal coliform bacteria (E. coli) in the water reaching some Flint neighborhoods, leading to the first of two temporary advisories that the water be boiled before use. Intestinal E. coli infections can cause severe diarrhea and dehydration, with complications such as kidney failure, and they are most dangerous for children, pregnant women, and people with weakened immune systems. In response to the coliform bacteria reports, Flint officials increased the amount of antibacterial chlorine used to treat the water. See also: Escherichia; Escherichia coli outbreaks; Water treatment The levels of chlorine and other corrosive compounds in the Flint River water caused further problems. By October 2014, General Motors had stopped using Flint’s municipal water out of concern that it corroded sensitive engine parts. More significantly, the corrosiveness of the water leached lead out of old pipes used throughout the city. 
Lead poisoning poses serious and even fatal threats to the nervous system and kidneys at every age, but high lead levels are especially dangerous for children under the age of six, whose physical and mental development may be permanently stunted. The U.S. Environmental Protection Agency (EPA) sets a limit of 15 parts per billion for lead in drinking water, but because lead and other heavy metals accumulate within the body, no level of lead exposure is considered perfectly safe. In Flint, tests showed that the lead levels in the water in some homes were many times the EPA-recommended maximum. The extent of the lead contamination problem in Flint’s water at its worst is uncertain and controversial, but a preliminary report issued by a research team from Virginia Tech found that 40 percent of homes in Flint might have high lead levels. Researchers from the Hurley Medical Center announced in September 2015 that the number of children in Flint with excess blood lead levels had doubled since the switch to Flint River water, and that the levels had tripled in neighborhoods with the highest lead contamination in their water. The final version of an EPA report on testing at three Flint homes issued in November 2015 noted that “even with corrosion control treatment in place in the future, physical disturbances will be capable of dislodging the high lead-bearing scale and sediment from non-lead pipes as well as lead pipes.” See also: Childhood lead exposure and lead toxicity; Chlorine; Corrosion; Poison Health officials noted that the number of cases of Legionnaires' disease, caused by Legionella bacteria, around Flint rose atypically during 2014 and 2015, claiming 10 lives. The Michigan Department of Health and Human Services maintains that no clear connection between this outbreak and the Flint water crisis has been established, but the family of one of the Legionella victims has filed a $100 million lawsuit against Michigan Department of Environmental Quality officials and a regional medical center.
See also: Legionnaires' disease; Public health What makes the Flint water crisis a scandal is the evidence that mismanagement by city and state officials was ultimately to blame for most of the problems and health consequences suffered by the citizens. The final report of the Flint Water Advisory Task Force, released March 23, 2016, described the crisis as “a story of government failure, intransigence, unpreparedness, delay, inaction, and environmental injustice.” It faulted Michigan state agencies for failing to enforce drinking water regulations in Flint, failing to take appropriate measures to prevent the foreseeable pipe corrosion problems, and discrediting efforts to bring attention to the early evidence of tainted water and lead contamination. It also blamed other public officials, including the state governor, for not reversing the bad decisions made in the management of the crisis more promptly. Criminal charges are pending against more than a dozen state and local officials, including multiple felony charges against some of the state-appointed emergency managers. Flint returned to drawing water from its Detroit source in October 2015. Because of damage to corroded plumbing, some residents still filter their water to remove impurities or use bottled water for drinking, cooking, and routine hygiene. Recent tests of the water in Flint homes generally find that the quality is well within federal safety standards, but the trust of Flint’s citizens in their water has been slow to rebound.
Millrace Rapids

Millrace Rapids is a popular kayaking playspot, located on the Lower Saluda River in Columbia, South Carolina.

History

The rapids are a result of the Saluda River running over the remains of a twice-dynamited coffer dam.

Features

- Blast-O-Matic
- Dumbass Hole
- Cookie Monster
- Square Eddy
- Fisherman's Rock
- Pop-up Hole

Events and Competitions

- Millrace Massacre and Iceman Competition

Safety Concerns

Because the lower Saluda is flow-controlled by a hydroelectric dam (operated by SCANA), the water level can change rapidly. There is a warning system in place that consists of a series of sirens and strategically placed markers.

External links

- American Whitewater page for the Lower Saluda River
- SCE&G's page for the Lower Saluda River.

Category:Canoeing and kayaking venues in the United States
Category:Geography of Columbia, South Carolina
Category:Bodies of water of South Carolina
Category:Tourist attractions in Columbia, South Carolina
Category:Rapids of the United States
/**
 * \file axesstmc13readerunit.hpp
 * \author Maxime C. <maxime-dev@islog.com>
 * \brief AxessTMC13 Reader unit.
 */

#ifndef LOGICALACCESS_AXESSTMC13READERUNIT_HPP
#define LOGICALACCESS_AXESSTMC13READERUNIT_HPP

#include <logicalaccess/readerproviders/readerunit.hpp>
#include <logicalaccess/readerproviders/serialportxml.hpp>
#include <logicalaccess/plugins/readers/axesstmc13/axesstmc13readerunitconfiguration.hpp>
#include <logicalaccess/plugins/readers/axesstmc13/lla_readers_axesstmc13_api.hpp>

namespace logicalaccess
{
class Profile;
class AxessTMC13ReaderCardAdapter;
class AxessTMC13ReaderProvider;

/**
 * \brief The AxessTMC13 reader unit class.
 */
class LLA_READERS_AXESSTMC13_API AxessTMC13ReaderUnit : public ReaderUnit
{
  public:
    /**
     * \brief Constructor.
     */
    AxessTMC13ReaderUnit();

    /**
     * \brief Destructor.
     */
    virtual ~AxessTMC13ReaderUnit();

    /**
     * \brief Get the reader unit name.
     * \return The reader unit name.
     */
    std::string getName() const override;

    /**
     * \brief Get the connected reader unit name.
     * \return The connected reader unit name.
     */
    std::string getConnectedName() override;

    /**
     * \brief Set the card type.
     * \param cardType The card type.
     */
    void setCardType(std::string cardType) override;

    /**
     * \brief Wait for a card insertion.
     * \param maxwait The maximum time to wait for, in milliseconds. If maxwait is
     * zero, then the call never times out.
     * \return True if a card was inserted, false otherwise. If a card was inserted,
     * the name of the reader on which the insertion was detected is accessible with
     * getReader().
     * \warning If the card is already connected, then the method always fails.
     */
    bool waitInsertion(unsigned int maxwait) override;

    /**
     * \brief Wait for a card removal.
     * \param maxwait The maximum time to wait for, in milliseconds. If maxwait is
     * zero, then the call never times out.
     * \return True if a card was removed, false otherwise. If a card was removed,
     * the name of the reader on which the removal was detected is accessible with
     * getReader().
     */
    bool waitRemoval(unsigned int maxwait) override;

    /**
     * \brief Create the chip object from the card type.
     * \param type The card type.
     * \return The chip.
     */
    std::shared_ptr<Chip> createChip(std::string type) override;

    /**
     * \brief Get the first and/or most accurate chip found.
     * \return The single chip.
     */
    std::shared_ptr<Chip> getSingleChip() override;

    /**
     * \brief Get the chips available in RFID range.
     * \return The chip list.
     */
    std::vector<std::shared_ptr<Chip>> getChipList() override;

    /**
     * \brief Get the reader ping command.
     * \return The ping command.
     */
    ByteVector getPingCommand() const override;

    /**
     * \brief Get the current chip in air.
     * \return The chip in air.
     */
    std::shared_ptr<Chip> getChipInAir();

    /**
     * \brief Get the default AxessTMC13 reader/card adapter.
     * \return The default AxessTMC13 reader/card adapter.
     */
    virtual std::shared_ptr<AxessTMC13ReaderCardAdapter>
    getDefaultAxessTMC13ReaderCardAdapter();

    /**
     * \brief Connect to the card.
     * \return True if the card was connected without error, false otherwise.
     *
     * If the card handle was already connected, connect() first calls disconnect().
     * If you intend to do a reconnection, call reconnect() instead.
     */
    bool connect() override;

    /**
     * \brief Disconnect from the reader.
     * \see connect
     *
     * Calling this method on a disconnected reader has no effect.
     */
    void disconnect() override;

    /**
     * \brief Check if the card is connected.
     * \return True if the card is connected, false otherwise.
     */
    bool isConnected() override;

    /**
     * \brief Connect to the reader. Implicit connection on the first command sent.
     * \return True if the connection succeeded.
     */
    bool connectToReader() override;

    /**
     * \brief Disconnect from the reader.
     */
    void disconnectFromReader() override;

    /**
     * \brief Get a hexadecimal string representation of the reader serial number.
     * \return The reader serial number, or an empty string on error.
     */
    std::string getReaderSerialNumber() override;

    /**
     * \brief Serialize the current object to XML.
     * \param parentNode The parent node.
     */
    void serialize(boost::property_tree::ptree &parentNode) override;

    /**
     * \brief UnSerialize a XML node to the current object.
     * \param node The XML node.
     */
    void unSerialize(boost::property_tree::ptree &node) override;

    /**
     * \brief Retrieve the reader identifier.
     * \return True on success, false otherwise. False means either that the device
     * connected to the COM port is not a TMC reader, or that the COM port is
     * inaccessible.
     */
    bool retrieveReaderIdentifier();

    /**
     * \brief Get the TMC reader identifier.
     * \return The TMC reader identifier.
     */
    ByteVector getTMCIdentifier() const;

    /**
     * \brief Get the AxessTMC13 reader unit configuration.
     * \return The AxessTMC13 reader unit configuration.
     */
    std::shared_ptr<AxessTMC13ReaderUnitConfiguration> getAxessTMC13Configuration()
    {
        return std::dynamic_pointer_cast<AxessTMC13ReaderUnitConfiguration>(
            getConfiguration());
    }

    /**
     * \brief Get the AxessTMC13 reader provider.
     * \return The AxessTMC13 reader provider.
     */
    std::shared_ptr<AxessTMC13ReaderProvider> getAxessTMC13ReaderProvider() const;

  protected:
    /**
     * \brief The TMC reader identifier.
     */
    ByteVector d_tmcIdentifier;
};
}

#endif
Talking to the media outside Adiala Jail on Friday, PML-N leader Pervaiz Rashid said that the party is striving for the release of all workers. To a question why Nawaz Sharif refused a meeting with Ch Nisar, the former minister said that only Ch Nisar could tell about it. “The people have rejected Ch Nisar in the election results,” he said. He said that no decision has yet been made on the presidential candidate. “However, consultations with allied parties are going on in this regard.” Pervaiz Rashid was of the view that it will be good if the opposition agrees on a candidate. When asked whether the PML-N will support Aitzaz Ahsan as the presidential candidate, he said that they will consider his name if he comes to Adiala Jail and seeks an apology from Nawaz Sharif for his words.
Q: Find the adjoint operator of $T(x_1,x_2,x_3,\dots)=(\sum_{n=2}^\infty x_n, x_1, x_2, x_3, \dots)$ in $\ell^1$.

Find the adjoint operator of $T(x_1,x_2,x_3,\dots)=(\sum_{n=2}^\infty x_n, x_1, x_2, x_3, \dots)$ in $\ell^1$. I know that the dual space of $\ell^1$ is $\ell^\infty$ and thus $T^*$ should map from $\ell^\infty$ to $\ell^\infty$. Then I try to see what $T^*$ does to any $a \in \ell^\infty$. $T^*a=aT$ by the definition of adjoint operators in Banach spaces. However, what I want to know is the element to which $T^*$ maps $a$. I don't know how to play with it to find the adjoint.

A: Let $x\in \ell^1, y \in \ell^\infty$, then
$$\langle Tx, y \rangle = \langle (\sum_{n=2}^\infty x_n, x_1, x_2, \cdots ), (y_1,y_2,\cdots) \rangle $$
$$ = y_1 \sum_{n=2}^\infty x_n + x_1 y_2 + \cdots$$
$$ = x_1 y_2 + x_2(y_1+y_3) + x_3 (y_1 + y_4) + \cdots$$
$$ = \langle (x_1 , x_2, \cdots), (y_2, y_1+y_3 , y_1+y_4 , \cdots )\rangle$$
Hence
$$T^*(y_1, y_2, y_3, \cdots) = (y_2, y_1+y_3, y_1+y_4, \cdots)$$
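As a quick numerical sanity check of the formula above (a sketch, not part of the proof — the helper names `T`, `T_star`, and `dot` are ad hoc), one can verify the defining identity $\langle Tx, y\rangle = \langle x, T^*y\rangle$ on finitely supported sequences. For $x$ with $N$ nonzero entries, $Tx$ has $N+1$, so we pair it against a $y$ of length $N+1$; only the first $N$ components of $T^*y$ matter when pairing back against $x$.

```python
def T(x):
    # T(x1, x2, ...) = (sum_{n>=2} x_n, x1, x2, ...); for finitely
    # supported x of length N, the image has N+1 nonzero entries.
    return [sum(x[1:])] + list(x)

def T_star(y):
    # T*(y1, y2, y3, ...) = (y2, y1+y3, y1+y4, ...); we return only the
    # first len(y)-1 components, the ones that pair against a length-N x.
    return [y[1]] + [y[0] + v for v in y[2:]]

def dot(a, b):
    # Finite duality pairing <a, b> = sum a_k * b_k.
    return sum(p * q for p, q in zip(a, b))

x = [0.5, -1.0, 2.0, 0.25]        # finitely supported element of l^1
y = [1.0, -0.5, 3.0, 0.75, -2.0]  # bounded sequence (l^infinity), length N+1

# The adjoint identity <Tx, y> = <x, T*y> should hold exactly
# (up to floating-point error) on these truncations.
assert abs(dot(T(x), y) - dot(x, T_star(y))) < 1e-12
```

Running this for a few random `x` and `y` is a cheap way to catch index slips in the derivation before trusting the closed-form answer.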
Targeted therapy in metastatic renal carcinoma. Advanced renal cell carcinoma is one of the malignancies most resistant to conventional cytotoxic chemotherapy. The development of new targeted therapies was a result of understanding the biological pathways underlying renal cell carcinoma. Our objective is to provide an overview of current therapies in metastatic renal cell carcinoma. MEDLINE/PUBMED was queried in December 2012 to identify abstracts, original and review articles. The search was conducted using the following terms: "metastatic renal cell carcinoma" and "target therapy". Phase II and phase III clinical trials were included following FDA approval. A total of 40 studies were eligible for review. The results of this review show the benefit of these targeted drugs in reducing tumor burden, increasing progression-free and overall survival, and improving quality of life compared with previous, more toxic immunotherapy, although complete response remains rare.
Katy Birchall announces the winners of Country Life’s 2017 Gentleman of the Year Awards.

First place: Colin Firth

Charming, self-effacing and unfailingly polite, our 2017 Gentleman of the Year continues to navigate fame and success with modesty, good humour and courtesy. Often hailed as the quintessential on-screen gentleman – his name is synonymous with Mr Darcy’s and there is no one better suited for the role of a stylish, authoritative Kingsman spy – it is his off-screen reputation that has caught our attention. The recipient of glowing reviews from fellow cast members, make-up artists and set runners, he’s never been a fan of talking about himself and, thanks to a certain wet-shirt-in-the-lake scene, he’s spent the past 22 years sheepishly brushing aside his global heartthrob status. He’s devoted to his family (earlier this year, he applied for and was granted dual citizenship, so that he could have the same passports as his Italian wife and children) and he shows a level of commitment to charitable projects that is rare in the glitzy world of celebrity endorsement. As well as his dedication to Oxfam and Amnesty International, he also helped to launch Progreso, the world’s first chain of fair-trade coffee shops, and was consequently awarded several philanthropy and humanitarian awards – not that he’d ever mention any of those, of course. As if all that wasn’t enough to convince us he’s a worthy winner, when asked by a journalist to describe the modern gentleman, he replied that he doesn’t consider himself to have any authority on the matter, but that he thinks it’s something to do with kindness. Mr Darcy, who?

Second place: John Timpson

A remarkable, no-nonsense businessman, this cobbler knows exactly what it means to put yourself in someone else’s shoes. Our judges were bowled over by his extraordinary generosity and empathetic nature, qualities that have gained the Timpson chief executive a reputation as a paternalistic employer.
His ‘Dreams Come True’ programme rewards an employee each month and has paid for operations and weddings, as well as recently flying a staff member out to Barbados, where she was reunited with her father after 13 years apart. A pioneer of giving people that crucial second chance, his policy of hiring directly from prisons means former offenders make up 10% of his workforce and, along with his late wife, Alex, he has fostered more than 90 children and adopted two. An immensely charming man who never loses his temper, he may be in the business of mending broken soles, but he won’t give up on lost souls, either, landing him firmly in our runner-up spot.

Third place: Jonathan Agnew

As comforting and familiar as your favourite armchair on a lazy Sunday afternoon, the BBC cricket correspondent has been so successful at charming the nation from the commentary box that it’s easy to forget he once played for England himself. Through his warmth, wit and extensive knowledge, Aggers has become a source of national pride, scoring legions of fans who loyally tune in to Test Match Special (TMS) whether they’re interested in cricket or not – his humorous digressions about sunsets, spongecakes and kippers are like listening to an old friend rambling on in the pub. His endearing sense of fun and fondness for pranks often has listeners in stitches – the famous 1991 broadcast in which he and Brian Johnston dissolved into infectious laughter, following his comment that Ian Botham hit the stumps because he ‘didn’t quite get his leg over’, gave all of Britain the giggles. TMS would be lost without him and, quite frankly, so would we.

The runners-up

Rod Stewart

‘A gentleman should be able to laugh at himself,’ says judge Andrew Love and, in the new world of clean-cut YouTube popstars droning on about the benefits of steamed kale and organic hair gel, this rock icon injects a much-needed sense of fun.
Notoriously cheerful and kind at heart, he puts journalists at ease and is a devoted father to eight children. If in doubt, just ask his exes – he’s on good terms with all of them. Matt Baker Whether he’s talking to his enraptured audience from a stack of hay bales in muddied wellies or from a green studio sofa in a crisply ironed shirt, there is no denying the Countryfile and The One Show host’s good nature and captivating passion for what he does – his soaring ratings speak for themselves. To put it simply, he’s terribly nice. Nicholas Parsons He’s never missed an episode of Just a Minute and, at 94 years old, the charismatic presenter is still a class-act entertainer. He’s a stickler for manners and refuses to say a bad word about anyone, even his stalker – ‘it may be old-school, but I think everyone deserves respect,’ he has said. David Gandy Model, fashion designer, columnist, entrepreneur and philanthropist: is there anything he can’t do? Even if there is, he wouldn’t be afraid to try – a gentleman, he once said, understands that more is achieved through failure than success. Shy, well-mannered and hard-working, he sealed his place on our shortlist when he admitted that he likes cats, but dogs will always have the edge. David Dimbleby Coming out tops in our inaugural 2014 Gentleman of the Year Awards, the Question Time host and General Election presenter is a model of courtesy and a fountain of knowledge. Earlier this year, he proved a gentleman can be full of surprises when he enthusiastically performed an Eminem-inspired rap about Brexit. Nicholas Coleridge Despite a hectic schedule, the chairman of Condé Nast Britain and the V&A will make time for everyone, from editors to interns – there’s no whiff of pretentiousness here. Charismatic and mischievous, his thoughtful and amusing thank-you letters are second to none and pride won’t stop him from giving credit where it’s due – he loves Country Life and isn’t afraid to say so. 
Sir David Tang He was due to take a place at the judging table for these awards, so the Country Life team and fellow judges were left bereft at the news of Sir David’s death this summer. The prominent entrepreneur was well known for his lively spirit, intelligence, generosity and extensive charitable work. The chief executive officer of Fortnum & Mason, Ewan Venters, spoke for everyone when he said that, without Sir David, the ‘world is a little duller’. A gentleman to the last.
Will you sign or do you intend to sign the London Cycling Campaign’s manifesto detailing 10 demands for the planned cycle super-highways, which the campaign group worry will not live up to the standards you originally promised?
<!DOCTYPE html>
<html lang="zh-cn">
<head>
  <meta charset="utf-8">
  <title>World Wind</title>
  <style>
    html, body, #canvas {
      width: 100%;
      height: 100%;
      margin: 0;
      padding: 0;
      background: #000;
      overflow: hidden;
    }
  </style>
  <script src="../../assets/js/three.js"></script>
  <script src="build/WorldWind.js"></script>
</head>
<body>
  <canvas id="canvas"></canvas>
  <script>
    var canvas, map;
    window.onload = function () {
      canvas = document.querySelector('#canvas');
      map = new WorldWind.WorldWindow(canvas);
      map.addLayer(new WorldWind.XYZLayer());
      map.addLayer(new WorldWind.AtmosphereLayer());
      map.addLayer(new WorldWind.StarFieldLayer());
    };
  </script>
</body>
</html>
An hour into Her I was a mess. Though many have complained that they found it hard to empathize with the human-operating system relationship the movie depicts, I found the film all too real because it embodied the worst parts of a long distance relationship. From the little miscommunications that come from not being able to see your partner’s face to struggling to overcome the impossibility of physical intimacy to the panic that strikes when a call goes unanswered — they were all familiar problems. So I couldn’t help but cry as I watched the movie while sitting next to my boyfriend who lives 2500 miles away from me. In an increasingly global job market, more relationships have to go the distance, but, friends assured me, it was easier than ever thanks to technology. Before he moved, we had joked that those iPhone commercials showing couples sharing intimate moments as they FaceTime from opposite ends of the world would be our lives. But after many months of anxiously glancing at my phone during work or dinners with friends to see if my boyfriend was texting me, I realized that the devices and apps that were supposed to bring us closer together were actually driving us apart. Of course there are ways technology has made long distance relationships much more manageable. I can call my boyfriend every day without having to worry about massive phone bills. When something good or bad happens at work, I can notify him immediately by texting him. If I see a food truck we love by my apartment, I Snapchat it to him. If I want to see his face, we can use Skype or Google Hangout or FaceTime. If I want to know what articles he is reading, I can look at his Twitter. If I want to know what the road trip he went on yesterday was like, I can stalk his Instagram. Soon, when he finally gets Spotify, he’ll be able to share playlists with me, and I’ll be able to spam him with Beyoncé songs. 
We watch movies and TV shows together, messaging each other “I told you so” when a plot twist is revealed or our favorite emoticons when the guy ends up with the right girl. (We were watching episodes of Sports Night simultaneously long before the New York Times dubbed the practice sync-watching.) It’s unimaginable to me that my dad had to sit by a landline waiting for my mother to call him at a specified time when they were dating long distance. But my generation’s hyper-connectivity is a double-edged sword. Sometimes my boyfriend and I don’t know what to say to each other on the phone at the end of the night. He already knows the stories I’ve written that day because I’ve tweeted them. I know what new quote they posted on his quote board at work because it popped up on Facebook. And the blurry, jerky, pausing unreality of video chat only makes you yearn for real-life interactions all the more. Video cameras and phones can’t always capture laughter, smirks or sighs of frustration. A joke becomes a fight because the tone of a text is misinterpreted. Long silences after arguments can’t be broken by reaching across the table and holding the person’s hand. And eventually you have to shut off the phone or computer and must confront the fact that you can’t feel his arm around you as you drift off to sleep. So in some ways I envy my parents who were far enough away from one another to form separate lives. They didn’t feel guilty when they missed a text or let down when a Snapchat went unopened. Being so close digitally only widens the gap between my boyfriend and me. And I’m not alone. Young couples are operating in a competitive, geographically diffuse job market that makes it hard to give up a good opportunity. A month before my boyfriend moved to the other side of the country, he rationally pointed out that this could happen to us at any point in our lives: one person has to move for a job, and the other person either has to stay put or go with him. 
For us, it was happening shortly after graduation from college, but for others a long distance separation could come years into a relationship or even a marriage. An estimated 75 percent of college students have engaged in a long distance love at one point or another, and about three million American adults in relationships live apart. It’s one of the many reasons Americans are waiting longer to marry, according to research by Jeffrey Arnett, a professor of psychology at Clark University: men want a partnership with equals and therefore want women to pursue their own career goals. That unfortunately means more geographically-challenged relationships. And we’re not talking measly one-year separations. A recent Wall Street Journal article tells the tale of a couple that spent the better part of five years in a long distance relationship as they pursued their separate degrees and careers. They planned visits around their separate lives, probably in a Google Cal — another modern invention that’s made relationships simpler. Luckily, it’s not all bad news. A study from Cornell published in June found that couples in long-distance relationships feel more intimate with their partners than those who live in the same area. They value what little time they have together — during visits or over the phone — so greatly that they optimize those moments emotionally. I find this is especially true towards the end of a visit when you want to savor every moment, memorize every freckle on the other person’s face — any memory you can cling to until the next visit. According to the study, long-distance lovers were also more accepting of their partners’ behaviors and felt more committed to each other. The international job market will test more and more relationships in the years to come, so the information from the Cornell study is heartening. But the positive aspects of long-distance all seem to be based on how little couples see one another. 
If we reach a point, like in Her, where we can be connected to our partner at all times through an earpiece like the one Theodore Twombly wears or — more realistically — through messaging and social media, the benefits of being apart may be lost. Yes, demands at our respective work places keep us from emailing all day; but it’s easy to imagine that won’t always be the case as socializing online becomes easier to hide and young workers become more proficient at multi-tasking. So before you become too connected to your long-distance lover, consider the value of space. The illusion of togetherness can be masochistic. Hold out for the real thing the next time he visits. Clichés exist for a reason, which is why I have “distance makes the heart grow fonder” written on a post-it in my desk. Write to Eliana Dockterman at eliana.dockterman@time.com.
Interactions of saruplase with acetylsalicylic acid, heparin, glyceryl trinitrate, tranexamic acid and aprotinin in a rabbit pulmonary thrombosis model. In anesthetized rabbits with pulmonary embolized thrombi the interactions of saruplase (recombinant unglycosylated single-chain urokinase-type plasminogen activator, CAS 99149-95-8) with acetylsalicylic acid (ASA), glyceryl trinitrate (nitroglycerin, GTN), heparin, and the antifibrinolytics tranexamic acid and aprotinin have been studied. Lysis rates were evaluated as reduction of the weight and as reduction of the incorporated 125J-fibrin content of the embolized thrombi. In untreated controls the spontaneous 125J-fibrinolysis was 8.3 +/- 0.7% and the thrombus weight was reduced by 55.3 +/- 4.5% (mainly due to loss of H2O) at 195 min after the thromboembolization. Infusion of saruplase (21.5 micrograms/kg.min) for 60 min significantly increased the 125J-fibrinolysis to 36.8 +/- 3.7% and the reduction of the thrombus weight to 69.9 +/- 2.1%. ASA (50 mg/kg p.o.) by itself did not exert a lytic effect; in combination with saruplase ASA insignificantly augmented the 125J-fibrinolysis rate and significantly further increased the thrombus weight reduction. GTN (5.0 micrograms/kg.min i.v.) influenced neither the spontaneous nor the saruplase-mediated lysis rates. Treatment with heparin (200 IU/kg i.v.-bolus plus 50 IU/kg.min i.v.-infusion) resulted in significantly greater thrombus weight reduction than observed in untreated controls; in combination with saruplase heparin significantly intensified the lytic effect. Tranexamic acid (30 mg/kg i.v.) and aprotinin (30 × 10³ KIU/kg i.v.) completely abrogated the lytic effect of saruplase. Treatment with saruplase alone produced an insignificant decrease of the plasma level of fibrinogen by 23% to 200 +/- 20 mg/dl. ASA, GTN and aprotinin did not influence this slight fibrinogenolysis in saruplase-treated animals. 
A slight decrease of plasma fibrinogen levels was observed by heparin, whereas the decrease by tranexamic acid was significant.
Q: Trouble parsing a delimited string in php I'm extracting values from a delimited string that usually has the form: valueA|valueB|valueC Where '|' is a delimiter. In this simple case, I'm just using explode to extract the separate values. However, sometimes the string will have brackets, where any characters including '|' can be between those brackets. For example: valueA|valueB[any characters including '|']|valueC How can I parse this string to reliably extract the three separate values (valueA, valueB[any characters including '|'], valueC). I'm pretty sure a regex is my best bet, but I haven't been able to figure it out. Any help is appreciated. Thanks! A: Per the comments to the question, you have the ability to change the format. That being the case, one small adjustment will have you rolling. Since it's character-delimited, you're essentially working with a CSV file. Conventionally, CSV functionality allows you to enclose the data values in quotes between the delimiters. That way, if your delimiter character occurs within a piece of data, it will be parsed simply as part of the data string and not mistaken for a delimiter. That's how spreadsheets work -- the delimiter is usually a comma or tab, but fields can still have commas/tabs in them because they're enclosed in quotes. Those quotes are part of the standard CSV format, and PHP's CSV functions recognize them. As a simple illustration, your old strings: valueA|valueB|valueC valueA|valueB[any characters including '|']|valueC would then be this: "valueA"|"valueB"|"valueC" "valueA"|"valueB[any characters including '|']"|"valueC" See how the StackOverflow syntax highlighter catches that above? :-) There are PHP functions for both reading and writing CSV formats like this. 
Writing CSV from an array of fields: fputcsv() (to a file descriptor) Reading CSV into an array: fgetcsv() (from a file) or str_getcsv() (from a string, new in 5.3) Default assumes that the delimiter is a comma and enclosure is a double quote, but you can optionally specify any character (such as '|') for those tasks.
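As a quick sketch of the reading side (assuming PHP 5.3+ for str_getcsv, with '|' as the delimiter and double quotes as the enclosure), parsing one record in the adjusted format might look like this:

```php
<?php
// A record in the quote-enclosed, pipe-delimited format: the literal '|'
// inside the quoted middle field is treated as data, not as a delimiter.
$line = '"valueA"|"valueB[any characters including \'|\']"|"valueC"';

// str_getcsv($string, $delimiter, $enclosure) parses a single CSV line
// into an array of fields, stripping the enclosure characters.
$fields = str_getcsv($line, '|', '"');

print_r($fields);
// Array
// (
//     [0] => valueA
//     [1] => valueB[any characters including '|']
//     [2] => valueC
// )
```

For whole files, fgetcsv($handle, 0, '|', '"') accepts the same delimiter and enclosure arguments and reads one record per call.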
430 F.3d 929 UNITED STATES FIDELITY & GUARANTEE INSURANCE COMPANY; Catherine L. Rootness, Trustee for the Heirs and Next of Kin of Brent A. Hincher, deceased; Payless Cashways, Inc., d/b/a/ Knox Lumber Company, Appellants,v.COMMERCIAL UNION MIDWEST INSURANCE COMPANY; American Employers' Insurance Company, Appellees. No. 04-1826. United States Court of Appeals, Eighth Circuit. Submitted: March 18, 2005. Filed: December 7, 2005. COPYRIGHT MATERIAL OMITTED Gregory J. Johnson, argued, St. Paul, MN, for appellant. Megan D. Hafner, argued, Minneapolis, MN, for appellee. Before MURPHY, HANSEN, and SMITH, Circuit Judges. HANSEN, Circuit Judge. 1 This case involves a dispute between insurance companies over whose policy covers Payless Cashways, Inc.'s (Payless) liability related to the tragic death of Brent A. Hincher, who was killed during a work accident on Payless's premises. The district court granted summary judgment to Commercial Union Midwest Insurance Company (CU) and American Employers' Insurance Company (AE), finding that Payless was not insured under AE's policy and that CU's policy provided only excess insurance after United States Fidelity and Guarantee Insurance Company's (USF & G) primary coverage was exhausted. The plaintiffs appeal the district court's judgment granting summary judgment to CU, and we reverse and remand for further proceedings.1 I. 2 Payless contracted with KamCo Inc. to install lighting displays and to perform re-merchandising work at Payless's stores, including one in Newport, Minnesota. The late Mr. Hincher worked for KamCo and was performing re-merchandising work in Payless's Newport store when a display fell, killing him. Catherine L. Rootness, Hincher's widow and trustee for the heirs and next of kin of Hincher, commenced a wrongful death action on behalf of Hincher's estate against Payless in 1998. 
Payless filed a third-party claim against KamCo, asserting that KamCo was liable under the Payless-KamCo re-merchandising contract to defend and indemnify Payless for any liability related to the accident. KamCo had a commercial general liability (CGL) policy with AE and an umbrella policy with CU. Payless carried its own excess liability policy with USF & G. 3 Payless and KamCo disagreed about whether their contract included an indemnity agreement, the terms of which were contained in an unsigned Supplementary Terms and Conditions Agreement (Supplementary Agreement). In addition to the indemnity provisions, the Supplementary Agreement required KamCo to provide and maintain $2 million worth of CGL insurance and to name Payless as an additional insured. In June 1999, Payless and KamCo entered into a stipulation, in which they agreed to allow the district court to determine whether their agreement included the Supplementary Agreement. The court ultimately found that the Supplementary Agreement was part of the contract between Payless and KamCo. Payless agreed in the stipulation to waive its claim against KamCo for Payless's defense costs in the underlying wrongful death suit. The parties also agreed that KamCo was obligated to indemnify Payless under the indemnity agreement only for fault attributable to KamCo; KamCo was not obligated to indemnify Payless for Payless's own negligence. 4 After the court determined that the Supplementary Agreement was part of the re-merchandising contract, Payless's insurer, USF & G, tendered Payless's defense of the wrongful death lawsuit to KamCo's insurers, CU and AE, pursuant to Payless's status as an additional insured under KamCo's CGL policies. Both insurers refused the tender. Rootness settled the claim with KamCo, whereby AE paid $500,000 to Rootness on KamCo's behalf under the CGL policy that AE had issued to KamCo. 
5 The case proceeded to trial where a jury awarded total damages of $1.2 million, apportioning fault as follows: 22% to KamCo, 60% to Payless, and 18% to the decedent, Mr. Hincher. Rootness received a judgment against Payless for $720,000 based on Payless's 60% share of the fault. During Rootness's appeal of the jury verdict to this court, Rootness settled the claims with Payless and its insurer, USF & G, for $950,000, which was paid as $750,000 in cash from USF & G and $200,000 in Payless stock. CU and AE refused to participate in the settlement negotiations on Payless's behalf or contribute toward the settlement. 6 Following the settlement, Rootness, Payless, and USF & G commenced this action to recover the amounts paid by or on behalf of Payless, as well as the defense costs, from CU and AE.3 The parties filed cross motions for summary judgment, and the district court granted summary judgment to AE and CU. The district court found that Payless was not insured under AE's policy because, even though KamCo was contractually obligated to add Payless as an additional insured under its CGL policies, KamCo had failed to notify AE and pay the required $50 premium for adding additional insureds to AE's policy. 
The district court determined that CU and USF & G both provided excess insurance, that their "other insurance" provisions conflicted, and that USF & G's policy should provide primary coverage as it was closer to the risk. In the district court's view, because USF & G's policy limits exceeded the $750,000 it had paid to the estate, CU's "secondary" excess coverage was never triggered. The district court also concluded that Payless was self-insured to the extent of the $200,000 SIR in the USF & G policy and that the self-insurance was "other insurance" that was primary to CU's excess coverage, such that CU was not liable for the first $200,000 of liability not covered by the USF & G policy. Finally, the district court found that CU was not obligated to pay the defense fees for either the underlying wrongful death lawsuit or the current lawsuit. The plaintiffs appeal the district court's ruling to the extent that it dismissed their claims against CU. II. 8 We review the district court's grant of summary judgment de novo. Summary judgment is appropriate if there are no genuine issues of material fact to be decided, and the moving party is entitled to judgment as a matter of law. Dowdle v. Nat'l Life Ins. Co., 407 F.3d 967, 970 (8th Cir.2005); Fed.R.Civ.P. 56(c). The parties agree that Minnesota law applies to this diversity action. The interpretation of insurance policies is a legal issue for the court to determine. Dowdle, 407 F.3d at 970. 9 The parties agree that Payless is insured under CU's policy as an additional insured. The CU and the USF & G policies both provide coverage for Payless's own liability resulting from Mr. Hincher's accident. When two policies provide coverage for the same incident, the question of which policy provides primary coverage is a legal determination that we make by looking to the language of the policies at issue. See Christensen v. Milbank Ins. Co., 658 N.W.2d 580, 587 (Minn.2003). 
Minnesota courts determine the order of coverage by looking to the priority rules contained in each policy, generally found in the policies' "other insurance" provisions. See N. Star Mut. Ins. Co. v. Midwest Family Mut. Ins. Co., 634 N.W.2d 216, 222 (Minn.Ct.App.2001). If the "other insurance" clauses contained in the applicable policies conflict, then the court looks beyond the language of the policies and assigns primary coverage to the policy that more closely contemplated the risk. Christensen, 658 N.W.2d at 587. Where the policies equally contemplate the risk, Minnesota courts pro rate the loss among the applicable policies. See Cargill, Inc. v. Commercial Union Ins. Co., 889 F.2d 174, 179-80 (8th Cir.1989) (applying Minnesota law and apportioning liability based on the proportion that each insurer's policy limit bears to the total available insurance limits). 10 The district court first determined that the "other insurance" clauses conflicted, and then went on to determine that USF & G's policy was closer to the risk of loss stemming from Mr. Hincher's death. The CU policy contained an "other insurance" clause which provided in part: "If there is other insurance, other than as provided above, which applies to the loss, we pay only for the excess of the amount due from such other insurance, whether collectible or not." (Appellants' App. at 180.) USF & G's policy likewise contained an "other insurance" clause that provided: "This insurance is excess over other insurance whether primary, excess, contingent or on any other basis, except other such insurance purchased specifically to apply in excess of this insurance." (Id. at 208.) "Other insurance" provisions contained in two different policies conflict if "`the apportionment among the companies cannot be made without violating the other insurance clause of at least one company.'" Christensen, 658 N.W.2d at 587 (quoting Integrity Mut. Ins. Co. v. State Auto. & Cas. Underwriters Ins. Co., 307 Minn. 
173, 239 N.W.2d 445, 446 (1976)). The clauses at issue here conflict as both policies purport to provide excess coverage where another policy applies to the loss, and application of each policy's "other insurance" provision would preclude coverage by either. See N. Star Mut. Ins. Co., 634 N.W.2d at 222 ("Because each policy's `other insurance' clause provided for coverage only after all other applicable insurance coverage was exhausted, the clauses conflict."). We, like the district court, proceed to determine which policy more closely contemplated the risk. 11 Minnesota courts apply two different tests in apportioning liability between insurers when the other insurance clauses conflict: the "total policy insuring intent" analysis or the "closer to the risk" analysis. The "total policy insuring intent" analysis is a broader test and "examines the primary policy risks upon which each policy's premiums were based and the primary function of each policy." CPT Corp. v. St. Paul Fire & Marine Ins. Co., 515 N.W.2d 747, 751 (Minn.Ct.App.1994). In assessing which policy is closer to a given risk, Minnesota courts consider three specific questions: 12 (1) Which policy specifically described the accident-causing instrumentality? (2) Which premium is reflective of the greater contemplated exposure? (3) Does one policy contemplate the risk and use [of] the accident-causing instrumentality with greater specificity than the other policy-that is, is coverage of the risk primary in one policy and incidental to the other? 13 Ed Kraemer & Sons, Inc. v. Transit Cas. Co., 402 N.W.2d 216, 222 (Minn.Ct.App.1987) (internal marks omitted). The tests are similar, though application of the total policy insuring intent analysis is less mechanical than the closer to the risk analysis. 
14 The district court found that USF & G's policy was closer to the risk primarily because Payless paid a premium to USF & G specifically to cover Payless's negligence, whereas CU received no additional premium to cover Payless's negligence. The district court also made much of the fact that the liability at issue was solely that of Payless as determined by the jury's apportionment of 60% of the fault to Payless, and the parties agreed that KamCo was obligated to indemnify Payless under the Supplementary Agreement only for any fault attributed to KamCo. We respectfully believe the district court erroneously relied on these facts in applying the closer to the risk analysis. 15 Although KamCo's liability under the indemnity agreement was limited to the fault attributable to KamCo, CU's insurance policy does not so limit CU's policy's coverage of Payless. The CU policy covers "any . . . organization with . . . which [KamCo] ha[s] agreed in writing . . . to provide insurance such as is afforded by this [CU] policy, but only with respect to [KamCo's] operations." (Appellants' App. at 172.) Minnesota courts have held in similar circumstances that as long as there is a but-for causation between the loss and the named insured's operations, the "additional insured" coverage provides coverage for the additional insured's own liability, irrespective of any limitations of liability between the named insured and the additional insured. See Andrew L. Youngquist, Inc. v. Cincinnati Ins. Co., 625 N.W.2d 178, 183-84 (Minn.Ct.App.2001). The loss at issue here clearly has a but-for connection to KamCo's operations; Mr. Hincher, a KamCo employee, would not have been killed on Payless's premises if KamCo had not been performing its obligations under the contract. Having satisfied the contingencies of the policy, CU's policy provides coverage for Payless's liability arising from its own fault regardless of KamCo's indemnity agreement. 
The district court erred in relying on the allocation of fault by the jury because "[t]he closeness to the risk test is separate from any dispute among defendants as to whose actions negligently contributed to the accident." See Ed Kraemer & Sons, Inc., 402 N.W.2d at 222 (reversing district court's allocation of the risk and noting that it was "disturbed by this focus on Kraemer's potential negligence"). 16 Considering the closer to the risk analysis, we find the test to be of little guidance. Neither policy specifically describes the accident-causing instrumentality, in this case, the display being installed at Payless's store. USF & G's policy is an "excess general liability" policy and provides that it "will pay `ultimate net loss' . . . because of `bodily injury' . . . to which this insurance applies caused by an `occurrence' that takes place in the `coverage territory.'" (Appellants' App. at 189.) "Ultimate net loss" is defined as "those sums that the insured [Payless] becomes legally obligated to pay as damages." (Id. at 200.) As it is titled, the USF & G policy provides excess general liability coverage to Payless for its "Building Supplies" business. (Id. at 188.) The loss at issue occurred in the course of Payless's business of "Building Supplies" to the extent that Payless was changing the display of those building supplies. 17 CU's policy is equally general. CU agreed to "pay those sums that the `insured' becomes legally obligated to pay as damages. . . because of `bodily injury' . . . to which this insurance applies." (Id. at 168.) Thus, the CU policy covers those sums that Payless becomes legally obligated to pay because of bodily injury. The insurance applies to Payless as an "additional insured," "but only with respect to [KamCo's] operations" (id. at 172), which include hanging display services (id. at 166). The loss at issue here was incurred with respect to KamCo's operations of hanging displays. 
Thus, the risk was also contemplated by CU's policy, but not with any specificity. 18 Although KamCo paid a flat-rate premium for CU's coverage, the policy clearly contemplated that CU would be providing coverage to other entities with which KamCo contracted to provide coverage as additional insureds without requiring an additional premium to cover the additional insured. The Supreme Court of Minnesota has noted the "long-standing practice in the construction industry by which the parties to a subcontract. . . agree that one party would protect `others' involved in the performance of the construction project." Holmes v. Watson-Forsberg Co., 488 N.W.2d 473, 475 (Minn.1992) (explaining the Minnesota legislature's actions in prohibiting indemnity agreements in construction contracts that indemnify a party from its own negligence, Minn.Stat. § 337.02, but allowing one party to provide insurance that covers the other party's negligence, Minn.Stat. § 337.05). That CU did not charge an additional premium to add additional insureds under its policy does not change the clear import of its policy that provides coverage to additional insureds. Presumably, CU considered the risk posed by including additional insureds within its policy coverage when it set its rate and issued the policy at the flat-rate premium. Cf. Presley Homes, Inc. v. Am. States Ins. Co., 90 Cal.App.4th 571, 108 Cal.Rptr.2d 686, 691 (2001) ("If, as defendant asserts, it simply provided the additional insured endorsements without increasing the amount of the subcontractors' premiums, that still would not affect a covered party's reliance on the policies' language and the nature of the activity covered by them."). 19 Although Payless paid a much larger premium for its policy with USF & G ($614,000) than KamCo paid for its policy with CU ($3,736), those are the total premiums paid for two policies that covered widely divergent risks. 
Neither policy required or stated a premium specifically for the loss at issue here. KamCo's premium was a one-year flat-rate premium, and its coverage explicitly contemplated "additional insureds" similar to Payless for risks stemming from KamCo's operations. Payless's premium was based on a percentage of Payless's receipts over a two-year period, which were estimated at $5.2 billion dollars, and appeared to cover all of Payless's stores throughout the country. Thus, the premium sizes do not reflect that USF & G contemplated the specific risk of Mr. Hincher's death, an employee of a subcontractor installing displays at one of Payless's stores, any more than CU's policy contemplated that risk, which involved an employee of CU's named insured injured in the course of the named insured's business. The third factor of the closer to the risk test adds little to our analysis, as neither policy specifically contemplated the risk or the instrumentality at issue here, so neither one can be said to have provided primary rather than incidental coverage for the loss. Cf. Nat'l Union Fire Ins. Co. of Pittsburgh, PA v. Republic Underwriters Ins. Co., 429 N.W.2d 695, 697 (Minn.Ct.App.1988) (concluding that a liability policy issued to a daycare provider for her business provided primary coverage for an injury to a child in the daycare over the daycare provider's homeowner's policy because the liability policy specifically described the type of risk involved). 20 We turn then to application of the "total policy insuring intent" analysis, which focuses on the primary risks and functions of each policy. Our review of each policy under this analysis convinces us that the loss here should be shared pro rata. Both policies provide general liability coverage rather than coverage for a specific risk or instrumentality. Cf. Redeemer Covenant Church of Brooklyn Park v. Church Mut. Ins. 
Co., 567 N.W.2d 71, 80-81 (Minn.Ct.App.1997) (holding that a professional liability policy provided primary coverage over a general liability policy for acts committed by a pastor during counseling sessions); Interstate Fire & Cas. Co. v. Auto-Owners Ins. Co., 433 N.W.2d 82, 86 (Minn.1988) (holding that a school's excess policy was primary over a student's homeowner's policy for injuries to the student during physical education class). The primary risks contemplated by the USF & G policy are broad, covering all of Payless's stores nationwide for any liability Payless incurs related to its business of supplying building supplies. The risk that a subcontractor will get injured installing displays for the building supplies is contemplated by that policy, though it is by no means expressly contemplated. CU's policy provides general liability coverage to KamCo for liability arising from KamCo's operations of providing hanging display services, including providing coverage to additional insureds for which KamCo provides these services. We believe that each policy provides broad coverage, and each policy equally contemplated the loss at issue here. 21 Having decided that each policy provides pro rata coverage, we must address the issue of the first $200,000 of liability, as Payless's policy with USF & G contains a self-insured retention of that amount applicable to this claim. The district court concluded that Payless was self-insured for that amount, and that the self-insurance was "other insurance" within the meaning of CU's "other insurance" clause, such that Payless's self-insurance was primary over CU's excess coverage for the first $200,000 for which USF & G is not liable. Although Minnesota courts have found certificates of self-insurance4 to constitute "insurance" in the context of automobile insurance and compulsory liability insurance statutes, see McClain v. 
Begley, 465 N.W.2d 680, 682 (Minn.1991) (holding that "[t]he certificate [of self-insurance] filed with the commissioner is the functional equivalent of an insurance policy" for purposes of Minnesota's no-fault statutes, Minn.Stat. § 65B.49, subd. 3(1) (2002)), we do not believe that those cases dictate a finding that Payless's SIR under its USF & G policy is "other insurance" for purposes of allocating coverage for Payless's liability. The self-insurance involved in those cases was approved by the Minnesota legislature as an alternative to statutorily-mandated automobile insurance coverage. The plan of self-insurance must meet statutory requirements before it is approved and a certificate is issued. See Minn.Stat. § 65B.48, subd. 1 (2002) (requiring every automobile owner to maintain "a plan of reparation security under provisions approved by the commissioner"); § 65B.48, subd. 3 (listing requirements for self-insurance); § 65B.43, subd. 9 (defining "Reparation obligor" as "an insurer or self-insurer obligated to provide the benefits required by [Minnesota's No-Fault Automobile Insurance Act]" (emphasis added)); Hertz Corp. v. State Farm Mut. Ins. Co., 573 N.W.2d 686, 688 (Minn.1998) (noting that Minnesota's no-fault automobile statute requires a self-insurer to obtain authorization from the commissioner of commerce to operate as a self-insurer). Self-insurance in that context is statutorily-defined to equate to insurance only because the Minnesota legislature has decided to allow entities to meet their statutory obligation to provide no-fault automobile insurance with self-insurance if specific requirements are met. No Minnesota case suggests that self-insurance is the equivalent of insurance outside the context of no-fault legislation. 22 "The interpretation of an insurance contract is a question of law as applied to the facts presented. . . . 
When insurance policy language is clear and unambiguous, `the language used must be given its usual and accepted meaning.'" State Farm Mut. Auto. Ins. Co. v. Tenn. Farmers Mut. Ins. Co., 645 N.W.2d 169, 175 (Minn.Ct.App.2002) (internal citations omitted). The term "insurance" generally does not include a SIR under an insurance policy. Compare Black's Law Dictionary 814 (8th ed.2004) (defining "insurance" as "[a] contract by which one party (the insurer) undertakes to indemnify another party (the insured) against risk of loss, damage, or liability arising from the occurrence of some specified contingency") with Black's Law Dictionary 1391 (8th ed.2004) (defining "self-insured retention" as "[t]he amount of an otherwise-covered loss that is not covered by an insurance policy and that usually must be paid before the insurer will pay benefits"). "A majority of jurisdictions across the nation subscribe to the . . . view of self-insurance as `not insurance' in, inter alia, an `other insurance' context." Consolidated Edison Co. of N.Y., Inc. v. Liberty Mut., 193 Misc.2d 399, 749 N.Y.S.2d 402, 404 (N.Y.Sup.Ct.2002) (collecting cases). If CU intended its "other insurance" clause to apply to self-insurance or self-insured retentions included within other insurance policies, it could have so stated in its "other insurance" clause, but it did not. Cf. Redeemer Covenant Church of Brooklyn Park, 567 N.W.2d at 79 (construing policy that stated that its coverage was "excess over and above any other valid and collectible insurance (including any deductible portion) or agreement of indemnity, available to the insured" (emphasis added)); Nabisco, Inc. v. Transp. Indem. Co., 143 Cal.App.3d 831, 192 Cal.Rptr. 207, 208-09 (1983) (finding self-insurance to be "other insurance" where policy explicitly stated that its coverage was excess if there was "other insurance or self-insurance" (emphasis added)). 23 The SIR in this case is nothing more than a large deductible. 
Payless did not hold itself out to the world as a self-insurer but merely contracted with USF & G for a reduced premium by agreeing to a larger SIR. If Payless had no insurance at all it could be said to be "self-insured" in that it would be responsible for its own liability. CU does not purport that in that circumstance Payless's actual uninsured but allegedly "self-insured" status would be "other insurance" within the meaning of its "other insurance" clause. Rather, CU would be liable for the full amount of Payless's liability as the only insurer. In fact, during oral argument, CU's counsel agreed that if we determined that CU provided primary coverage and USF & G provided excess coverage, then the $200,000 SIR was no longer at issue. If the SIR truly was self-insurance as CU wants to define that term, then the SIR would arguably be primary regardless of what we determined USF & G's coverage to be. CU's concession cements our conclusion that the SIR is nothing more than a deductible contracted between Payless and USF & G. Because the SIR contained in the USF & G policy is not "other insurance" within the meaning of CU's "other insurance" clause, CU's policy provides the only insurance coverage for the first $200,000 of Payless's liability (subject of course to the $10,000 SIR in CU's own policy). See Cargill, Inc., 889 F.2d at 180 (applying Minnesota law and holding that, up to the higher deductible contained in one of two applicable insurance policies, only one insurance policy provided coverage so that its "other insurance" clause was not triggered); cf. Wallace v. Tri-State Ins. Co. of Minn., 302 N.W.2d 337, 340-41 (Minn.1980) (holding that the excess carrier was liable only for the insured's deductible, which was not covered by the primary insurer). 
24 The district court determined that CU was not obligated to pay any defense costs because USF & G provided primary coverage and CU's excess policy was never required to contribute toward the settlement, as the settlement was less than USF & G's policy limits. CU's policy included a "duty to defend any claim or `suit' seeking damages to which this insurance applies, but only with respect to damages . . . [n]ot covered by `underlying insurance' or by any other valid and collectible insurance." (Appellants' App. at 168.) Having determined that CU's insurance applies to the underlying lawsuit and that a portion of the damages from the lawsuit are "[n]ot covered by any other valid and collectible insurance," we leave to the district court to address issues of defense costs in the first instance on remand.5 III. 25 We hold that because the self-insured retention under the USF & G policy is not "other insurance," Commercial Union's policy provides coverage for the first $200,000 of Payless's liability, and further, that the policies equally contemplated the risk. Accordingly, the insurers should pro rate the balance of the settlement. The district court's judgment granting summary judgment in favor of Commercial Union is reversed, and the case is remanded to the district court for further proceedings not inconsistent with this opinion. Notes: 1 The appellants do not appeal the district court's ruling that Payless was not insured under AE's policy. 2 As discussed in more detail infra, Payless had a $350,000 self-insured retention (SIR) under the USF & G policy, obligating Payless to cover the first $350,000 of any claim under the policy. Payless filed for bankruptcy protection during the pendency of this case, and the bankruptcy court apportioned $200,000 of the $350,000 SIR toward this litigation, with the remaining SIR apportioned to another employee injured in the same incident.
3 The lawsuit initially named KamCo as a defendant on the theory that KamCo breached its contractual duty to name Payless as an additional insured on the AE policy. Those claims were resolved, and KamCo was released from the case prior to the district court's ruling on the motions for summary judgment. 4 CU does not contend that Payless has a "certificate of self-insurance" issued by the Minnesota Department of Commerce. 5 Given the intricacies involved with Payless's bankruptcy and the lack of a developed appellate record on the issue, we also leave to the district court to address on remand CU's argument that Payless is not entitled to recoup $200,000 from CU because Payless tendered stock in lieu of cash in the settlement of its $200,000 obligation. (See Appellees' Br. at 23-24.)
package client

const (
	SERVICE_RESTART_TYPE = "serviceRestart"
)

type ServiceRestart struct {
	Resource

	RollingRestartStrategy RollingRestartStrategy `json:"rollingRestartStrategy,omitempty" yaml:"rolling_restart_strategy,omitempty"`
}

type ServiceRestartCollection struct {
	Collection
	Data   []ServiceRestart `json:"data,omitempty"`
	client *ServiceRestartClient
}

type ServiceRestartClient struct {
	rancherClient *RancherClient
}

type ServiceRestartOperations interface {
	List(opts *ListOpts) (*ServiceRestartCollection, error)
	Create(opts *ServiceRestart) (*ServiceRestart, error)
	Update(existing *ServiceRestart, updates interface{}) (*ServiceRestart, error)
	ById(id string) (*ServiceRestart, error)
	Delete(container *ServiceRestart) error
}

func newServiceRestartClient(rancherClient *RancherClient) *ServiceRestartClient {
	return &ServiceRestartClient{
		rancherClient: rancherClient,
	}
}

func (c *ServiceRestartClient) Create(container *ServiceRestart) (*ServiceRestart, error) {
	resp := &ServiceRestart{}
	err := c.rancherClient.doCreate(SERVICE_RESTART_TYPE, container, resp)
	return resp, err
}

func (c *ServiceRestartClient) Update(existing *ServiceRestart, updates interface{}) (*ServiceRestart, error) {
	resp := &ServiceRestart{}
	err := c.rancherClient.doUpdate(SERVICE_RESTART_TYPE, &existing.Resource, updates, resp)
	return resp, err
}

func (c *ServiceRestartClient) List(opts *ListOpts) (*ServiceRestartCollection, error) {
	resp := &ServiceRestartCollection{}
	err := c.rancherClient.doList(SERVICE_RESTART_TYPE, opts, resp)
	resp.client = c
	return resp, err
}

func (cc *ServiceRestartCollection) Next() (*ServiceRestartCollection, error) {
	if cc != nil && cc.Pagination != nil && cc.Pagination.Next != "" {
		resp := &ServiceRestartCollection{}
		err := cc.client.rancherClient.doNext(cc.Pagination.Next, resp)
		resp.client = cc.client
		return resp, err
	}
	return nil, nil
}

func (c *ServiceRestartClient) ById(id string) (*ServiceRestart, error) {
	resp := &ServiceRestart{}
	err := c.rancherClient.doById(SERVICE_RESTART_TYPE, id, resp)
	if apiError, ok := err.(*ApiError); ok {
		if apiError.StatusCode == 404 {
			return nil, nil
		}
	}
	return resp, err
}

func (c *ServiceRestartClient) Delete(container *ServiceRestart) error {
	return c.rancherClient.doResourceDelete(SERVICE_RESTART_TYPE, &container.Resource)
}
New York Jets safety Jamal Adams went out of his way to greet a fan with special needs at training camp on Monday. The fan, who was wearing Adams’s jersey, visibly became excited when the NFL player signed it; his whole face broke out into a huge grin. Watch. Make someone’s day. #MondayMotivation A post shared by New York Jets (@nyjets) on Jul 30, 2018 at 5:21am PDT Adams also gave the excited fan a pair of his gloves, and the two shared a happy moment. The former Louisiana State University player said that getting to meet the eager fan is “the best part of the game of football” and that it was a moment he won’t forget.
Following the unprecedented terrorist attacks against US targets on 9/11, many public figures, including President George W. Bush, alleged that terrorism is rooted in low per capita gross domestic product (GDP) or low development (see, e.g., [@bibr31-0022002714535252]).^[1](#fn1-0022002714535252){ref-type="fn"}^ Other public figures made similar allegations. The empirical literature, surveyed in the next section, established no clear-cut connection between terrorism and various income measures. In all but a few instances, the extant literature used a linear specification and either focused on total or transnational terrorist incidents for one extended period. In so doing, the literature generally ignored the possibility that per capita GDP may have a different impact on domestic as opposed to transnational terrorism or that this impact may have morphed over time. In a novel study, [@bibr11-0022002714535252]) investigated the relationship between terrorism and per capita GDP, while distinguishing between the two forms of terrorism for a short recent period, 1998 to 2007. These authors hypothesized that potential terrorists in most very poor countries possess little means for supporting terrorism, while rich countries have ample resources for crushing resident terrorists. This reasoning then implies a nonlinear relationship with terrorist attacks rising to a peak at some intermediate per capita GDP level. This peak level was found to differ between the two kinds of terrorism, but these authors offered no theoretical explanation for this difference. The relationship between per capita GDP and terrorism is not static.^[2](#fn2-0022002714535252){ref-type="fn"}^ As the composition of terrorist groups changed to include fewer leftists and many more religious fundamentalists around the early 1990s ([@bibr35-0022002714535252]; [@bibr21-0022002714535252]), the causal link between per capita GDP and terrorism is likely to have changed. 
This follows because the leftists staged many of their attacks in rich countries during the 1970s and 1980s, while the religious fundamentalists directed their attacks against targets of opportunity in poor countries after the early 1990s (e.g., Americans in the Middle East or Asia). As homeland security improved following 9/11, these transnational terrorist attacks shifted to poorer countries with less border security, where foreign interests were targeted ([@bibr13-0022002714535252], [@bibr14-0022002714535252]). The purpose of the current article is to investigate the nexus between per capita GDP and terrorism for various scenarios using a flexible nonlinear empirical specification that includes linear, quadratic, and other functional forms. This article differs from [@bibr11-0022002714535252]) in a number of essential ways. First, the current article examines a much longer period that runs from 1970 to 2010. This longer time frame allows us to ascertain changes, if any, in the nonlinear income--terrorism relationship for two important subperiods---1970 to 1992 and 1994 to 2010---that correspond to the greater dominance of the leftist and fundamentalist terrorists, respectively.^[3](#fn3-0022002714535252){ref-type="fn"}^ We indeed uncover a shift in the income--terrorism relationship after 1993 that not only involves per capita GDP associated with the most terrorism but also the nature of the nonlinearity. Second, unlike Enders and Hoover, we distinguish between the location (i.e., venue) of the attack and the perpetrators' country for transnational terrorist attacks. By so doing, we uncover a stronger link between low per capita income and transnational terrorism when the perpetrators' country is the focus. Third, the current article develops a modified Lorenz curve to display visually the dispersion between terrorist attacks and per capita GDP percentiles for various subsamples.
[@bibr11-0022002714535252]) relied, instead, on hard-to-read scatter plots with income per capita on the horizontal axis. Fourth, the current article establishes that the nonlinear relationship between per capita GDP and terrorism cannot be adequately captured by a quadratic representation for any of the eight terrorism series examined. This finding raises questions about earlier works that tried to capture the nonlinearity with a simple quadratic per capita GDP representation. The clustering of terrorist incidents that we find for some series is more complex than that for the two short series in Enders and Hoover. Fifth, the current article provides a much greater in-depth econometric analysis with more controls. Finally, unlike Enders and Hoover, we provide a theoretical foundation for our anticipated findings. Our analysis strongly suggests that the myriad findings in the literature stem from the different periods used, the aggregation of terrorist attacks, and the generally, but not universally, assumed linear specification. The changing mix of terrorist ideologies may affect how per capita GDP impacts terrorist attacks. In addition, the country's viewpoint may make a difference in how per capita GDP impacts terrorism. The low per capita GDP justification for terrorism appears more descriptive of the perpetrators' country than of the venue country. No clear findings characterize the literature because too many confounding considerations are aggregated in the empirical tests, which relied on an inflexible functional form. Preliminaries {#section1-0022002714535252} ============= On Terrorism {#section2-0022002714535252} ------------ Terrorism is the premeditated use or threat to use violence by individuals or subnational groups to obtain a political objective through the intimidation of a large audience beyond that of the immediate victim. 
Consistent with the literature, this definition views the perpetrators as below the state level in order to rule out state terrorism. Two distinct categories of terrorism are relevant. Domestic terrorism is a single-country affair where the victims and perpetrators hail from the venue country, where the attack occurs. If the nationalities of the victims or the perpetrators involve more than one country, or if the venue country differs from that of the victims or perpetrators, then the terrorist attack is a transnational incident. For transnational terrorism, a researcher must decide whose (victim or perpetrator) countries' economic, political, and demographic variables to include in the empirical investigation.^[4](#fn4-0022002714535252){ref-type="fn"}^ Terrorist Event Data {#section3-0022002714535252} -------------------- Two terrorist event data sets are used in our statistical analysis. The International Terrorism: Attributes of Terrorist Events (ITERATE) records the incident date, venue country, casualties, perpetrators' nationalities (up to three), victims' nationalities (up to three), and other variables for just transnational terrorist incidents ([@bibr30-0022002714535252]). Currently, ITERATE covers 1968 to 2011 and, like other terrorist event databases, relies on the news media for its variables. A second event data set is the Global Terrorism Database (GTD), which records both domestic and transnational terrorist incidents ([@bibr38-0022002714535252]). Unfortunately, GTD does not distinguish between domestic and transnational terrorist incidents. Since the two types of terrorism may be differentially influenced by alternative drivers, this distinction is essential in order to ascertain the relationship, if any, between per capita GDP and terrorism. 
Enders, Sandler, and Gaibulloev (hereafter, ESG; 2011) devised a five-step procedure for distinguishing between domestic and transnational terrorist incidents in GTD for 1970 to 2007, which was later updated to 2008 to 2010.^[5](#fn5-0022002714535252){ref-type="fn"}^ ESG calibrated GTD transnational terrorist attacks to those in ITERATE to address periods of under- and overreporting of terrorist incidents in GTD. We use ESG's calibrated data in our empirical runs. Although GTD records many of the same variables as ITERATE, a crucial difference is that GTD does not record the countries of perpetrators. On the Changing Nature of Terrorism {#section4-0022002714535252} ----------------------------------- In the 1970s and 1980s, the secular leftists, including the nationalist Palestinian and Irish groups, were the dominant transnational terrorist influence ([@bibr35-0022002714535252]; [@bibr21-0022002714535252]). These leftist terrorist groups' grievances were often against rich countries that pursued unpopular foreign policy (e.g., the Vietnam War or support of Israel). The leftists also included the anarchists and communist groups that desired the overthrow of rich capitalist systems and the governments that ruled them. There were also leftist terrorist groups---for example, Direct Action in France---that specialized in domestic terrorism. With the decline of communism in Eastern Europe, many European leftist terrorist groups---for example, Red Army Faction, Italian Red Brigades, and Direct Action---either ended operations or were annihilated by the authorities ([@bibr2-0022002714535252]). The very active Shining Path, a leftist terrorist group in Peru, became much less active after the arrest of its leader, Abimael Guzmán, in September 1992. By the early 1990s, religious fundamentalist terrorists gained ground as a dominant terrorist force ([@bibr12-0022002714535252]; [@bibr21-0022002714535252]). 
Unlike the leftists who generally wanted to limit collateral damage,^[6](#fn6-0022002714535252){ref-type="fn"}^ the fundamentalists wanted to maximize death tolls as 9/11 and the Madrid commuter train bombings demonstrated. The number of active nationalist/separatist terrorist groups also increased after 1993.^[7](#fn7-0022002714535252){ref-type="fn"}^ In any study of the relationship between per capita GDP and terrorism, there must be recognition of this changing nature of terrorism, which we place at 1994 and beyond. On the Poverty and Terrorism Literature {#section5-0022002714535252} --------------------------------------- Prior to the [@bibr11-0022002714535252]) study, the literature on poverty and terrorism displayed some noteworthy characteristics. First, the underlying empirical models generally hypothesized and tested a linear relationship between per capita GDP and terrorism (e.g., [@bibr24-0022002714535252]; [@bibr1-0022002714535252]; [@bibr31-0022002714535252]). However, articles by [@bibr9-0022002714535252] for total terrorism (1970--1997), [@bibr18-0022002714535252]) for total terrorism (1971--2007), and [@bibr25-0022002714535252]) for transnational terrorism (1968--1998) used a quadratic per capita GDP term, whose negative and significant coefficient implied an inverted U-shape relationship between per capita GDP and terrorism. Second, some studies investigated micro-level data (e.g., [@bibr3-0022002714535252]), others examined macro-level data (e.g., [@bibr27-0022002714535252]; [@bibr33-0022002714535252]), and still others analyzed both micro- and macro-level data ([@bibr24-0022002714535252]). Third, this literature typically used transnational or total terrorist data, with the notable exception of [@bibr33-0022002714535252]), who used ESG's (2011) division of GTD. Fourth, these earlier studies analyzed varied samples of countries for alternative periods. 
For example, [@bibr4-0022002714535252]) examined 127 countries for 1968 to 1991 during the dominance of the leftists and found a positive long-run relationship between per capita GDP in the venue country and transnational terrorist attacks. This finding is consistent with reduced per capita GDP decreasing terrorism. Fifth, most of these articles focused on the venue country (e.g., [@bibr27-0022002714535252]; [@bibr31-0022002714535252]), with the exception of [@bibr23-0022002714535252]) and [@bibr20-0022002714535252]). Krueger and Laitin distinguished between venue and perpetrators' countries, whereas Gassebner and Luechinger distinguished venue, perpetrators', and victims' countries. Neither of these two studies ran separate regressions for domestic and transnational terrorist incidents. In fact, Krueger and Laitin only investigated transnational terrorist attacks, while Gassebner and Luechinger examined transnational and total terrorist attacks. In terms of the relationship between per capita GDP and terrorism, these earlier studies found diverse results. [@bibr24-0022002714535252]) showed that there was no relationship between per capita income and transnational terrorism once political freedoms were introduced into the regressions. Similarly, [@bibr1-0022002714535252]) demonstrated that the risk of terrorism was not greater in poor countries once political freedoms and other country-specific controls (e.g., ethnic fractionalization) were introduced. [@bibr23-0022002714535252]) showed that political repression, not GDP measures, encouraged transnational terrorism. [@bibr31-0022002714535252]) also found that economic variables (e.g., the Human Development Index) did not affect the level of transnational terrorism. More recently, [@bibr33-0022002714535252]) uncovered that higher levels of per capita GDP increased domestic terrorism. This positive relationship is inconsistent with the low per capita GDP cause of terrorism. 
[@bibr20-0022002714535252]) also reported a robust positive relationship between per capita GDP and terrorism when using the viewpoint of victims' countries. The relationship was not robust from the venue or perpetrator countries' viewpoints. In their study of globalization and terrorism, [@bibr27-0022002714535252]) showed that higher per capita GDP in the venue country reduced the amount of transnational terrorism for some models. Their sample included 112 countries for 1975 to 1997, which was primarily before the prevalence of the fundamentalist terrorists. Subsequently, [@bibr26-0022002714535252]) also found a negative relationship between per capita GDP and transnational terrorism when additional control variables were introduced. Except for [@bibr27-0022002714535252]) and [@bibr26-0022002714535252]), there was little empirical support that low per capita GDP encouraged terrorism. Even the micro-level studies did not support this view. Rather, some micro-level studies found that reduced economic conditions (e.g., greater unemployment) allowed terrorist leaders to recruit more skilled operatives (see [@bibr6-0022002714535252]; [@bibr3-0022002714535252]), but this is not the same as arguing that low per capita GDP is the root cause of terrorism. A puzzle concerns the alternative empirical findings regarding per capita GDP as a cause of terrorism. We believe that these diverse findings come from the lack of linearity between per capita GDP and terrorism and from their changing relationship as different terrorist motives came to dominate the world stage. The latter suggests that the sample period is an important consideration. Other contributing factors to past findings arise from the country viewpoint assumed and the type of terrorism investigated. Theoretical Discussion {#section6-0022002714535252} ====================== We draw from the literature and our own insights to hypothesize a nonlinear, nonsymmetric relationship between per capita GDP and terrorism. 
In particular, we identify a number of considerations that give rise to this nonlinear relationship from the venue or perpetrators' countries' viewpoints. There is no reason to expect the per capita GDP influence to be symmetric, as reflected in previous explanations behind an inverted U-shape parabolic relationship ([@bibr25-0022002714535252]; [@bibr18-0022002714535252]; [@bibr9-0022002714535252]). Since countries with very low levels of per capita GDP correlate with failed states ([@bibr32-0022002714535252]), there might be a negative relationship between terrorism and income starting with the poorest countries. These lawless states provide an opportunity for terrorist groups to operate with impunity. In many cases, these states serve as safe havens for launching attacks abroad. Such failed states possess little counterterrorism capability or law enforcement assets, because of limited tax revenue ([@bibr16-0022002714535252]; [@bibr25-0022002714535252]). Another contributing factor to a clustering of terrorism at the lowest income levels may arise from opportunity cost considerations, since terrorists have few market opportunities to sacrifice by becoming terrorists ([@bibr18-0022002714535252]). As income levels grow in real terms in these failed states, counterterrorism capabilities and opportunity costs improve, thereby potentially curbing terrorism. A peak is anticipated at some intermediate income level, whose location depends on the period, type of terrorism, and country viewpoint (see the following).^[8](#fn8-0022002714535252){ref-type="fn"}^ This peak is pronounced because there are many nonfailed states that experience terrorism or are home to perpetrators. For all forms of terrorism, as per capita GDP rises to some middle level in the venue or perpetrators' countries, terrorists and their supporters have greater resources to mount a larger sustained terrorist campaign ([@bibr18-0022002714535252]). 
However, a threshold per capita GDP will eventually be reached where still higher per capita GDP levels will set in motion terrorism-curbing influences. After some threshold per capita GDP level, terrorists and their supporters must sacrifice much in the way of opportunity cost.^[9](#fn9-0022002714535252){ref-type="fn"}^ Also, potential grievances are apt to dissipate as a perpetrator's economy becomes richer, where government expenditures can serve more varied interests ([@bibr25-0022002714535252]). The capacity of the government to quash terrorist groups or to harden potential targets will be formidable at high per capita GDP levels in either the venue or perpetrators' countries. Moreover, education levels, which are positively correlated with per capita GDP, can bolster terrorist attacks at an intermediate income level by providing terrorist groups with operatives with sufficient human capital ([@bibr3-0022002714535252]). But after some per capita GDP, opportunity cost considerations will curb these skilled operatives' enthusiasm in the venue and perpetrators' countries. For both venue and perpetrators' countries, our theoretical discussion implies not only the possibility of an intermediate income peak but also the *nonsymmetrical rises and falls* on either side of this peak. For example, if a targeted government relies on defensive measures, then the reduction of terrorism beyond some intermediate per capita GDP level is apt to be gradual. In contrast, a government's reliance on proactive measures to annihilate the terrorist groups at home or abroad could, if successful, result in a steep drop in terrorism beyond its apogee. The rise to the peak level of terrorist activity may be gradual or steep depending on how grievances or other terrorism-supporting factors build. Asymmetry may also arise from multiple underlying considerations, which need not be in sync as per capita GDP rises or falls. 
There is, thus, no reason to expect a symmetric peak terrorism level, associated with a quadratic per capita GDP term. This suggests the need for a flexible nonlinear form, as used here, that allows for the quadratic representation as a special case. Next, we turn to why the per capita GDP and terrorism relationship is anticipated to differ for alternative terrorism samples. Domestic terrorism is expected to be more motivated by economic grievances ([@bibr33-0022002714535252], [@bibr34-0022002714535252]), while transnational terrorism is more motivated by grievances tied to foreign policy decisions by rich democracies ([@bibr37-0022002714535252]). Consequently, the peak level of domestic terrorism will correspond to a lower per capita GDP than that for transnational terrorism, especially before 1993. After 9/11, transnational terrorists faced tighter international borders, which would have restricted their movement, thereby affecting attack venues in the latter part of 1994 to 2010. These security measures should keep the peak level of domestic and transnational terrorism at similar per capita GDP levels after 1993 as transnational terrorist attacks increasingly targeted foreign interests at home ([@bibr13-0022002714535252]). Based on the perpetrators' nationality, there is an expected *shift* in the per capita GDP associated with the most transnational terrorist attacks after 1993. In the pre-1993 period, the leftist groups were a strong terrorist influence. Many of their members resided in wealthy countries. In contrast, the religious fundamentalists were generally located in poor Middle Eastern and Asian nations after 1993 ([@bibr13-0022002714535252]). Thus, we should anticipate the greatest concentration of transnational terrorist attacks at a higher per capita GDP in the earlier than in the later period, based on the perpetrators' nationality. 
This prediction is reinforced by the resurgence of nationalist/separatist terrorists in relatively poor countries after 1993 (see note 7). This same predicted shift should apply to the venue country owing to the greater presence of leftists before 1993. In addition, increased security measures in rich countries after 9/11 should reinforce this shift during the last half of 1994 to 2010.

Examining the Terrorism Series {#section7-0022002714535252}
==============================

Throughout the analysis, our terrorism series involve at least one casualty. In total, we have eight terrorism incident series: GTD domestic terrorism casualty events before and after 1993, GTD transnational terrorism casualty events before and after 1993, ITERATE casualty events by location before and after 1993, and ITERATE casualty events by perpetrator's country before and after 1993. We choose our two periods to reflect the predominance of the leftists and religious fundamentalists, respectively, while discarding 1993, for which GTD has no data.^[10](#fn10-0022002714535252){ref-type="fn"}^ The usual normality assumption is inappropriate because many countries experienced no terrorism and most countries experienced no more than a single incident. In the pre-1993 period, 53 of the 166 usable sample countries experienced no domestic casualty incidents, while 54 experienced no transnational casualty incidents (summary table available upon request). There was a slight increase in the number of incidents over time. Notably, the standard deviation of each series is at least twice its mean, and all series fail the Jarque--Bera test for normality. As is standard, we estimate the various incident series as counts using the Poisson and the negative binomial distributions. Prior to a rigorous econometric analysis, we devise a straightforward modification of a Lorenz curve to illustrate the relationship between terrorism and per capita GDP (or income).
A standard Lorenz curve shows the cumulative shares of total world income accounted for by the cumulative percentiles of countries, ranked from poorest to richest. Instead, our modified Lorenz curves show the cumulative shares of total world terrorism accounted for by the cumulative percentiles of states, ranked from poorest to richest. For example, in panel 1 of [Figure 1](#fig1-0022002714535252){ref-type="fig"}, the horizontal axis shows the cumulative percentiles of countries ranked by per capita income, while the vertical axis shows the cumulative percentage of world domestic terrorism casualty incidents. As such, points along the diagonal line represent the line of equality for the pre-1993 data set. The 20th, 40th, 60th, and 80th income percentiles correspond to real per capita GDP levels of US\$366 (Nigeria), US\$1,028 (Honduras), US\$2,410 (Chile), and US\$7,947 (Slovenia).

![Lorenz curve of GTD casualty incidents.\ *Note:* GTD = Global Terrorism Database.](10.1177_0022002714535252-fig1){#fig1-0022002714535252}

If there were a uniform distribution of terrorism among all countries, our so-called terrorism Lorenz curve would lie along the diagonal; instead, the cumulative terrorism percentiles lie below the diagonal in panel 1 of [Figure 1](#fig1-0022002714535252){ref-type="fig"} until the 55th income percentile is reached. In fact, the poorest 25 percent of states accounted for about 18 percent of total domestic casualty incidents and the next 25 percent accounted for about 16 percent of these incidents, so that the lowest 50 percent accounted for 34 percent of these incidents. However, there are sharp increases in the amount of terrorism in the next 20 percent of the states; the countries in the 51st through 70th percentiles of the income distribution experienced 38 percent of domestic terrorism. Hence, during the pre-1993 period, domestic terrorism seems to be clustered in the states with income levels that are slightly to well above the 51st percentile.
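The construction of these modified Lorenz curves is simple enough to sketch in code. The following Python sketch ranks countries from poorest to richest and accumulates their shares of world terrorism; the country data are invented for illustration, not taken from GTD or ITERATE.

```python
# Sketch of the modified ("terrorism") Lorenz curve described in the text:
# rank countries from poorest to richest and accumulate each country's
# share of world terrorism. All numbers below are invented illustrations.

def terrorism_lorenz(countries):
    """countries: list of (per_capita_gdp, incident_count) tuples.
    Returns (cumulative country share, cumulative incident share) lists."""
    ranked = sorted(countries)                   # poorest to richest
    total = sum(n for _, n in ranked)
    xs, ys, running = [0.0], [0.0], 0
    for k, (_, n) in enumerate(ranked, start=1):
        running += n
        xs.append(k / len(ranked))               # cumulative % of countries
        ys.append(running / total)               # cumulative % of incidents
    return xs, ys

# Toy example: terrorism clustered in middle-income countries, so the curve
# starts below the diagonal and then rises steeply through the middle cohort.
toy = [(300, 1), (900, 2), (2400, 10), (8000, 5), (30000, 2)]
xs, ys = terrorism_lorenz(toy)
```

Plotting `ys` against `xs` (with any plotting library) reproduces the qualitative shapes discussed for Figures 1 and 2.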
This pattern is consistent with the prevalent leftist and nationalist/separatist terrorists directing attacks at their relatively wealthy homelands (e.g., France, Spain, the United Kingdom, and West Germany). Panel 2 of [Figure 1](#fig1-0022002714535252){ref-type="fig"} shows a different pattern of domestic terrorism for the post-1993 period, where the rapid increase in terrorism occurred at a much lower income percentile than that shown in panel 1. Specifically, for the post-1993 period, the poorest 20 percent of countries ranked by per capita GDP levels sustained only about 7 percent of the domestic terrorism incidents with casualties, whereas the next 30 percent accounted for about 65 percent of these incidents. Because the next 10 percent of countries suffered about 18 percent of the domestic terrorism, the richest 40 percent experienced only 10 percent of these incidents. For the post-1993 data set, the 20th, 40th, 60th, and 80th percentiles correspond to real per capita GDP levels of US\$287 (Ghana), US\$1,431 (Paraguay), US\$4,133 (Lithuania), and US\$14,531 (Spain). Terrorism was clustered in the 30th to 60th income percentiles, although the point at which the rapid increases in terrorism occurred shifted toward the lower end of this real per capita income spectrum. According to our priors, this marked shift after 1993 is likely due to the much greater prevalence of religious fundamentalist terrorists, who generally resided in low- and middle-income countries ([@bibr13-0022002714535252]). This era was also marred by many internal conflicts in these countries. Such conflicts, orchestrated by nationalist/separatist motives, are often associated with terrorism ([@bibr36-0022002714535252]). In contrast to panels 1 and 2, panel 3 shows that transnational terrorism strongly clustered in the middle- to upper-income countries in the pre-1993 period.
The poorest 50 percent of states had only 24 percent of transnational terrorism with casualties, whereas the next richer 40 percent of states sustained 66 percent of these attacks. Panel 4 shows that this pattern changed dramatically for the post-1993 period. In fact, the shape of this terrorism Lorenz curve is very much like that in panel 2. The poorest 20 percent of countries accounted for about 11 percent of transnational terrorism; however, the next richer 30 percent of countries accounted for 50 percent of the incidents. Panel 3 is consistent with the prevalence of the leftist terrorists in the early period, while panel 4 is consistent with the prevalence of the religious fundamentalist and nationalist/separatist terrorists after 1993. As theorized earlier, the push for homeland security in rich countries after 9/11 ([@bibr14-0022002714535252], 328-33) would also reinforce the Lorenz pattern in panel 4, where countries in the 25th to 35th percentiles sustained a disproportionately large percentage of transnational terrorist attacks and rich countries suffered a disproportionately small percentage of transnational terrorist attacks. In [Figure 2](#fig2-0022002714535252){ref-type="fig"}, we use ITERATE data to show the Lorenz curves for transnational casualty incidents measured by location and by the nationality of the incident's perpetrator. Since panels 1 and 2 measure terrorism by the location of the incident, these two panels correspond to panels 3 and 4 of [Figure 1](#fig1-0022002714535252){ref-type="fig"}, constructed using the GTD data. Given that we adjusted the GTD data using the weighting scheme developed in ESG (2011), it is not surprising that the shapes of the corresponding terrorism Lorenz curves are quite similar. 
![Lorenz curve of ITERATE casualty incidents.\ *Note:* ITERATE = International Terrorism: Attributes of Terrorist Events.](10.1177_0022002714535252-fig2){#fig2-0022002714535252}

In comparing panels 1 and 3 of [Figure 2](#fig2-0022002714535252){ref-type="fig"}, we find that the different measures of terrorism have different implications. In the pre-1993 period, the location of terrorism tended to cluster in the high-income and upper end of the middle-income countries; countries in the 55th to 70th percentiles had 25 percent of transnational terrorism and countries in the upper 10 percentiles had 30 percent of these attacks. In contrast, the perpetrators tended to hail from the upper middle-income countries; countries in the 55th to 70th percentiles had 44 percent of the terrorism. Comparing pre-1993 and the corresponding post-1993 panels, we see that the clustering of terrorism measured by location or by perpetrators' nationality shifted greatly toward the poorer countries in the post-1993 period. These post-1993 patterns agree with our priors. Terrorist attacks became more concentrated in lower income countries, home to the religious fundamentalists in North Africa, the Middle East, and Asia. This agrees with more attacks against Western influences in North Africa, the Middle East, and Asia ([@bibr13-0022002714535252]).

Linear Models of Terrorism and Income {#section8-0022002714535252}
=====================================

Consider the simple linear model
$$T_{i} = \mathit{\alpha}_{0} + \mathit{\alpha}_{1}gdp_{i} + \mathit{\epsilon}_{i},$$
where *T~i~* denotes the number of terrorist incidents occurring in country *i*, the αs are parameters to be estimated, *gdp~i~* is a measure of real per capita GDP in country *i*, and ∊*~i~* is the error term.
For now, it does not matter whether other control variables are added to [equation (1)](#disp-formula1-0022002714535252){ref-type="disp-formula"}, what measure of terrorism or sample period is selected, or whether [equation (1)](#disp-formula1-0022002714535252){ref-type="disp-formula"} is estimated with ordinary least squares or with maximum likelihood estimation using a Poisson or negative binomial distribution. The key point is that the specification in [equation (1)](#disp-formula1-0022002714535252){ref-type="disp-formula"} does not allow for the type of clustering described in the previous section. In [equation (1)](#disp-formula1-0022002714535252){ref-type="disp-formula"}, if *gdp~i~* increases by one unit, terrorism increases by α~1~ units, and if *gdp~i~* increases by two units, terrorism increases by 2α~1~ units. However, this is not the pattern observed in [Figures 1](#fig1-0022002714535252){ref-type="fig"} and [2](#fig2-0022002714535252){ref-type="fig"}, where per capita GDP increases in the poorest and the richest countries had relatively small effects on terrorism. 
When we pool all of the ITERATE casualty incidents over the two sample periods, ignore the possibility of nonlinearities, and estimate the model using the negative binomial distribution, we obtain^[11](#fn11-0022002714535252){ref-type="fn"}^

$$\begin{matrix}
\, & {{\widehat{T}}_{i} = \exp( - 8.32 + 0.30\, lgdp_{i} + 0.55\, lpop_{i}),\ \mathit{\eta} = 1.38,} \\
\, & {\quad\quad( - 5.60)\quad\quad(3.59)\quad\quad\quad(6.76)\quad\quad\quad(16.71)} \\
\end{matrix}$$

where ${\widehat{T}}_{i}$ = estimated number of terrorist incidents, *lgdp* = log of real per capita *GDP*, *lpop* = log of population, η is the variance parameter of the negative binomial distribution, *i* is a country subscript, and the *t*-statistics (constructed using robust standard errors to account for heteroscedasticity) are in parentheses.^[12](#fn12-0022002714535252){ref-type="fn"}^ Hence, pooling the ITERATE data over the entire 1970 to 2010 period implies that there is actually a positive relationship between per capita income and terrorism. In accord with some findings, a linear specification that pools data across a long time span indicates that increasing per capita GDP is not expected to mitigate terrorism (e.g., [@bibr33-0022002714535252]).^[13](#fn13-0022002714535252){ref-type="fn"}^ As a diagnostic check for nonlinearity, we estimated each of the eight terrorism series with an intercept, *lgdp~i~*, its square (i.e., *lgdp~i~*^2^), and *lpop~i~*. If there is a nonlinear relationship between terrorism and per capita GDP, the parabolic shape engendered by the squared term might capture the tendency for terrorism to cluster within the middle-income nations, as argued by [@bibr25-0022002714535252]) and others. This is not to say that the quadratic specification is the most appropriate one to capture the effects of per capita GDP on terrorism.
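Estimates such as equation (2) come from maximizing a negative binomial likelihood with a log link. A minimal sketch of the NB2 log-probability and the resulting log-likelihood follows; it uses a standard textbook parameterization (variance = μ + αμ², with α playing the role of the dispersion parameter), which is our assumption and not necessarily the authors' exact code.

```python
from math import lgamma, log, exp

def negbin_logpmf(y, mu, alpha):
    """NB2 log-probability of count y with mean mu and dispersion alpha
    (variance = mu + alpha * mu**2); a standard parameterization used
    here for illustration."""
    r = 1.0 / alpha                  # "size" parameter
    p = r / (r + mu)                 # success probability
    return (lgamma(y + r) - lgamma(r) - lgamma(y + 1)
            + r * log(p) + y * log(1.0 - p))

def loglik(params, data):
    """Log-likelihood of the log-link count model T_i ~ NB(exp(b0 + b1*lgdp_i
    + b2*lpop_i)). params = (b0, b1, b2, alpha); data = [(T, lgdp, lpop), ...].
    Maximizing this over params yields estimates of the kind in equation (2)."""
    b0, b1, b2, alpha = params
    return sum(negbin_logpmf(T, exp(b0 + b1 * lg + b2 * lp), alpha)
               for T, lg, lp in data)
```

In practice, one would hand `loglik` to a numerical optimizer rather than maximize it by hand; the point of the sketch is only the likelihood's structure.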
Clearly, misspecifying the actual nonlinear form of the relationship between terrorism and per capita GDP can be as problematic as ignoring the nonlinearity altogether. The results in [Table 1](#table1-0022002714535252){ref-type="table"} are instructive, where the first four series use GTD data, while the last four series use ITERATE (IT) data in the pre-1993 (pre) and post-1993 (post) periods. As indicated, the various series allow for venue, perpetrators' nationality, and domestic and transnational incidents. For each of the eight terrorism measures, the point estimate of the coefficient on *lgdp~i~* is positive, while the coefficient on (*lgdp~i~*)^2^ is negative. This implies that terrorism increases with real per capita income until a maximum is reached; thereafter, further per capita income increases reduce terrorism. In six of the eight cases, the overall fit of the model with the (*lgdp~i~*)^2^ term is selected by the Akaike Information Criterion (AIC) over the linear specification. Finally, a χ^2^ test indicates that the null hypothesis that both the *lgdp* and (*lgdp*)^2^ coefficients jointly equal zero cannot be maintained in five of the eight cases.

###### Diagnostics with Squared Logarithm of GDP.
![](10.1177_0022002714535252-table1)

| Series | Intercept | *lgdp* | *lgdp*^2^ | *lpop* | η | χ^2^ | AIC | AIC(lin) |
|--------|-----------|--------|-----------|--------|---|------|-----|----------|
| Domestic_pre (GTD) | −24.273 (−6.63) | 6.187 (6.44) | −0.391 (−6.30) | 0.999 (11.86) | 2.052 (17.84) | 42.317 (.00) | **−823.55** | −823.40 |
| Domestic_post (GTD) | −6.166 (−1.36) | 1.382 (1.07) | −0.100 (−1.12) | 1.047 (11.30) | 1.573 (14.67) | 1.425 (.49) | **−614.00** | −613.99 |
| Transnational_pre (GTD) | −16.196 (−4.03) | 3.826 (3.62) | −0.224 (−3.32) | 0.655 (10.23) | 1.583 (13.93) | 30.458 (.00) | **−80.63** | −80.55 |
| Transnational_post (GTD) | −7.590 (−2.65) | 1.676 (2.08) | −0.114 (−2.08) | 0.691 (8.90) | 1.327 (9.40) | 4.345 (.11) | **−29.46** | −29.43 |
| Location_pre (IT) | −12.247 (−31.88) | 2.897 (74.78) | −0.165 (−76.78) | 0.601 (1,043.20) | 1.600 (15.80) | 7,297.911 (.00) | **−86.29** | −86.27 |
| Location_post (IT) | −1.926 (−0.93) | 0.299 (0.54) | −0.023 (−0.61) | 0.578 (6.59) | 1.654 (15.75) | 0.524 (.77) | −24.26 | **−24.27** |
| Nationality_pre (IT) | −12.760 (−3.54) | 3.187 (3.33) | −0.208 (−3.40) | 0.640 (7.92) | 1.821 (13.04) | 11.880 (.00) | **−32.87** | −32.83 |
| Nationality_post (IT) | −4.957 (−1.89) | 1.026 (1.51) | −0.078 (−1.81) | 0.603 (8.19) | 1.811 (11.79) | 10.365 (.01) | −8.40 | **−8.41** |

*Note*: AIC = Akaike Information Criterion; GDP = gross domestic product; IT = ITERATE; lin = linear; GTD = Global Terrorism Database. Boldface entries in the AIC columns indicate the specification selected by the AIC. *t*-statistics are in parentheses, except for the *p* values in parentheses beneath the chi-square statistic.

Exponential STR (ESTR) and Logistic Variant of the STR (LSTR) Models {#section9-0022002714535252}
====================================================================

As we show in ensuing sections, the relationship between terrorism and per capita income is often more complicated than adding a quadratic per capita income term.
A specification that captures the tendency of terrorist incidents to cluster in countries with similar GDP levels is the smooth transition regression (STR) model ([@bibr39-0022002714535252]). The STR model is a flexible functional form that nests the linear model and can approximate the quadratic model. Since we have count data, the STR model is estimated using a negative binomial distribution. Consider the following specification: $${\widehat{T}}_{i} = \exp\left\lbrack {\left( {\mathit{\alpha}_{0} + \mathit{\alpha}_{1}lgdp_{i} + \mathit{\alpha}_{2}lpop_{i}} \right) + \mathit{\theta}_{i}\left( {\mathit{\beta}_{0} + \mathit{\beta}_{1}lgdp_{i} + \mathit{\beta}_{2}lpop_{i}} \right)} \right\rbrack,$$ where α*~j~* and β*~j~* are coefficients (*j* = 1, 2) and, in the ESTR variant of the model, θ*~i~* has the form: $$\mathit{\theta}_{i} = 1 - \exp\left\lbrack {- \mathit{\gamma}\left( {lgdp_{i} - c} \right)^{2}} \right\rbrack,{\mathit{\gamma} > 0}.$$ The parameter γ is called the "smoothness" parameter, because it determines how quickly θ*~i~* transitions between the two extremes of zero and unity. The ESTR model is clearly nonlinear because the effect of *lgdp~i~* on terrorism depends on the magnitude of *lgdp~i~* itself. As *lgdp~i~* runs from the lowest to highest values, θ*~i~* goes from 1 to 0 and back to 1. Hence, for countries such that *lgdp~i~* is far below or far above *c*, the value of θ*~i~* is approximately 1, so that [equation (3)](#disp-formula3-0022002714535252){ref-type="disp-formula"} becomes ${\widehat{T}}_{i} = \exp\lbrack(\mathit{\alpha}_{0} + \mathit{\beta}_{0}) + (\mathit{\alpha}_{1} + \mathit{\beta}_{1})lgdp_{i} + (\mathit{\alpha}_{2} + \mathit{\beta}_{2})lpop_{i}\rbrack$. 
However, for countries with *lgdp~i~* very close to *c*, the magnitude of θ*~i~* is approximately zero, so that the relationship in [equation (3)](#disp-formula3-0022002714535252){ref-type="disp-formula"} can be written as ${\widehat{T}}_{i} = \exp(\mathit{\alpha}_{0} + \mathit{\alpha}_{1}lgdp_{i} + \mathit{\alpha}_{2}lpop_{i})$. Because θ*~i~* is a smooth function of *lgdp~i~*, the ESTR specification allows for a smooth transition between these two extremes. Given that θ*~i~* is symmetric around *c*, countries with values of *lgdp~i~* close to *c* will behave differently from countries with values of *lgdp~i~* much smaller, or much larger, than *c*. When, for example, we set *c* = 6.5 and γ = 4, the solid line in panel 1 of [Figure 3](#fig3-0022002714535252){ref-type="fig"} traces out how θ*~i~* varies as *lgdp~i~* runs from 5 to 11 (i.e., the approximate range of the *lgdp~i~* values in our sample). For the lowest values of *lgdp~i~*, $\mathit{\theta}_{i} \cong 1$ (i.e., $1 - \exp\lbrack - 4(5 - 6.5)^{2}\rbrack = 0.99988$) and as *lgdp~i~* approaches 6.5, the value of θ*~i~* approaches zero. Subsequent increases in *lgdp~i~* act to increase the value of θ*~i~* from zero toward unity. Once *lgdp~i~* is about 7.5, θ*~i~* is sufficiently close to unity that further increases in *lgdp~i~* have no substantive impact on the values of θ*~i~*. As shown by the two dashed lines in panel 1 of [Figure 3](#fig3-0022002714535252){ref-type="fig"}, increases in γ act to sharpen the transition.

![ESTR and LSTR processes.\ *Note:* ESTR = exponential smooth transition regression; LSTR = logistic variant of the smooth transition regression model.](10.1177_0022002714535252-fig3){#fig3-0022002714535252}

There are two essential features of the ESTR specification for our analysis. First, the U shape of the exponential function allows us to capture clustering within closely aligned cohorts along the income spectrum.
If terrorism occurs in countries with *lgdp~i~* levels equal to 6.5 (i.e., exp(6.5) ≅ US\$665 in real terms), but seldom occurs in the poorest or richest countries, we would then expect an ESTR model to fit the data such that *c* is close to 6.5 with γ reflecting the extent of the clustering. Second, the ESTR model is quite flexible relative to the usual models. For example, a value of γ = 0 is equivalent to a linear model, since θ*~i~* is then zero. Moreover, very tight clustering can be captured by large values of γ. The type of quadratic specification reported in [Table 1](#table1-0022002714535252){ref-type="table"} can be well approximated by an ESTR model with a small value of γ. Panel 2 of [Figure 3](#fig3-0022002714535252){ref-type="fig"} illustrates the effect of nesting the ESTR model within the negative binomial framework. As detailed subsequently, for the GTD post-1993 transnational terrorism series, the coefficient estimates are approximately *c* = 6.5, γ = 10.0, α~1~ = 11, and β~1~ = −12.5. Evaluating α~0~ + α~2~ *lpop~i~* and β~0~ + β~2~ *lpop~i~* at the sample mean of *lpop~i~*, we obtain −77.0 and 81, respectively. As such, panel 2 plots the values of *T~i~* against *lgdp~i~*, where an increase in per capita GDP is associated with a dramatic increase in the level of terrorism for *lgdp~i~* values sufficiently close to 6.5. The subsequent income-induced drop-off in the number of terrorist incidents causes a substantial clustering within the cohort of countries with values of *lgdp~i~* between 6.2 and 7. Thus, a linear specification or a quadratic specification cannot capture such extreme clustering. In the LSTR model, θ*~i~* has the form
$$\theta_{i} = {1/\left\{ {1 + \exp\left\lbrack {- \gamma\left( {lgdp_{i} - c} \right)} \right\rbrack} \right\}}.$$
Unlike the U shape of the ESTR specification, [equation (5)](#disp-formula5-0022002714535252){ref-type="disp-formula"} best characterizes a two-regime model.
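The two transition functions in equations (4) and (5) are easy to compute directly. The short sketch below reproduces the numerical example from panel 1 of Figure 3 (*c* = 6.5, γ = 4); it is a direct transcription of the formulas, not the authors' code.

```python
from math import exp

def theta_estr(lgdp, gamma, c):
    """Exponential transition, equation (4): U-shaped in lgdp,
    equal to 0 at lgdp = c and approaching 1 far from c."""
    return 1.0 - exp(-gamma * (lgdp - c) ** 2)

def theta_lstr(lgdp, gamma, c):
    """Logistic transition, equation (5): increases monotonically
    from 0 (lgdp far below c) to 1 (lgdp far above c)."""
    return 1.0 / (1.0 + exp(-gamma * (lgdp - c)))

# Panel 1 example with c = 6.5 and gamma = 4:
# theta_estr(5.0, 4, 6.5) = 1 - exp(-4 * (5 - 6.5)**2) ~ 0.99988, as in the text,
# while theta_estr(6.5, 4, 6.5) = 0 exactly at the midpoint c.
```

Raising γ in either function sharpens the transition, matching the dashed lines in panels 1 and 3 of Figure 3.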
Panel 3 of [Figure 3](#fig3-0022002714535252){ref-type="fig"} uses the same parameter values as panel 1. As *lgdp~i~* increases from 5 to 11, θ*~i~* monotonically increases from 0 to 1, so that, in the LSTR specification, the poorest countries are most dissimilar to the richest countries. The solid curve in panel 3 is drawn for γ = 4. As shown by the dashed lines, increases in the value of γ act to sharpen the transition between the low- and high-income countries. Panel 4 plots the values of *T~i~* against *lgdp~i~*. For the poorest states, there is a very small positive effect of *lgdp~i~* on terrorism, whereas, for the richest states, there is a negative effect of *lgdp~i~* on terrorism. An ESTR model captures clustering in the middle of the income cohorts, while an LSTR model best captures discrepancies between the poorest and richest income groups. Since the LSTR model is not well suited to capture mid-group clustering, we allow for the possibility of squared *lgdp~i~* terms when estimating an LSTR model, such that
$${\hat{T}}_{i} = \exp\left\lbrack {\left( {\alpha_{0} + \alpha_{1}lgdp_{i} + \alpha_{2}lpop_{i} + \alpha_{3}lgdp_{i}^{2}} \right) + \theta_{i}\left( {\beta_{0} + \beta_{1}lgdp_{i} + \beta_{2}lpop_{i} + \beta_{3}lgdp_{i}^{2}} \right)} \right\rbrack.$$

Estimates of the ESTR and LSTR Models {#section10-0022002714535252}
=====================================

We estimate each of the eight incident series as either an ESTR or LSTR process using the negative binomial distribution.^[14](#fn14-0022002714535252){ref-type="fn"}^ The model with the best fit is taken as the most appropriate specification.^[15](#fn15-0022002714535252){ref-type="fn"}^ Given the well-known difficulties in estimating γ, we constrained the upper bound for γ to be no greater than 10.00.^[16](#fn16-0022002714535252){ref-type="fn"}^ The results for each series are shown in [Table 2](#table2-0022002714535252){ref-type="table"}.
Perhaps the most important result is that, as measured by the AIC, the fit of every nonlinear model is superior to that of the corresponding linear *and* quadratic models reported in [Table 1](#table1-0022002714535252){ref-type="table"}. For example, the AIC for the ITERATE series containing incidents by location during the pre-1993 period is −86.40, whereas those for the linear *and* quadratic models are −86.27 and −86.29, respectively. Moreover, as shown in the fifth line of [Table 2](#table2-0022002714535252){ref-type="table"}, the estimated equation is given by

$$\begin{array}{l} {{\hat{T}}_{i} = \exp\left\lbrack {(21.55 - 5.69lgdp_{i} + 0.38lpop_{i}) + \theta_{i}(92.36 - 3.42lgdp_{i} + 0.65lpop_{i})} \right\rbrack,} \\ {\quad\quad\left( 10.40 \right)\quad\quad\left( - 3.55 \right)\quad\left( 1.75 \right)\quad\quad\quad\left( 2.03 \right)\quad\left( - 0.68 \right)\quad\left( 1.03 \right)} \\ {\theta_{i} = 1 - \exp\lbrack - 0.02\left( {lgdp_{i} - 2.72} \right)^{2}\rbrack,\quad\eta = 1.57.} \\ {\quad\quad\quad\quad\text{(2.29)}} \\ \end{array}$$

For this sample, the poorest countries have a value of θ*~i~* very close to zero, so that the model becomes ${\widehat{T}}_{i} = \exp(21.55 - 5.69lgdp_{i} + 0.38lpop_{i})$. However, for the very high-income countries in our sample, the value of θ*~i~* is close to 0.7, so that the model becomes^[17](#fn17-0022002714535252){ref-type="fn"}^ ${\widehat{T}}_{i} = \exp\left\lbrack \left( {82.20 - 8.08lgdp_{i} + 0.84lpop_{i}} \right) \right\rbrack$.

###### ESTR and LSTR Estimates of the Terrorism Incident Series (Negative Binomial).
![](10.1177_0022002714535252-table2)

| Series | α~0~ | *lgdp* | *lgdp*^2^ | *lpop* | β~0~ | *lgdp* | *lgdp*^2^ | *lpop* | γ | *c* | AIC |
|--------|------|--------|-----------|--------|------|--------|-----------|--------|---|-----|-----|
| Domestic_pre (GTD) | −15.41 (−2.51) | 2.03 (2.02) | | 1.30 (7.86) | 20.82 (7.52) | −2.69 (−4.43) | | −0.36 (−1.38) | 0.40 (1.99) | 5.88 (13.03) | −823.65 |
| Domestic_post (GTD) | −0.70 (−0.62) | −0.27 (−2.33) | | 0.92 (2.87) | 2.18 (1.39) | −0.12 (−1.44) | | 0.15 (0.41) | 3.69 (1.74) | 5.79 (54.47) | −614.13 |
| Transnational_pre (GTD) | −6.85 (−0.76) | 3.53 (12.06) | −0.13 (−1.68) | 7.77 (0.97) | −17.04 (−7.61) | 2.15 (1.60) | −0.20 (−12.34) | −7.14 (−0.89) | 1.86 (1.24) | 3.30 (2.78) | −80.71 |
| Transnational_post (GTD) | −76.55 (−18.42) | 10.47 (15.27) | | 1.96 (4.67) | 75.50 (17.15) | −10.52 (−15.32) | | −1.37 (−3.00) | 10.00 | 6.53 (120.85) | −29.66 |
| Location_pre (IT) | 21.55 (10.40) | −5.69 (−3.55) | | 0.38 (1.75) | 92.36 (2.03) | −3.42 (−0.68) | | 0.65 (1.03) | 0.02 (4.08) | 2.72 (2.29) | −86.40 |
| Location_post (IT) | −0.82 (−0.36) | −0.18 (−0.51) | | 0.53 (0.60) | 1.12 (0.44) | 0.03 (0.09) | | 0.04 (0.04) | 10.00 | 5.63 (49.60) | −24.34 |
| Nationality_pre (IT) | 30.05 (30.80) | −6.42 (−45.65) | | 0.41 (3.18) | 69.12 (19.84) | −3.00 (−19.35) | | 0.78 (2.42) | 0.04 (7.20) | 4.09 (15.91) | −33.04 |
| Nationality_post (IT) | 7.22 (1.78) | −0.07 (−0.87) | | −1.47 (−1.50) | −7.22 (−1.76) | −0.18 (−2.26) | | 2.11 (2.13) | 10.00 | 5.40 (64.92) | −8.48 |

*Note*: GTD = Global Terrorism Database; IT = ITERATE; ESTR = exponential smooth transition regression; LSTR = logistic variant of the smooth transition regression model; AIC = Akaike Information Criterion. *t*-statistics are indicated in parentheses. Blank *lgdp*^2^ cells correspond to specifications without the squared term; γ entries of 10.00 are at the constrained upper bound. Since the intercept is positively related to *lgdp*, it is a mistake to think that the negative coefficients on the *lgdp* variables mean that terrorism is always negatively related to *lgdp*.
The essential insight is that the relationship between terrorism and per capita income is not monotonic. Given our use of a negative binomial distribution combined with an ESTR model, the interpretation of the coefficients in [Table 2](#table2-0022002714535252){ref-type="table"} can be difficult since the model is highly nonlinear in its parameters. We rely on [Figures 4](#fig4-0022002714535252){ref-type="fig"} and [5](#fig5-0022002714535252){ref-type="fig"} to display the nonlinear relationship between terrorism and the log of real per capita GDP for GTD and ITERATE terrorism samples, respectively. In panel 1 of [Figure 4](#fig4-0022002714535252){ref-type="fig"}, we display this relationship for Domestic_pre (GTD) when evaluated at the sample mean for *lpop~i~*. Panel 1 shows that increases in *lgdp~i~* act to augment domestic casualty incidents until real per capita GDP reaches US\$1,762 (i.e., exp(7.47) = 1,762) with a maximum of almost seventy-nine incidents. Further increases in real GDP reduce terrorism. In panel 2, post-1993 domestic attacks initially fall, then rise to a maximum of almost twenty-eight incidents, and finally decline as *lgdp~i~* increases. In comparing panels 1 and 2, we discern that there are fewer incidents in the post-1993 period, where the venue of domestic terrorist acts has shifted toward the lower income countries. Panel 3 of [Figure 4](#fig4-0022002714535252){ref-type="fig"} has the largest number of incidents at a per capita income of US\$4,316, consistent with the dominant leftist and nationalist/separatist terrorists favoring richer venues for pre-1993 transnational terrorist attacks. Panel 4, however, shows a substantial clustering of terrorism in countries with per capita GDP levels in the range of US\$800 to US\$1,000 (i.e., exp(6.68) ≅ 800 and exp(6.91) ≅ 1,000). Thus, over time there has been a substantial movement of terrorism toward the low-income countries. 
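Peaks such as the US\$1,762 figure in panel 1 can be located numerically once a response function is in hand. The sketch below grid-searches an ESTR response (equations (3) and (4)) for its maximizing *lgdp*, holding population fixed as in Figures 4 and 5; the coefficients are hypothetical values chosen only to produce a single interior peak, not the paper's estimates.

```python
from math import exp

def estr_response(lgdp, lpop, a, b, gamma, c):
    """Predicted incidents from equation (3) with the ESTR transition (4).
    a = (a0, a1, a2) and b = (b0, b1, b2) are illustrative coefficients."""
    theta = 1.0 - exp(-gamma * (lgdp - c) ** 2)
    lin = a[0] + a[1] * lgdp + a[2] * lpop
    return exp(lin + theta * (b[0] + b[1] * lgdp + b[2] * lpop))

def peak_income(a, b, gamma, c, lpop, lo=5.0, hi=11.0, steps=6000):
    """Grid search for the lgdp that maximizes the response, with
    population held at a fixed (e.g., sample mean) value."""
    grid = [lo + (hi - lo) * k / steps for k in range(steps + 1)]
    return max(grid, key=lambda g: estr_response(g, lpop, a, b, gamma, c))

# Hypothetical coefficients (not the paper's): they yield one interior peak.
a, b, gamma, c, lpop = (-12.0, 2.0, 1.0), (15.0, -2.5, -0.3), 0.5, 6.0, 9.0
star = peak_income(a, b, gamma, c, lpop)   # lgdp at the peak; exp(star) = income
```

With the fitted coefficients in hand, the same search recovers the peak incomes quoted in the text (e.g., exp(7.47) for panel 1 of Figure 4).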
For panels 2 and 4 of [Figure 4](#fig4-0022002714535252){ref-type="fig"}, the post-1993 shifts of the greatest concentration of terrorist attacks to lower per capita GDP levels agree with our priors. Also, consistent with our priors, domestic terrorism peaks at a smaller per capita GDP level than transnational terrorism during 1970 to 1992. Finally, we note the relatively high terrorism activity in some poor countries after 1993, which include some failed states. None of the four panels corresponds to a quadratic relationship.

![Effects of income on GTD casualty incidents.\ *Note:* GTD = Global Terrorism Database.](10.1177_0022002714535252-fig4){#fig4-0022002714535252}

![Effects of income on ITERATE casualty incidents.\ *Note:* ITERATE = International Terrorism: Attributes of Terrorist Events.](10.1177_0022002714535252-fig5){#fig5-0022002714535252}

In panel 1 of [Figure 5](#fig5-0022002714535252){ref-type="fig"}, increases in real per capita income cause the level of transnational terrorism to rise until a maximum of about twenty-two incidents when *lgdp* = 8.63, corresponding to a real per capita GDP of US\$5,633. Subsequent increases in per capita GDP result in a decline in transnational terrorism. In panel 2 of [Figure 5](#fig5-0022002714535252){ref-type="fig"}, the location sample response function for the post-1993 period indicates that, except for the very small number of low-income countries (i.e., those with per capita income levels below US\$261, or exp(5.56) ≅ 261), increases in real per capita GDP raise the level of terrorism until a per capita income level of about US\$480. Thereafter, increases in per capita income levels gradually reduce terrorism, so that most transnational terrorism is bunched in the lower middle-income countries.
Notably, the clustering of the location of transnational terrorist incidents now occurs at much lower income levels than in the pre-1993 period, consistent with the greater dominance of the religious fundamentalist and nationalist/separatist terrorist groups after 1993. Clearly, panel 2 cannot be captured by a quadratic per capita GDP term. The effects of per capita income on the number of terrorist incidents associated with the nationality of the perpetrators are shown in panels 3 and 4. Both response functions have a hump shape, such that the maximum values of terrorist incidents are clustered in the middle-income countries. Again, the maximal values for the post-1993 period occur at much lower income levels than those for the pre-1993 period, indicating that transnational terrorists concentrated their attacks in poorer countries after the start of 1994.

Testing for Nonlinearity in the Presence of Other Determinants of Terrorism {#section11-0022002714535252}
===========================================================================

We now address whether *lgdp~i~* remains a determinant of terrorism in the presence of *lpop~i~* and other explanatory variables that [@bibr20-0022002714535252]), [@bibr33-0022002714535252]), and others identified as potentially important determinants of terrorism. This exercise also allows us to address the omitted variable concern. Specifically, we want to determine whether real per capita GDP levels affect terrorism in the presence of other covariates of terrorism, such as measures of freedom (POLITY, Freedom House), the Rule of Law, ethnic tension, religious tension, education, area, income distribution (the Gini coefficient), and unemployment. Because our goal is to focus on the functional relationship between per capita GDP and terrorism, we do not include every potential control for terrorism. We do, however, include many of the most important ones.
Because some of the covariates are not available for all countries over the entire sample period, the covariate measures, used in the study, are the sample averages over the available dates (e.g., ethnic tension). The testing methodology is not straightforward because the ESTR and LSTR specifications are not convenient for testing the null hypothesis of linearity against the alternative of nonlinearity. To explain, we substitute [equation (4)](#disp-formula4-0022002714535252){ref-type="disp-formula"} into [equation (3)](#disp-formula3-0022002714535252){ref-type="disp-formula"} to obtain$${\hat{T}}_{i} = \exp\left\lbrack {\left( {\text{α}_{0} + \text{α}_{1}lgdp_{i} + \text{α}_{2}lpop_{i}} \right) + \left\{ {1 - \exp\left\lbrack {- \text{γ}\left( {lgdp_{i} - c} \right)^{2}} \right\rbrack} \right\} \times \left( {\beta_{0} + \beta_{1}lgdp_{i} + \beta_{2}lpop_{i}} \right)} \right\rbrack.$$ The test for linearity entails the restriction that γ = 0, so that [equation (8)](#disp-formula8-0022002714535252){ref-type="disp-formula"} becomes$${\widehat{T}}_{i} = \exp\left( {\mathit{\alpha}_{0} + \mathit{\alpha}_{1}lgdp_{i} + \mathit{\alpha}_{2}lpop_{i}} \right),$$ where the values of β~0~, β~1~, β~2~, and *c* are all unidentified under the null hypothesis of linearity. As long as γ = 0, these four coefficients can take on any value without altering the value of the likelihood function. As [@bibr8-0022002714535252]) showed, whenever a parameter is unidentified under the null hypothesis, standard inference on the parameters is not possible. We note, however, that the problem does not exist for testing whether *lpop~i~* influences terrorism (i.e., testing whether α~2~ = β~2~ = 0), since γ and all of the other parameters of [equation (8)](#disp-formula8-0022002714535252){ref-type="disp-formula"} are identified in the null model. 
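To make the mechanics of equation (8) concrete, the sketch below evaluates the ESTR mean over a grid of log incomes and locates the income level at which predicted terrorism peaks. All coefficient values here are invented purely to reproduce a rise-then-fall shape; they are not the estimates reported in Table 2.

```python
import numpy as np

def estr_response(lgdp, lpop, alpha, beta, gamma, c):
    """Evaluate the ESTR mean of equation (8):
    T = exp[(a0 + a1*lgdp + a2*lpop)
            + (1 - exp(-gamma*(lgdp - c)**2)) * (b0 + b1*lgdp + b2*lpop)]."""
    a0, a1, a2 = alpha
    b0, b1, b2 = beta
    theta = 1.0 - np.exp(-gamma * (lgdp - c) ** 2)
    return np.exp(a0 + a1 * lgdp + a2 * lpop
                  + theta * (b0 + b1 * lgdp + b2 * lpop))

# Invented coefficients chosen only to produce a rise-then-fall pattern.
alpha = (-1.0, 0.5, 0.05)   # first-regime intercept and slopes
beta = (3.0, -0.8, 0.0)     # second-regime shift with a negative income slope
gamma, c = 2.0, 7.0         # transition speed and centre (in log income)

grid = np.linspace(5.0, 11.0, 601)   # grid of log real per capita GDP
resp = estr_response(grid, lpop=16.0, alpha=alpha, beta=beta,
                     gamma=gamma, c=c)
peak = grid[np.argmax(resp)]
print(f"predicted incidents peak at lgdp = {peak:.2f} "
      f"(about US${np.exp(peak):,.0f} per capita)")
```

Because θ*~i~* is near zero close to the transition centre *c* and near one far from it, the first-regime slopes dominate near *c* and the combined slopes dominate in the tails, which is exactly how the hump shapes in Figures 4 and 5 arise.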
Although [equation (8)](#disp-formula8-0022002714535252){ref-type="disp-formula"} relies on the ESTR specification, the analogous issue holds for the LSTR specification using [equation (5)](#disp-formula5-0022002714535252){ref-type="disp-formula"}. [@bibr39-0022002714535252]) indicated how to circumvent this so-called Davies' problem in STR models by relying on a third-order Taylor series approximation for θ*~i~*. To explain briefly, we rewrite [equation (4)](#disp-formula4-0022002714535252){ref-type="disp-formula"} as follows:$$\mathit{\theta}_{i} = 1 - \exp\left( {- h_{i}^{2}} \right),$$ where $h_{i} = \mathit{\gamma}^{0.5}\left( {lgdp_{i} - c} \right)$. When we expand [equation (10)](#disp-formula10-0022002714535252){ref-type="disp-formula"} using the third-order approximation and evaluate at *h~i~* = 0 (so that γ = 0), we obtain$$\mathit{\theta}_{i} = a_{0} + a_{1}lgdp_{i} + a_{2}lgdp_{i}^{2} + a_{3}lgdp_{i}^{3}.$$ Substituting [equation (11)](#disp-formula11-0022002714535252){ref-type="disp-formula"} into [equation (3)](#disp-formula3-0022002714535252){ref-type="disp-formula"} and collecting terms in the powers of *lgdp~i~* yield the following nonlinear representation of [equation (8)](#disp-formula8-0022002714535252){ref-type="disp-formula"}: $${\widehat{T}}_{i} = \exp\left\lbrack {c + \sum\limits_{j\, = \, 1}^{4}{c_{j}lgdp_{i}^{j}} + d_{0}lpop_{i} + \sum\limits_{j\, = \, 1}^{3}{d_{j}lgdp_{i}^{j}}\left( {lpop_{i}} \right)} \right\rbrack.$$ If it is possible to restrict all values of the *c~j~* and *d~j~* to equal zero, then we can accept the null hypothesis that terrorism is unaffected by real per capita income levels. As detailed in [@bibr10-0022002714535252]), the LSTR specification also yields a model in the form of [equation (12)](#disp-formula12-0022002714535252){ref-type="disp-formula"}. 
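The logic of this Taylor-approximation test can be illustrated on simulated data. The sketch below is a deliberate simplification: it assumes a Poisson (rather than negative binomial) likelihood, fits it by Newton's method with and without the powers of *lgdp~i~*, and uses a likelihood-ratio chi-square in place of the article's *F*-type statistic. The data-generating values are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n = 400
g = rng.uniform(-2.5, 2.5, n)       # centred log per capita GDP
lpop = rng.normal(0.0, 1.0, n)      # centred log population

# Hump-shaped true mean: quadratic in income inside the exponential.
y = rng.poisson(np.exp(1.0 + 1.5 * g - 0.8 * g**2 + 0.1 * lpop))

def fit_poisson(X, y):
    """Newton-Raphson for a Poisson log-link model; returns (beta, log-lik)."""
    beta = np.zeros(X.shape[1])
    for _ in range(100):
        eta = np.clip(X @ beta, -30, 30)   # guard against overflow
        mu = np.exp(eta)
        step = np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))
        beta += step
        if np.max(np.abs(step)) < 1e-10:
            break
    eta = X @ beta
    return beta, np.sum(y * eta - np.exp(eta))   # log-lik up to a constant

ones = np.ones(n)
X_full = np.column_stack([ones, g, g**2, g**3, g**4, lpop])  # powers of lgdp
X_restr = np.column_stack([ones, lpop])                      # income excluded

_, ll_full = fit_poisson(X_full, y)
_, ll_restr = fit_poisson(X_restr, y)

lr = 2.0 * (ll_full - ll_restr)     # H0: all income coefficients are zero
p_value = chi2.sf(lr, df=4)
print(f"LR = {lr:.1f}, p = {p_value:.2e}")
```

Rejecting the null here mirrors the article's conclusion that the income terms cannot be excluded; with real data, the negative binomial likelihood and the *F* statistics of Table 3 would take the place of this Poisson likelihood-ratio test.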
Given the large number of parameters that would be necessary to estimate in an unrestricted model, we estimate the following restricted form of [equation (12)](#disp-formula12-0022002714535252){ref-type="disp-formula"}:^[18](#fn18-0022002714535252){ref-type="fn"}^ $${\widehat{T}}_{i} = \exp\left\lbrack {c + \sum\limits_{j\, = \, 1}^{n}{c_{j}lgdp_{i}^{j}} + d_{0}lpop_{i} + e_{0}z_{i}} \right\rbrack,$$ where *z~i~* is one of the previously mentioned covariates. In moving from [equation (12)](#disp-formula12-0022002714535252){ref-type="disp-formula"} to [equation (13)](#disp-formula13-0022002714535252){ref-type="disp-formula"}, we simplify by setting *d* ~1~ = *d* ~2~ = *d* ~3~ = 0 and add the single covariate *z~i~*. That is, we enter the covariates one at a time in [equation (13)](#disp-formula13-0022002714535252){ref-type="disp-formula"} and restrict the nonlinearity to appear only in the *lgdp~i~* variable. The test for the effect of real per capita GDP on terrorism is straightforward. If, in [equation (13)](#disp-formula13-0022002714535252){ref-type="disp-formula"}, the null hypothesis that *c* ~1~ = *c* ~2~ = *c* ~3~ = *c* ~4~ = 0 cannot be rejected, then we conclude that *lgdp~i~* has no influence on the terrorism series. If, however, the null hypothesis that *c* ~2~ = *c* ~3~ = *c* ~4~ = 0 cannot be rejected, then we conclude that the effect of *lgdp~i~* on terrorism is linear. Of the eight terrorism casualty series, we focus on domestic terrorism and transnational terrorism based on the perpetrators' nationality in the pre- and post-1993 eras. These four series displayed interesting shifts in per capita income for maximal terrorism over the two eras; hence, we are interested in ascertaining which of the standard covariates remain robust for the two eras. The top portion of [Table 3](#table3-0022002714535252){ref-type="table"} reports the results for the pre-1993 values for domestic terrorism and transnational terrorism by perpetrators' nationality. 
The lower portion of the table contains the corresponding results for the post-1993 data. Column 2 reports the number of observations (Obs.), while columns 3 and 6 report the *p* values of the *F* statistic for the null hypothesis that all of the $c_{j}\left( {j = 1,...,4} \right)$ coefficients equal zero. Since the *p* values of the sample *F* statistic are so small in every case, we can reject the null hypothesis that terrorism is not affected by real per capita GDP (i.e., accept the alternative hypothesis that terrorism is affected by real GDP levels). Although not reported in [Table 3](#table3-0022002714535252){ref-type="table"}, the *p* values of the test for linearity (i.e., the test that *c* ~2~ = *c* ~3~ = *c* ~4~ = 0) are always smaller than .001, so that we can reject the linear specification. The fourth and seventh columns report the various values of *e* ~0~, and the fifth and eighth columns report the associated *t*-statistics for the null hypothesis *e* ~0~ = 0.^[19](#fn19-0022002714535252){ref-type="fn"}^

###### Pretesting for Nonlinearity in the Presence of the Covariates.

![](10.1177_0022002714535252-table3)

**Pre-1993 data: Domestic terrorism / Transnational terrorism (by nationality)**

| Covariates | Obs. | *p*(*F*) | *e*~0~ | *t*-statistic | *p*(*F*) | *e*~0~ | *t*-statistic |
|---|---|---|---|---|---|---|---|
| Freedom House | 153 | .000 | −1.866 | −2.904 | .000 | −1.533 | −3.782 |
| POLITY | 139 | .000 | −0.150 | −0.225 | .000 | −0.152 | −0.252 |
| Rule of law | 112 | .000 | −1.013 | −8.536 | .000 | −0.681 | −5.260 |
| Ethnic tension | 112 | .000 | −0.272 | −2.832 | .000 | −0.228 | −2.628 |
| Religious tension | 112 | .000 | −0.372 | −2.492 | .000 | −0.288 | −3.158 |
| log(Education/population) | 146 | .000 | −0.358 | −0.945 | .000 | −0.785 | −3.127 |
| Log(Area) | 153 | .000 | −0.176 | −1.598 | .000 | −0.154 | −1.444 |
| Gini coefficient | 71 | .000 | 0.149 | 4.449 | .000 | 0.100 | 4.959 |
| Unemployment | 104 | .000 | −0.025 | −0.880 | .000 | 0.035 | 1.415 |

**Post-1993 data: Domestic terrorism / Transnational terrorism (by nationality)**

| Covariates | Obs. | *p*(*F*) | *e*~0~ | *t*-statistic | *p*(*F*) | *e*~0~ | *t*-statistic |
|---|---|---|---|---|---|---|---|
| Freedom House | 162 | .000 | −1.980 | −3.804 | .000 | −1.590 | −3.958 |
| POLITY | 148 | .000 | −1.051 | −2.058 | .000 | −1.949 | −6.004 |
| Rule of law | 131 | .000 | −0.797 | −2.669 | .000 | −0.392 | −3.185 |
| Ethnic tension | 128 | .000 | −0.511 | −3.433 | .000 | −0.120 | −1.115 |
| Religious tension | 128 | .000 | −0.618 | −4.358 | .000 | −0.649 | −7.239 |
| log(Education/population) | 162 | .000 | 0.356 | 0.677 | .000 | 0.547 | 1.618 |
| Log(Area) | 162 | .000 | −0.375 | −3.442 | .000 | −0.173 | −1.484 |
| Gini coefficient | 139 | .000 | −0.026 | −1.031 | .000 | −0.027 | −1.479 |
| Unemployment | 143 | .000 | 0.063 | 2.116 | .000 | 0.027 | 1.078 |

Before discussing [Table 3](#table3-0022002714535252){ref-type="table"}, we introduce two of our control variables. The [@bibr17-0022002714535252]) indices for political rights and civil liberties vary on a scale from 1 to 7, so that their sum goes from 2 to 14, with smaller values indicating more freedom. A sum is typically computed before assigning a dummy value because the two measures are highly correlated. If the sum is 5 or less, the country is deemed free and we assign it a dummy value of 1. Otherwise, we assign the country a dummy of 0.
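The Freedom House coding just described amounts to a simple threshold rule on the summed indices; a minimal sketch (the index values below are hypothetical):

```python
def freedom_dummy(political_rights: int, civil_liberties: int) -> int:
    """Freedom House scores each index from 1 (most free) to 7 (least free);
    a country is coded free (dummy = 1) when the two indices sum to 5 or less."""
    return 1 if political_rights + civil_liberties <= 5 else 0

# Hypothetical countries: (political rights, civil liberties)
print(freedom_dummy(1, 2))  # sum 3 <= 5 -> 1 (free)
print(freedom_dummy(4, 4))  # sum 8 >  5 -> 0 (not free)
```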
The POLITY index reflects a country's adherence to democratic principles and varies from −10 (strongly autocratic) to 10 (strongly democratic; [@bibr28-0022002714535252]). If the POLITY index is 7 or higher, we assign it a dummy value of 1, indicating a relatively democratic country. In [Table 3](#table3-0022002714535252){ref-type="table"}, all four measures of terrorism are negatively and significantly related to the [@bibr17-0022002714535252]) measure, so that increases in civil and political rights reduce terrorism. The point estimates for POLITY are always negative, but POLITY is significant only in the post-1993 period, where greater democracy reduces terrorism. Large values of the Rule of Law index, which varies from 0 to 6, indicate a strong legal system with impartiality and popular observance of the laws ([@bibr22-0022002714535252]), while large values of the Ethnic Tension and Religious Tension variables indicate little ethnic division and little suppression of religious freedoms, respectively ([@bibr22-0022002714535252]). Both of the tension indices vary from 0 to 6. Greater Rule of Law limits terrorism, while increased ethnic or religious tensions (i.e., reduced index values) generally augment terrorism, probably through enhanced grievances. These findings are consistent with the literature---see, for example, [@bibr7-0022002714535252]), [@bibr20-0022002714535252]), and [@bibr1-0022002714535252]). Education levels (i.e., the number of people receiving secondary education) and the remaining covariates are from the [@bibr40-0022002714535252]). Higher education levels are associated with less transnational terrorism in the pre-1993 period, but are statistically insignificant in the other three cases. Greater income inequality (a higher Gini coefficient) is positively related to both forms of terrorism in the pre-1993 period during the reign of the leftists, who wanted to right social wrongs.
Income inequality is not a significant determinant of terrorism after 1993, which suggests that inequality is not motivating the religious fundamentalist or the nationalist/separatist terrorists. The unemployment rate (as a percentage of the total labor force) is positive and marginally significant for post-1993 domestic terrorism, but is not significant in the other three cases. The *essential insight* is that real per capita GDP influences terrorism in every case and that this effect remains nonlinear when standard covariates are included in our analysis.

Concluding Remarks {#section12-0022002714535252}
==================

This article establishes a robust nonlinear relationship between per capita income and various terrorist time series during 1970 to 2010. Unlike most previous articles, this study limits its aggregation of terrorist attacks in order to distinguish domestic from transnational terrorist incidents and the era of leftist prevalence from that of religious fundamentalist and nationalist/separatist prevalence. For transnational terrorism, we also distinguish attacks based on where the attack occurred from attacks based on where the perpetrators originated. By so doing, we establish that terrorist attacks are most concentrated at a middle-income range that varies in a predictable fashion according to the sample examined. For example, terrorist attacks peaked at a lower per capita income level for the perpetrators' country than for the venue country. Thus, the low per capita GDP rationale for terrorism is more descriptive of the perpetrators' home country. When the leftist terrorists were a greater influence prior to 1993, the peak per capita income level for transnational terrorist incidents was higher than when the religious fundamentalist and nationalist/separatist terrorist groups became a greater influence after 1993. Even when the standard controls are added, our nonlinear relationship remains robust.
One reason that the literature failed to uncover a clear and robust income--terrorism relationship is that its aggregation of terrorist incidents and periods introduced too many confounding and opposing influences. Moreover, the type of nonlinearity present in the identified terrorist--income relationships cannot be readily captured by linear or quadratic estimation techniques, in contrast to the extant literature.

**Authors' Note:** We have profited from comments from two anonymous referees. Replication materials are available at the *Journal of Conflict Resolution* website at <http://jcr.sagepub.com/>. The authors would like to thank Aidan Hathaway for his excellent research assistance.

**Declaration of Conflicting Interests:** The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

**Funding:** The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This study was funded, in part, by the US Department of Homeland Security (DHS) through the Center for Risk and Economic Analysis of Terrorist Events (CREATE) at the University of Southern California, Grant 2010-ST-061-RE0001-04. However, any opinions, findings, conclusions, and recommendations are solely those of the authors and do not necessarily reflect the views of DHS or CREATE.

**Supplemental Material:** The online appendix is available at <http://jcr.sagepub.com/supplemental>.

Low per capita gross domestic product (GDP) is the preferred proxy for poverty in the literature (e.g., [@bibr24-0022002714535252]; [@bibr33-0022002714535252]). We do not use the Human Development Index because it does not lend itself to our nonlinear methods. In a different context, [@bibr29-0022002714535252]) showed that the growth--terrorism relationship changed after the end of the cold war. The current article is not about economic growth.
We leave out 1993 because the data for this year are incomplete in the Global Terrorism Database. The diminished influence of the leftists in the late 1980s was documented in [@bibr2-0022002714535252]). We do not examine victims' nationalities because this falsely presumes that terrorists generally know the nationalities of potential victims of an intended transnational terrorist attack. The interested reader should consult [@bibr15-0022002714535252]) for details. This was not true of Shining Path, which killed many people. Using RAND terrorist event data, we can track 586 active terrorist groups during 1970 to 2007. Before 1993, there were 45 active religious terrorist groups, while, after 1993, there were 111 active religious terrorist groups. Thus, the number of these terrorist groups more than doubled after 1993. There were 140 active left-wing terrorist groups before 1993 and 123 active left-wing terrorist groups after 1993. Moreover, the activity level of these leftist terrorist groups declined in the latter period. Active nationalist/separatist terrorist groups increased somewhat from 127 before 1993 to 145 after 1993. Active right-wing terrorist groups numbered 15 before 1993 and 16 after 1993. [@bibr5-0022002714535252]) put forward a dynamic model, in which terrorism increases during economic downturns in rich powerful countries. The interface between their model and our analysis is imperfect, because we are not looking at economic shocks or downturns per se. We are, instead, relating terrorism to income per capita for a cross section of countries. Our theoretical discussion follows that of the literature where per capita GDP proxies workers' opportunity cost. We recognize the imperfection of this proxy. The pre-1993 and post-1993 runs included a different set of countries, since some of the 166 countries were not in existence for the entirety of both sample periods. 
A country that did not exist during one of the subperiods was excluded from the analysis of that period. Similarly, countries not in existence for the preponderance of a subperiod were excluded from the analysis for that period. For example, since Macedonia came into existence in 1991, it was excluded from the pre-1993 runs. For each variable, we used per-year country averages. As such, a country in existence for, say, twelve of the seventeen years of the post-1993 period could be compared to the other countries in the sample. A complete discussion of the sources and the variables used in the study is contained in the Online Appendix. As in the literature on economic growth, we use long-run cross-sectional data to account for the fact that our dependent variable (terrorism) may have a long and varied cross-country response to our key independent variable (GDP). Even without the added degrees of freedom that a dynamic panel would provide, all of our nonlinear terrorism estimates display a significant response to per capita GDP. Future work could apply our analysis to a dynamic panel. Throughout our analysis, each model is estimated using a Poisson as well as a negative binomial model. Because the Poisson models always show excess volatility, they are not reported. The results do depend on whether per capita GDP is measured in logs or in levels. When we use per capita GDP in levels as opposed to logs, the per capita GDP coefficients are positive and insignificant in [equation (2)](#disp-formula2-0022002714535252){ref-type="disp-formula"}. Similar results hold when we use the GTD domestic and transnational terrorism series. Results using the Poisson distribution are available upon request. We also estimate models using only those countries with nonzero levels of terrorism, but the results are similar to those reported here. Given a recent article by [@bibr19-0022002714535252]), we are not concerned about the reversed causality between terrorism and income. 
After correcting for Nickell bias and cross-sectional dependence, these authors showed that terrorism had no significant impact on per capita GDP growth or other macroeconomic aggregates for myriad cross sections. As discussed in [@bibr10-0022002714535252]) and [@bibr39-0022002714535252]), once γ is reasonably large, further increases in γ have little effect on the likelihood function, so that estimation using numerical methods becomes difficult. This can be seen in panels 1 and 3 of [Figure 3](#fig3-0022002714535252){ref-type="fig"}, wherein increases in γ from 8 to 12 do little to influence the shape of θ*~i~*. As such, if the transition between regimes is sharp, it is standard to constrain the upper bound of γ. When γ is estimated at its upper bound of 10, the *t* statistic for the null hypothesis γ = 0 is meaningless and, thus, not reported. No country in this sample has an income level sufficiently large to drive θ*~i~* to 1. In [Table 2](#table2-0022002714535252){ref-type="table"}, all equations, except Transnational_pre (GTD), are best estimated in the exponential form of the smooth transition regression (STR) model. Note that the coefficients in [equation (13)](#disp-formula13-0022002714535252){ref-type="disp-formula"} are related, by the substitution of [equation (11)](#disp-formula11-0022002714535252){ref-type="disp-formula"} into [equation (3)](#disp-formula3-0022002714535252){ref-type="disp-formula"}, as $c = \mathit{\alpha}_{0} + a_{0}\mathit{\beta}_{0},c_{1} = a_{0}\mathit{\beta}_{1} + a_{1}\mathit{\beta}_{0} + \mathit{\alpha}_{1},c_{2} = a_{1}\mathit{\beta}_{1} + a_{2}\mathit{\beta}_{0},c_{3} = a_{2}\mathit{\beta}_{1} + a_{3}\mathit{\beta}_{0},c_{4} = a_{3}\mathit{\beta}_{1}$. Since the number of observations for each equation differs by covariate, the AIC cannot be used to assess fit across the different covariates.
Properties making a chaotic system a good pseudo random number generator. We discuss the properties making a deterministic algorithm suitable to generate a pseudo random sequence of numbers: high value of Kolmogorov-Sinai entropy, high dimensionality of the parent dynamical system, and very large period of the generated sequence. We propose the multidimensional Anosov symplectic (cat) map as a pseudo random number generator. We show what chaotic features of this map are useful for generating pseudo random numbers and investigate numerically which of them survive in the discrete state version of the map. Testing and comparisons with other generators are performed.
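The paper's multidimensional symplectic generalization follows the same pattern as the familiar two-dimensional case. The sketch below is a minimal discrete 2-D Arnold cat map used as a generator; the modulus and seeds are arbitrary choices for illustration, and a toy map like this would not pass serious statistical test suites without the high-dimensional extensions the abstract describes.

```python
def cat_map_prng(seed_x, seed_y, n, modulus=2**31 - 1):
    """Discrete Arnold cat map on the integer torus:
        x' = (x + y)     mod N
        y' = (x + 2*y)   mod N
    The update matrix [[1, 1], [1, 2]] has determinant 1 (symplectic),
    so the map is invertible on the torus.  The continuous cat map is
    Anosov with positive Kolmogorov-Sinai entropy; the discretized
    version inherits good mixing but has a finite period that depends
    delicately on the modulus.  The y-coordinate is emitted as the
    pseudo random stream."""
    x, y = seed_x % modulus, seed_y % modulus
    out = []
    for _ in range(n):
        x, y = (x + y) % modulus, (x + 2 * y) % modulus
        out.append(y)
    return out

stream = cat_map_prng(123456789, 362436069, 5)
print(stream)
```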
namespace ClassLib037
{
    public class Class012
    {
        public static string Property => "ClassLib037";
    }
}
[Genetic instability of microsatellites Mfd41 and DCC in the evolution of chronic myelogenous leukemia]. To explore the relationship between microsatellite genetic instability and the evolution of chronic myelogenous leukemia (CML), the loss of heterozygosity (LOH) and microsatellite instability (MSI) of two polymorphic microsatellite markers, DCC and Mfd41 (located on chromosomes 18q and 17p, respectively), were assayed by standard PCR-silver staining analysis in bone marrow cells from 17 CML patients progressing from chronic phase to accelerated phase or blast crisis. LOH or MSI of the two microsatellites was demonstrated in 8/17 (47.5%) patients in accelerated/blastic phases. For DCC, genetic instability was revealed in 2 of 9 patients in accelerated phase and in 4 of 8 in blast crisis. For Mfd41, genetic instability was revealed in 1 of 9 patients in accelerated phase and in 1 of 8 in blast crisis. Genetic instability of DCC and Mfd41 may play a role in the evolution of CML.
Heparin-sulfate lyase

In enzymology, a heparin-sulfate lyase is an enzyme that catalyzes the elimination of sulfate; it appears to act on linkages between N-acetyl-D-glucosamine and uronate, and the product is an unsaturated sugar. This enzyme belongs to the family of lyases, specifically those carbon-oxygen lyases acting on polysaccharides. The systematic name of this enzyme class is heparin-sulfate lyase. Other names in common use include heparin-sulfate eliminase, heparitin-sulfate lyase, heparitinase I, and heparitinase II.

References

Category:EC 4.2.2
Category:Enzymes of unknown structure
970 N.E.2d 133 (2008)
385 Ill. App.3d 1138
361 Ill. Dec. 133
IN RE ISAIAH J.
No. 2-08-0869.
Appellate Court of Illinois, Second District.
December 18, 2008.
Affirmed.
Attitudes Toward Dating Violence in Early and Late Adolescents in Concepción, Chile. This study compares attitudes toward teen relationship (or dating) violence (TRV) between early and late adolescents in the province of Concepción, Chile. The sample consisted of 770 adolescents aged 11 to 19 (mean age 14.8 years), of whom 389 were female (50.5%) and 381 were male (49.5%). An adapted version of the Scale of Attitudes Towards Intimate Violence was used. Results showed greater justifying attitudes toward violence among early adolescents than among late adolescents in 6 of the 12 items of the scale, with statistical significance of p ≤ .001 in 4 items and in the overall score, and p ≤ .05 in 2 items. In the comparison by sex, male adolescents tended to justify violence more than female adolescents did in one item (p ≤ .001). In the dating/not-dating comparison, statistically significant differences were found in just 2 items, in favor of those who are not in a relationship (p ≤ .05). These results are analyzed and discussed in relation to previous literature. Finally, orientations for future interventions are proposed, and it is suggested that aspects related to sampling and possible modulating variables, such as cognitive development and moral development, be considered in future investigations.
Q: Independence via conditional expectation and characteristic function I have encountered the following statement but can't think of a proper argument to explain/prove it: If for every $A \in \mathcal F$ with $P(A) > 0$ we have $E\left[ \exp(i \lambda X) \mid A \right] = E\left[\exp(i \lambda X)\right]$, then $X$ is independent of $\mathcal F$. I know that the characteristic function completely determines the distribution, and independence is in some sense a distributional property, but I am still not convinced. In particular, the weaker condition $E\left[ X \mid A \right] = E\left[X\right]$ alone does not imply that $X$ and $\mathcal F$ are independent, does it?

A: The statement is true. Proving it takes a fair amount of work, depending on what results you already know (about Fourier transforms and characteristic functions in particular). However, for some very rough intuition, the idea is that any bounded Borel function $f(x)$ can be approximated in some sense by "trigonometric polynomials" of the form $g(x)=a_1 e^{i \lambda_1 x} + \dots + a_n e^{i \lambda_n x}$. Now since $E[g(X) \mid A] = E[g(X)]$ for each such $g$, with some limiting arguments you conclude that $E[f(X) \mid A] = E[f(X)]$ for every bounded Borel function $f$. In particular, taking $f = 1_B$, you have $P(X \in B \mid A) = E[1_B(X) \mid A] = E[1_B(X)] = P(X \in B)$. This shows that the event $\{X \in B\}$ is independent of $A$, and since every event in $\sigma(X)$ is of the form $\{X \in B\}$, you are done. The best way I know to execute the "limiting argument" is the multiplicative system theorem, which is to measurable functions what the Stone-Weierstrass theorem is to continuous functions (and indeed the proof of the former uses ideas from the proof of the latter).

The weaker condition $E\left[ X \mid A \right] = E\left[X\right]$ alone does not imply independence, does it?

Yes, it's false: that condition alone does not give independence.
But notice that here you have this relation not just for the random variable $X$ itself, but for infinitely many random variables from $\sigma(X)$. That makes it sound a little more plausible, hopefully.
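To make the last point concrete, here is a quick numerical check of a standard counterexample: take $X$ uniform on $\{-1, 0, 1\}$ and $A = \{X = 0\}$. The conditional mean matches the unconditional mean, yet the conditional characteristic function does not, so $X$ is not independent of $\sigma(A)$.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.choice([-1.0, 0.0, 1.0], size=200_000)   # X uniform on {-1, 0, 1}
a = (x == 0.0)                                    # the event A = {X = 0}

mean_cond, mean_all = x[a].mean(), x.mean()       # E[X | A] and E[X]: both ~ 0

lam = 1.0
cf_cond = np.exp(1j * lam * x[a]).mean()   # E[exp(i*lam*X) | A] = 1 exactly
cf_all = np.exp(1j * lam * x).mean()       # E[exp(i*lam*X)] = (1 + 2*cos(lam))/3

print(mean_cond, mean_all)                 # both close to 0
print(cf_cond.real, cf_all.real)           # 1.0 versus about 0.69
```

So the single test function $f(x) = x$ cannot detect the dependence, but the family $e^{i\lambda x}$ can, which is exactly why the hypothesis in the statement quantifies over all $\lambda$.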
Induction therapy for the management of early relapsing forms of multiple sclerosis. A critical opinion. The therapeutic approach in multiple sclerosis (MS) is radically changing. From the early stages of MS, a hard-hitting approach to treatment is taken with strong anti-inflammatory drugs being a possible therapeutic option. Areas covered: The concept of induction therapy is emerging in the MS therapeutic scenario. Expert opinion: Not all the MS licensed drugs are suitable candidates for induction therapy. The upcoming challenge will be to identify, after a careful and individual assessment of risk/benefit ratio, the ideal patient who is a candidate to such aggressive therapeutic approach.
package tsi1

import (
    "bufio"
    "bytes"
    "encoding/binary"
    "errors"
    "fmt"
    "hash/crc32"
    "io"
    "os"
    "sort"
    "sync"
    "time"
    "unsafe"

    "github.com/influxdata/platform/models"
    "github.com/influxdata/platform/pkg/bloom"
    "github.com/influxdata/platform/pkg/mmap"
    "github.com/influxdata/platform/tsdb"
)

// Log errors.
var (
    ErrLogEntryChecksumMismatch = errors.New("log entry checksum mismatch")
)

// Log entry flag constants.
const (
    LogEntrySeriesTombstoneFlag      = 0x01
    LogEntryMeasurementTombstoneFlag = 0x02
    LogEntryTagKeyTombstoneFlag      = 0x04
    LogEntryTagValueTombstoneFlag    = 0x08
)

// defaultLogFileBufferSize describes the size of the buffer that the LogFile's buffered
// writer uses. If the LogFile does not have an explicit buffer size set then
// this is the size of the buffer; it is equal to the default buffer size used
// by a bufio.Writer.
const defaultLogFileBufferSize = 4096

// indexFileBufferSize is the buffer size used when compacting the LogFile down
// into a .tsi file.
const indexFileBufferSize = 1 << 17 // 128K

// LogFile represents an on-disk write-ahead log file.
type LogFile struct {
    mu sync.RWMutex
    wg sync.WaitGroup // ref count
    id int            // file sequence identifier

    data []byte        // mmap
    file *os.File      // writer
    w    *bufio.Writer // buffered writer

    bufferSize int  // The size of the buffer used by the buffered writer
    nosync     bool // Disables buffer flushing and file syncing. Useful for offline tooling.

    buf    []byte // marshaling buffer
    keyBuf []byte

    sfile   *tsdb.SeriesFile // series lookup
    size    int64            // tracks current file size
    modTime time.Time        // tracks last time write occurred

    // In-memory series existence/tombstone sets.
    seriesIDSet, tombstoneSeriesIDSet *tsdb.SeriesIDSet

    // In-memory index.
    mms logMeasurements

    // In-memory stats
    stats MeasurementCardinalityStats

    // Filepath to the log file.
    path string
}

// NewLogFile returns a new instance of LogFile.
func NewLogFile(sfile *tsdb.SeriesFile, path string) *LogFile {
    return &LogFile{
        sfile: sfile,
        path:  path,
        mms:   make(logMeasurements),
        stats: make(MeasurementCardinalityStats),

        seriesIDSet:          tsdb.NewSeriesIDSet(),
        tombstoneSeriesIDSet: tsdb.NewSeriesIDSet(),
    }
}

// bytes estimates the memory footprint of this LogFile, in bytes.
func (f *LogFile) bytes() int {
    var b int
    b += 24 // mu RWMutex is 24 bytes
    b += 16 // wg WaitGroup is 16 bytes
    b += int(unsafe.Sizeof(f.id))
    // Do not include f.data because it is mmap'd
    // TODO(jacobmarble): Uncomment when we are using go >= 1.10.0
    //b += int(unsafe.Sizeof(f.w)) + f.w.Size()
    b += int(unsafe.Sizeof(f.buf)) + len(f.buf)
    b += int(unsafe.Sizeof(f.keyBuf)) + len(f.keyBuf)
    // Do not count SeriesFile because it belongs to the code that constructed this Index.
    b += int(unsafe.Sizeof(f.size))
    b += int(unsafe.Sizeof(f.modTime))
    b += int(unsafe.Sizeof(f.seriesIDSet)) + f.seriesIDSet.Bytes()
    b += int(unsafe.Sizeof(f.tombstoneSeriesIDSet)) + f.tombstoneSeriesIDSet.Bytes()
    b += int(unsafe.Sizeof(f.mms)) + f.mms.bytes()
    b += int(unsafe.Sizeof(f.path)) + len(f.path)
    return b
}

// Open reads the log from a file and validates all the checksums.
func (f *LogFile) Open() error {
    if err := f.open(); err != nil {
        f.Close()
        return err
    }
    return nil
}

func (f *LogFile) open() error {
    f.id, _ = ParseFilename(f.path)

    // Open file for appending.
    file, err := os.OpenFile(f.Path(), os.O_WRONLY|os.O_CREATE, 0666)
    if err != nil {
        return err
    }
    f.file = file

    if f.bufferSize == 0 {
        f.bufferSize = defaultLogFileBufferSize
    }
    f.w = bufio.NewWriterSize(f.file, f.bufferSize)

    // Finish opening if file is empty.
    fi, err := file.Stat()
    if err != nil {
        return err
    } else if fi.Size() == 0 {
        return nil
    }
    f.size = fi.Size()
    f.modTime = fi.ModTime()

    // Open a read-only memory map of the existing data.
    data, err := mmap.Map(f.Path(), 0)
    if err != nil {
        return err
    }
    f.data = data

    // Read log entries from mmap.
var n int64 for buf := f.data; len(buf) > 0; { // Read next entry. Truncate partial writes. var e LogEntry if err := e.UnmarshalBinary(buf); err == io.ErrShortBuffer || err == ErrLogEntryChecksumMismatch { break } else if err != nil { return err } // Execute entry against in-memory index. f.execEntry(&e) // Move buffer forward. n += int64(e.Size) buf = buf[e.Size:] } // Move to the end of the file. f.size = n _, err = file.Seek(n, io.SeekStart) return err } // Close shuts down the file handle and mmap. func (f *LogFile) Close() error { // Wait until the file has no more references. f.wg.Wait() if f.w != nil { f.w.Flush() f.w = nil } if f.file != nil { f.file.Close() f.file = nil } if f.data != nil { mmap.Unmap(f.data) } f.mms = make(logMeasurements) return nil } // FlushAndSync flushes buffered data to disk and then fsyncs the underlying file. // If the LogFile has disabled flushing and syncing then FlushAndSync is a no-op. func (f *LogFile) FlushAndSync() error { if f.nosync { return nil } if f.w != nil { if err := f.w.Flush(); err != nil { return err } } if f.file == nil { return nil } return f.file.Sync() } // ID returns the file sequence identifier. func (f *LogFile) ID() int { return f.id } // Path returns the file path. func (f *LogFile) Path() string { return f.path } // SetPath sets the log file's path. func (f *LogFile) SetPath(path string) { f.path = path } // Level returns the log level of the file. func (f *LogFile) Level() int { return 0 } // Filter returns the bloom filter for the file. func (f *LogFile) Filter() *bloom.Filter { return nil } // Retain adds a reference count to the file. func (f *LogFile) Retain() { f.wg.Add(1) } // Release removes a reference count from the file. func (f *LogFile) Release() { f.wg.Done() } // Stat returns size and last modification time of the file. 
func (f *LogFile) Stat() (int64, time.Time) { f.mu.RLock() size, modTime := f.size, f.modTime f.mu.RUnlock() return size, modTime } // SeriesIDSet returns the series existence set. func (f *LogFile) SeriesIDSet() (*tsdb.SeriesIDSet, error) { return f.seriesIDSet, nil } // TombstoneSeriesIDSet returns the series tombstone set. func (f *LogFile) TombstoneSeriesIDSet() (*tsdb.SeriesIDSet, error) { return f.tombstoneSeriesIDSet, nil } // Size returns the size of the file, in bytes. func (f *LogFile) Size() int64 { f.mu.RLock() v := f.size f.mu.RUnlock() return v } // Measurement returns a measurement element. func (f *LogFile) Measurement(name []byte) MeasurementElem { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil } return mm } func (f *LogFile) MeasurementHasSeries(ss *tsdb.SeriesIDSet, name []byte) bool { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return false } // TODO(edd): if mm is using a seriesSet then this could be changed to do a fast intersection. for _, id := range mm.seriesIDs() { if ss.Contains(id) { return true } } return false } // MeasurementNames returns an ordered list of measurement names. func (f *LogFile) MeasurementNames() []string { f.mu.RLock() defer f.mu.RUnlock() return f.measurementNames() } func (f *LogFile) measurementNames() []string { a := make([]string, 0, len(f.mms)) for name := range f.mms { a = append(a, name) } sort.Strings(a) return a } // DeleteMeasurement adds a tombstone for a measurement to the log file. func (f *LogFile) DeleteMeasurement(name []byte) error { f.mu.Lock() defer f.mu.Unlock() e := LogEntry{Flag: LogEntryMeasurementTombstoneFlag, Name: name} if err := f.appendEntry(&e); err != nil { return err } f.execEntry(&e) // Flush buffer and sync to disk. return f.FlushAndSync() } // TagKeySeriesIDIterator returns a series iterator for a tag key. 
func (f *LogFile) TagKeySeriesIDIterator(name, key []byte) tsdb.SeriesIDIterator { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil } tk, ok := mm.tagSet[string(key)] if !ok { return nil } // Combine iterators across all tag keys. itrs := make([]tsdb.SeriesIDIterator, 0, len(tk.tagValues)) for _, tv := range tk.tagValues { if tv.cardinality() == 0 { continue } itrs = append(itrs, tsdb.NewSeriesIDSetIterator(tv.seriesIDSet())) } return tsdb.MergeSeriesIDIterators(itrs...) } // TagKeyIterator returns a value iterator for a measurement. func (f *LogFile) TagKeyIterator(name []byte) TagKeyIterator { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil } a := make([]logTagKey, 0, len(mm.tagSet)) for _, k := range mm.tagSet { a = append(a, k) } return newLogTagKeyIterator(a) } // TagKey returns a tag key element. func (f *LogFile) TagKey(name, key []byte) TagKeyElem { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil } tk, ok := mm.tagSet[string(key)] if !ok { return nil } return &tk } // TagValue returns a tag value element. func (f *LogFile) TagValue(name, key, value []byte) TagValueElem { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil } tk, ok := mm.tagSet[string(key)] if !ok { return nil } tv, ok := tk.tagValues[string(value)] if !ok { return nil } return &tv } // TagValueIterator returns a value iterator for a tag key. func (f *LogFile) TagValueIterator(name, key []byte) TagValueIterator { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil } tk, ok := mm.tagSet[string(key)] if !ok { return nil } return tk.TagValueIterator() } // DeleteTagKey adds a tombstone for a tag key to the log file. 
func (f *LogFile) DeleteTagKey(name, key []byte) error { f.mu.Lock() defer f.mu.Unlock() e := LogEntry{Flag: LogEntryTagKeyTombstoneFlag, Name: name, Key: key} if err := f.appendEntry(&e); err != nil { return err } f.execEntry(&e) // Flush buffer and sync to disk. return f.FlushAndSync() } // TagValueSeriesIDSet returns a series iterator for a tag value. func (f *LogFile) TagValueSeriesIDSet(name, key, value []byte) (*tsdb.SeriesIDSet, error) { f.mu.RLock() defer f.mu.RUnlock() mm, ok := f.mms[string(name)] if !ok { return nil, nil } tk, ok := mm.tagSet[string(key)] if !ok { return nil, nil } tv, ok := tk.tagValues[string(value)] if !ok { return nil, nil } else if tv.cardinality() == 0 { return nil, nil } return tv.seriesIDSet(), nil } // MeasurementN returns the total number of measurements. func (f *LogFile) MeasurementN() (n uint64) { f.mu.RLock() defer f.mu.RUnlock() return uint64(len(f.mms)) } // TagKeyN returns the total number of keys. func (f *LogFile) TagKeyN() (n uint64) { f.mu.RLock() defer f.mu.RUnlock() for _, mm := range f.mms { n += uint64(len(mm.tagSet)) } return n } // TagValueN returns the total number of values. func (f *LogFile) TagValueN() (n uint64) { f.mu.RLock() defer f.mu.RUnlock() for _, mm := range f.mms { for _, k := range mm.tagSet { n += uint64(len(k.tagValues)) } } return n } // DeleteTagValue adds a tombstone for a tag value to the log file. func (f *LogFile) DeleteTagValue(name, key, value []byte) error { f.mu.Lock() defer f.mu.Unlock() e := LogEntry{Flag: LogEntryTagValueTombstoneFlag, Name: name, Key: key, Value: value} if err := f.appendEntry(&e); err != nil { return err } f.execEntry(&e) // Flush buffer and sync to disk. return f.FlushAndSync() } // AddSeriesList adds a list of series to the log file in bulk. 
func (f *LogFile) AddSeriesList(seriesSet *tsdb.SeriesIDSet, collection *tsdb.SeriesCollection) ([]tsdb.SeriesID, error) { var writeRequired bool var entries []LogEntry var i int // Track the index of the point in the batch seriesSet.RLock() for iter := collection.Iterator(); iter.Next(); { seriesID := iter.SeriesID() if seriesSet.ContainsNoLock(seriesID) { i++ continue } writeRequired = true // lazy allocation of entries to avoid common case of no new series if entries == nil { entries = make([]LogEntry, 0, collection.Length()) } entries = append(entries, LogEntry{ SeriesID: seriesID, name: iter.Name(), tags: iter.Tags(), cached: true, batchidx: i, }) i++ } seriesSet.RUnlock() // Exit if all series already exist. if !writeRequired { return nil, nil } f.mu.Lock() defer f.mu.Unlock() seriesSet.Lock() defer seriesSet.Unlock() var seriesIDs []tsdb.SeriesID for i := range entries { // NB - this doesn't evaluate all series ids returned from series file. entry := &entries[i] if seriesSet.ContainsNoLock(entry.SeriesID) { // We don't need to allocate anything for this series. continue } if err := f.appendEntry(entry); err != nil { return nil, err } f.execEntry(entry) seriesSet.AddNoLock(entry.SeriesID) if seriesIDs == nil { seriesIDs = make([]tsdb.SeriesID, collection.Length()) } seriesIDs[entry.batchidx] = entry.SeriesID } // Flush buffer and sync to disk. if err := f.FlushAndSync(); err != nil { return nil, err } return seriesIDs, nil } // DeleteSeriesID adds a tombstone for a series id. func (f *LogFile) DeleteSeriesID(id tsdb.SeriesID) error { f.mu.Lock() defer f.mu.Unlock() e := LogEntry{Flag: LogEntrySeriesTombstoneFlag, SeriesID: id} if err := f.appendEntry(&e); err != nil { return err } f.execEntry(&e) // Flush buffer and sync to disk. return f.FlushAndSync() } // SeriesN returns the total number of series in the file. 
func (f *LogFile) SeriesN() (n uint64) { f.mu.RLock() defer f.mu.RUnlock() for _, mm := range f.mms { n += uint64(mm.cardinality()) } return n } // appendEntry adds a log entry to the end of the file. func (f *LogFile) appendEntry(e *LogEntry) error { // Marshal entry to the local buffer. f.buf = appendLogEntry(f.buf[:0], e) // Save the size of the record. e.Size = len(f.buf) // Write record to file. n, err := f.w.Write(f.buf) if err != nil { // Move position backwards over partial entry. // Log should be reopened if seeking cannot be completed. if n > 0 { f.w.Reset(f.file) if _, err := f.file.Seek(int64(-n), io.SeekCurrent); err != nil { f.Close() } } return err } // Update in-memory file size & modification time. f.size += int64(n) f.modTime = time.Now() return nil } // execEntry executes a log entry against the in-memory index. // This is done after appending and on replay of the log. func (f *LogFile) execEntry(e *LogEntry) { switch e.Flag { case LogEntryMeasurementTombstoneFlag: f.execDeleteMeasurementEntry(e) case LogEntryTagKeyTombstoneFlag: f.execDeleteTagKeyEntry(e) case LogEntryTagValueTombstoneFlag: f.execDeleteTagValueEntry(e) default: f.execSeriesEntry(e) } } func (f *LogFile) execDeleteMeasurementEntry(e *LogEntry) { mm := f.createMeasurementIfNotExists(e.Name) mm.deleted = true mm.tagSet = make(map[string]logTagKey) mm.series = make(map[tsdb.SeriesID]struct{}) mm.seriesSet = nil } func (f *LogFile) execDeleteTagKeyEntry(e *LogEntry) { mm := f.createMeasurementIfNotExists(e.Name) ts := mm.createTagSetIfNotExists(e.Key) ts.deleted = true mm.tagSet[string(e.Key)] = ts } func (f *LogFile) execDeleteTagValueEntry(e *LogEntry) { mm := f.createMeasurementIfNotExists(e.Name) ts := mm.createTagSetIfNotExists(e.Key) tv := ts.createTagValueIfNotExists(e.Value) tv.deleted = true ts.tagValues[string(e.Value)] = tv mm.tagSet[string(e.Key)] = ts } func (f *LogFile) execSeriesEntry(e *LogEntry) { var seriesKey []byte if e.cached { sz := tsdb.SeriesKeySize(e.name, 
e.tags) if len(f.keyBuf) < sz { f.keyBuf = make([]byte, 0, sz) } seriesKey = tsdb.AppendSeriesKey(f.keyBuf[:0], e.name, e.tags) } else { seriesKey = f.sfile.SeriesKey(e.SeriesID) } // Series keys can be removed if the series has been deleted from // the entire database and the server is restarted. This would cause // the log to replay its insert but the key cannot be found. // // https://github.com/influxdata/influxdb/issues/9444 if seriesKey == nil { return } // Check if deleted. deleted := e.Flag == LogEntrySeriesTombstoneFlag // Read key size. _, remainder := tsdb.ReadSeriesKeyLen(seriesKey) // Read measurement name. name, remainder := tsdb.ReadSeriesKeyMeasurement(remainder) mm := f.createMeasurementIfNotExists(name) mm.deleted = false if !deleted { mm.addSeriesID(e.SeriesID) } else { mm.removeSeriesID(e.SeriesID) } // Read tag count. tagN, remainder := tsdb.ReadSeriesKeyTagN(remainder) // Save tags. var k, v []byte for i := 0; i < tagN; i++ { k, v, remainder = tsdb.ReadSeriesKeyTag(remainder) ts := mm.createTagSetIfNotExists(k) tv := ts.createTagValueIfNotExists(v) // Add/remove a reference to the series on the tag value. if !deleted { tv.addSeriesID(e.SeriesID) } else { tv.removeSeriesID(e.SeriesID) } ts.tagValues[string(v)] = tv mm.tagSet[string(k)] = ts } // Add/remove from appropriate series id sets & stats. if !deleted { f.seriesIDSet.Add(e.SeriesID) f.tombstoneSeriesIDSet.Remove(e.SeriesID) f.stats.Inc(name) } else { f.seriesIDSet.Remove(e.SeriesID) f.tombstoneSeriesIDSet.Add(e.SeriesID) f.stats.Dec(name) } } // SeriesIDIterator returns an iterator over all series in the log file. func (f *LogFile) SeriesIDIterator() tsdb.SeriesIDIterator { f.mu.RLock() defer f.mu.RUnlock() ss := tsdb.NewSeriesIDSet() allSeriesSets := make([]*tsdb.SeriesIDSet, 0, len(f.mms)) for _, mm := range f.mms { if mm.seriesSet != nil { allSeriesSets = append(allSeriesSets, mm.seriesSet) continue } // measurement is not using seriesSet to store series IDs. 
mm.forEach(func(seriesID tsdb.SeriesID) { ss.AddNoLock(seriesID) }) } // Fast merge all seriesSets. if len(allSeriesSets) > 0 { ss.Merge(allSeriesSets...) } return tsdb.NewSeriesIDSetIterator(ss) } // createMeasurementIfNotExists returns a measurement by name. func (f *LogFile) createMeasurementIfNotExists(name []byte) *logMeasurement { mm := f.mms[string(name)] if mm == nil { mm = &logMeasurement{ name: name, tagSet: make(map[string]logTagKey), series: make(map[tsdb.SeriesID]struct{}), } f.mms[string(name)] = mm } return mm } // MeasurementIterator returns an iterator over all the measurements in the file. func (f *LogFile) MeasurementIterator() MeasurementIterator { f.mu.RLock() defer f.mu.RUnlock() var itr logMeasurementIterator for _, mm := range f.mms { itr.mms = append(itr.mms, *mm) } sort.Sort(logMeasurementSlice(itr.mms)) return &itr } // MeasurementSeriesIDIterator returns an iterator over all series for a measurement. func (f *LogFile) MeasurementSeriesIDIterator(name []byte) tsdb.SeriesIDIterator { f.mu.RLock() defer f.mu.RUnlock() mm := f.mms[string(name)] if mm == nil || mm.cardinality() == 0 { return nil } return tsdb.NewSeriesIDSetIterator(mm.seriesIDSet()) } // CompactTo compacts the log file and writes it to w. func (f *LogFile) CompactTo(w io.Writer, m, k uint64, cancel <-chan struct{}) (n int64, err error) { f.mu.RLock() defer f.mu.RUnlock() // Check for cancellation. select { case <-cancel: return n, ErrCompactionInterrupted default: } // Wrap in a buffered writer with a 128K buffer. bw := bufio.NewWriterSize(w, indexFileBufferSize) // 128K // Setup compaction offset tracking data. var t IndexFileTrailer info := newLogFileCompactInfo() info.cancel = cancel // Write magic number. if err := writeTo(bw, []byte(FileSignature), &n); err != nil { return n, err } // Retrieve measurement names in order. names := f.measurementNames() // Flush buffer & mmap series block. 
if err := bw.Flush(); err != nil { return n, err } // Write tagset blocks in measurement order. if err := f.writeTagsetsTo(bw, names, info, &n); err != nil { return n, err } // Write measurement block. t.MeasurementBlock.Offset = n if err := f.writeMeasurementBlockTo(bw, names, info, &n); err != nil { return n, err } t.MeasurementBlock.Size = n - t.MeasurementBlock.Offset // Write series set. t.SeriesIDSet.Offset = n nn, err := f.seriesIDSet.WriteTo(bw) if n += nn; err != nil { return n, err } t.SeriesIDSet.Size = n - t.SeriesIDSet.Offset // Write tombstone series set. t.TombstoneSeriesIDSet.Offset = n nn, err = f.tombstoneSeriesIDSet.WriteTo(bw) if n += nn; err != nil { return n, err } t.TombstoneSeriesIDSet.Size = n - t.TombstoneSeriesIDSet.Offset // Write trailer. nn, err = t.WriteTo(bw) n += nn if err != nil { return n, err } // Flush buffer. if err := bw.Flush(); err != nil { return n, err } return n, nil } func (f *LogFile) writeTagsetsTo(w io.Writer, names []string, info *logFileCompactInfo, n *int64) error { for _, name := range names { if err := f.writeTagsetTo(w, name, info, n); err != nil { return err } } return nil } // writeTagsetTo writes a single tagset to w and saves the tagset offset. func (f *LogFile) writeTagsetTo(w io.Writer, name string, info *logFileCompactInfo, n *int64) error { mm := f.mms[name] // Check for cancellation. select { case <-info.cancel: return ErrCompactionInterrupted default: } enc := NewTagBlockEncoder(w) var valueN int for _, k := range mm.keys() { tag := mm.tagSet[k] // Encode tag. Skip values if tag is deleted. if err := enc.EncodeKey(tag.name, tag.deleted); err != nil { return err } else if tag.deleted { continue } // Sort tag values. values := make([]string, 0, len(tag.tagValues)) for v := range tag.tagValues { values = append(values, v) } sort.Strings(values) // Add each value. 
for _, v := range values { value := tag.tagValues[v] if err := enc.EncodeValue(value.name, value.deleted, value.seriesIDSet()); err != nil { return err } // Check for cancellation periodically. if valueN++; valueN%1000 == 0 { select { case <-info.cancel: return ErrCompactionInterrupted default: } } } } // Save tagset offset to measurement. offset := *n // Flush tag block. err := enc.Close() *n += enc.N() if err != nil { return err } // Save tagset offset to measurement. size := *n - offset info.mms[name] = &logFileMeasurementCompactInfo{offset: offset, size: size} return nil } func (f *LogFile) writeMeasurementBlockTo(w io.Writer, names []string, info *logFileCompactInfo, n *int64) error { mw := NewMeasurementBlockWriter() // Check for cancellation. select { case <-info.cancel: return ErrCompactionInterrupted default: } // Add measurement data. for _, name := range names { mm := f.mms[name] mmInfo := info.mms[name] assert(mmInfo != nil, "measurement info not found") mw.Add(mm.name, mm.deleted, mmInfo.offset, mmInfo.size, mm.seriesIDs()) } // Flush data to writer. nn, err := mw.WriteTo(w) *n += nn return err } // logFileCompactInfo is a context object to track compaction position info. type logFileCompactInfo struct { cancel <-chan struct{} mms map[string]*logFileMeasurementCompactInfo } // newLogFileCompactInfo returns a new instance of logFileCompactInfo. func newLogFileCompactInfo() *logFileCompactInfo { return &logFileCompactInfo{ mms: make(map[string]*logFileMeasurementCompactInfo), } } type logFileMeasurementCompactInfo struct { offset int64 size int64 } // MeasurementCardinalityStats returns cardinality stats for this log file. func (f *LogFile) MeasurementCardinalityStats() MeasurementCardinalityStats { f.mu.RLock() defer f.mu.RUnlock() return f.stats.Clone() } // LogEntry represents a single log entry in the write-ahead log. 
type LogEntry struct { Flag byte // flag SeriesID tsdb.SeriesID // series id Name []byte // measurement name Key []byte // tag key Value []byte // tag value Checksum uint32 // checksum of flag/name/tags. Size int // total size of record, in bytes. cached bool // Hint to LogFile that series data is already parsed name []byte // series name, this is a cached copy of the parsed measurement name tags models.Tags // series tags, this is a cached copy of the parsed tags batchidx int // position of entry in batch. } // UnmarshalBinary unmarshals data into e. func (e *LogEntry) UnmarshalBinary(data []byte) error { var sz uint64 var n int var seriesID uint64 var err error orig := data start := len(data) // Parse flag data. if len(data) < 1 { return io.ErrShortBuffer } e.Flag, data = data[0], data[1:] // Parse series id. if seriesID, n, err = uvarint(data); err != nil { return err } e.SeriesID, data = tsdb.NewSeriesID(seriesID), data[n:] // Parse name length. if sz, n, err = uvarint(data); err != nil { return err } // Read name data. if len(data) < n+int(sz) { return io.ErrShortBuffer } e.Name, data = data[n:n+int(sz)], data[n+int(sz):] // Parse key length. if sz, n, err = uvarint(data); err != nil { return err } // Read key data. if len(data) < n+int(sz) { return io.ErrShortBuffer } e.Key, data = data[n:n+int(sz)], data[n+int(sz):] // Parse value length. if sz, n, err = uvarint(data); err != nil { return err } // Read value data. if len(data) < n+int(sz) { return io.ErrShortBuffer } e.Value, data = data[n:n+int(sz)], data[n+int(sz):] // Compute checksum. chk := crc32.ChecksumIEEE(orig[:start-len(data)]) // Parse checksum. if len(data) < 4 { return io.ErrShortBuffer } e.Checksum, data = binary.BigEndian.Uint32(data[:4]), data[4:] // Verify checksum. if chk != e.Checksum { return ErrLogEntryChecksumMismatch } // Save length of elem. e.Size = start - len(data) return nil } // appendLogEntry appends to dst and returns the new buffer. // This updates the checksum on the entry. 
func appendLogEntry(dst []byte, e *LogEntry) []byte { var buf [binary.MaxVarintLen64]byte start := len(dst) // Append flag. dst = append(dst, e.Flag) // Append series id. n := binary.PutUvarint(buf[:], e.SeriesID.RawID()) dst = append(dst, buf[:n]...) // Append name. n = binary.PutUvarint(buf[:], uint64(len(e.Name))) dst = append(dst, buf[:n]...) dst = append(dst, e.Name...) // Append key. n = binary.PutUvarint(buf[:], uint64(len(e.Key))) dst = append(dst, buf[:n]...) dst = append(dst, e.Key...) // Append value. n = binary.PutUvarint(buf[:], uint64(len(e.Value))) dst = append(dst, buf[:n]...) dst = append(dst, e.Value...) // Calculate checksum. e.Checksum = crc32.ChecksumIEEE(dst[start:]) // Append checksum. binary.BigEndian.PutUint32(buf[:4], e.Checksum) dst = append(dst, buf[:4]...) return dst } // logMeasurements represents a map of measurement names to measurements. type logMeasurements map[string]*logMeasurement // bytes estimates the memory footprint of this logMeasurements, in bytes. func (mms *logMeasurements) bytes() int { var b int for k, v := range *mms { b += len(k) b += v.bytes() } b += int(unsafe.Sizeof(*mms)) return b } type logMeasurement struct { name []byte tagSet map[string]logTagKey deleted bool series map[tsdb.SeriesID]struct{} seriesSet *tsdb.SeriesIDSet } // bytes estimates the memory footprint of this logMeasurement, in bytes. func (m *logMeasurement) bytes() int { var b int b += len(m.name) for k, v := range m.tagSet { b += len(k) b += v.bytes() } b += (int(m.cardinality()) * 8) b += int(unsafe.Sizeof(*m)) return b } func (m *logMeasurement) addSeriesID(x tsdb.SeriesID) { if m.seriesSet != nil { m.seriesSet.AddNoLock(x) return } m.series[x] = struct{}{} // If the map is getting too big it can be converted into a roaring seriesSet. 
if len(m.series) > 25 { m.seriesSet = tsdb.NewSeriesIDSet() for id := range m.series { m.seriesSet.AddNoLock(id) } m.series = nil } } func (m *logMeasurement) removeSeriesID(x tsdb.SeriesID) { if m.seriesSet != nil { m.seriesSet.RemoveNoLock(x) return } delete(m.series, x) } func (m *logMeasurement) cardinality() int64 { if m.seriesSet != nil { return int64(m.seriesSet.Cardinality()) } return int64(len(m.series)) } // forEach applies fn to every series ID in the logMeasurement. func (m *logMeasurement) forEach(fn func(tsdb.SeriesID)) { if m.seriesSet != nil { m.seriesSet.ForEachNoLock(fn) return } for seriesID := range m.series { fn(seriesID) } } // seriesIDs returns a sorted set of seriesIDs. func (m *logMeasurement) seriesIDs() []tsdb.SeriesID { a := make([]tsdb.SeriesID, 0, m.cardinality()) if m.seriesSet != nil { m.seriesSet.ForEachNoLock(func(id tsdb.SeriesID) { a = append(a, id) }) return a // IDs are already sorted. } for seriesID := range m.series { a = append(a, seriesID) } sort.Slice(a, func(i, j int) bool { return a[i].Less(a[j]) }) return a } // seriesIDSet returns a copy of the logMeasurement's seriesSet, or creates a new // one func (m *logMeasurement) seriesIDSet() *tsdb.SeriesIDSet { if m.seriesSet != nil { return m.seriesSet.CloneNoLock() } ss := tsdb.NewSeriesIDSet() for seriesID := range m.series { ss.AddNoLock(seriesID) } return ss } func (m *logMeasurement) Name() []byte { return m.name } func (m *logMeasurement) Deleted() bool { return m.deleted } func (m *logMeasurement) createTagSetIfNotExists(key []byte) logTagKey { ts, ok := m.tagSet[string(key)] if !ok { ts = logTagKey{name: key, tagValues: make(map[string]logTagValue)} } return ts } // keys returns a sorted list of tag keys. func (m *logMeasurement) keys() []string { a := make([]string, 0, len(m.tagSet)) for k := range m.tagSet { a = append(a, k) } sort.Strings(a) return a } // logMeasurementSlice is a sortable list of log measurements. 
type logMeasurementSlice []logMeasurement func (a logMeasurementSlice) Len() int { return len(a) } func (a logMeasurementSlice) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a logMeasurementSlice) Less(i, j int) bool { return bytes.Compare(a[i].name, a[j].name) == -1 } // logMeasurementIterator represents an iterator over a slice of measurements. type logMeasurementIterator struct { mms []logMeasurement } // Next returns the next element in the iterator. func (itr *logMeasurementIterator) Next() (e MeasurementElem) { if len(itr.mms) == 0 { return nil } e, itr.mms = &itr.mms[0], itr.mms[1:] return e } type logTagKey struct { name []byte deleted bool tagValues map[string]logTagValue } // bytes estimates the memory footprint of this logTagKey, in bytes. func (tk *logTagKey) bytes() int { var b int b += len(tk.name) for k, v := range tk.tagValues { b += len(k) b += v.bytes() } b += int(unsafe.Sizeof(*tk)) return b } func (tk *logTagKey) Key() []byte { return tk.name } func (tk *logTagKey) Deleted() bool { return tk.deleted } func (tk *logTagKey) TagValueIterator() TagValueIterator { a := make([]logTagValue, 0, len(tk.tagValues)) for _, v := range tk.tagValues { a = append(a, v) } return newLogTagValueIterator(a) } func (tk *logTagKey) createTagValueIfNotExists(value []byte) logTagValue { tv, ok := tk.tagValues[string(value)] if !ok { tv = logTagValue{name: value, series: make(map[tsdb.SeriesID]struct{})} } return tv } // logTagKeySlice is a sortable list of log tag keys. type logTagKeySlice []logTagKey func (a logTagKeySlice) Len() int { return len(a) } func (a logTagKeySlice) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a logTagKeySlice) Less(i, j int) bool { return bytes.Compare(a[i].name, a[j].name) == -1 } type logTagValue struct { name []byte deleted bool series map[tsdb.SeriesID]struct{} seriesSet *tsdb.SeriesIDSet } // bytes estimates the memory footprint of this logTagValue, in bytes. 
func (tv *logTagValue) bytes() int { var b int b += len(tv.name) b += int(unsafe.Sizeof(*tv)) b += (int(tv.cardinality()) * 8) return b } func (tv *logTagValue) addSeriesID(x tsdb.SeriesID) { if tv.seriesSet != nil { tv.seriesSet.AddNoLock(x) return } tv.series[x] = struct{}{} // If the map is getting too big it can be converted into a roaring seriesSet. if len(tv.series) > 25 { tv.seriesSet = tsdb.NewSeriesIDSet() for id := range tv.series { tv.seriesSet.AddNoLock(id) } tv.series = nil } } func (tv *logTagValue) removeSeriesID(x tsdb.SeriesID) { if tv.seriesSet != nil { tv.seriesSet.RemoveNoLock(x) return } delete(tv.series, x) } func (tv *logTagValue) cardinality() int64 { if tv.seriesSet != nil { return int64(tv.seriesSet.Cardinality()) } return int64(len(tv.series)) } // seriesIDSet returns a copy of the logTagValue's seriesSet, or creates a new one. func (tv *logTagValue) seriesIDSet() *tsdb.SeriesIDSet { if tv.seriesSet != nil { return tv.seriesSet.CloneNoLock() } ss := tsdb.NewSeriesIDSet() for seriesID := range tv.series { ss.AddNoLock(seriesID) } return ss } func (tv *logTagValue) Value() []byte { return tv.name } func (tv *logTagValue) Deleted() bool { return tv.deleted } // logTagValueSlice is a sortable list of log tag values. type logTagValueSlice []logTagValue func (a logTagValueSlice) Len() int { return len(a) } func (a logTagValueSlice) Swap(i, j int) { a[i], a[j] = a[j], a[i] } func (a logTagValueSlice) Less(i, j int) bool { return bytes.Compare(a[i].name, a[j].name) == -1 } // logTagKeyIterator represents an iterator over a slice of tag keys. type logTagKeyIterator struct { a []logTagKey } // newLogTagKeyIterator returns a new instance of logTagKeyIterator. func newLogTagKeyIterator(a []logTagKey) *logTagKeyIterator { sort.Sort(logTagKeySlice(a)) return &logTagKeyIterator{a: a} } // Next returns the next element in the iterator. 
func (itr *logTagKeyIterator) Next() (e TagKeyElem) { if len(itr.a) == 0 { return nil } e, itr.a = &itr.a[0], itr.a[1:] return e } // logTagValueIterator represents an iterator over a slice of tag values. type logTagValueIterator struct { a []logTagValue } // newLogTagValueIterator returns a new instance of logTagValueIterator. func newLogTagValueIterator(a []logTagValue) *logTagValueIterator { sort.Sort(logTagValueSlice(a)) return &logTagValueIterator{a: a} } // Next returns the next element in the iterator. func (itr *logTagValueIterator) Next() (e TagValueElem) { if len(itr.a) == 0 { return nil } e, itr.a = &itr.a[0], itr.a[1:] return e } // FormatLogFileName generates a log filename for the given index. func FormatLogFileName(id int) string { return fmt.Sprintf("L0-%08d%s", id, LogFileExt) }
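The on-disk layout produced by appendLogEntry and parsed by UnmarshalBinary above — a flag byte, a uvarint series ID, three uvarint-length-prefixed fields (name, key, value), and a trailing big-endian CRC32-IEEE checksum of everything before it — can be sketched as a standalone program. This is an illustrative re-implementation outside the package, with invented field values, not the package's own API:

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/crc32"
)

// encodeEntry mirrors the entry layout: flag byte, uvarint series ID,
// uvarint-length-prefixed name/key/value, then a big-endian CRC32-IEEE
// checksum computed over all preceding bytes.
func encodeEntry(flag byte, seriesID uint64, name, key, value []byte) []byte {
	var buf [binary.MaxVarintLen64]byte
	dst := []byte{flag}

	// Series ID as a uvarint.
	n := binary.PutUvarint(buf[:], seriesID)
	dst = append(dst, buf[:n]...)

	// Each field is prefixed by its uvarint-encoded length.
	for _, field := range [][]byte{name, key, value} {
		n = binary.PutUvarint(buf[:], uint64(len(field)))
		dst = append(dst, buf[:n]...)
		dst = append(dst, field...)
	}

	// Checksum covers everything written so far.
	chk := crc32.ChecksumIEEE(dst)
	binary.BigEndian.PutUint32(buf[:4], chk)
	return append(dst, buf[:4]...)
}

func main() {
	b := encodeEntry(0x00, 1, []byte("cpu"), []byte("host"), []byte("server01"))
	// 1 flag + 1 id + (1+3) + (1+4) + (1+8) + 4 checksum = 24 bytes.
	fmt.Println(len(b))
}
```

A decoder would walk the buffer in the same order, recompute the checksum over the consumed bytes, and compare it against the stored value, which is exactly how UnmarshalBinary detects truncated or corrupted entries.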
The present invention pertains to a process for continuously preparing powder lubricants for use in dry wiredrawing and/or cold metal rolling, to the apparatus for putting the process into practice, and to the powder lubricants thus obtained.

Wiredrawing generally denotes a process in which the material being worked is cold-deformed, without removal of chips, by pulling it through special matrices called dies so as to reduce its section to the desired diameter or to profile the section according to the desired shape. Cold rolling is instead a process by which the material being worked is cold-deformed, without removal of chips, by means of special rotating rolls. In both cases the friction of the wire against the die or rolls is obviously very strong. To eliminate or reduce this friction it is necessary to resort to lubrication, which consists in interposing substances, which may be greasy, solid or liquid and are called lubricants, between the sliding surfaces.

Powder lubricants are currently used increasingly in the field of dry wiredrawing and/or cold rolling. They generally consist of metal salts of fatty acids with added inert mineral fillers and additives. The first widespread procedure for preparing powder lubricants consisted in mixing the components in a mixer until a homogeneous mass was achieved; the product was then distributed into appropriate tray-like containers, and finally said containers were placed in a furnace in which baking took place. The finished product thus obtained was then ground and brought to the desired granulometry. Clearly such a procedure required very long working times, gave a reduced yield and involved a considerable waste of manpower and energy, while offering insufficient safety to the persons attending the apparatus.

A further production method, substantially quicker and less laborious than the preceding one, provides for the use of a bladed mixer, heated to a temperature of 100° to 300° C., in which the components are simultaneously mixed and baked over a period ranging between 60 and 90 minutes. The mixture is afterwards taken out and then ground. Although this last-mentioned method is better than the preceding one, it suffers from the disadvantage that at the end of the mixing and baking operations the production cycle must be interrupted to allow discharge of the baked product and charging of the raw materials needed for the next working cycle.
The news channel’s helicopter captured the “complete gridlock” on the 405 motorway, one of the busiest and most congested routes in the United States. ABC7 shared the footage on social media, joking that the motorists appeared to be attempting the Mannequin Challenge. The footage racked up millions of views within a few hours of being uploaded, with Facebook user Gps Erick commenting: “Are you sure that's traffic? Maybe they decorated the 405 with Christmas lights.”
MEDREG and the Energy Community Regulatory Board (ECRB), representing energy regulators of the Mediterranean and the Eastern and Black Sea regions, joined forces to review the procedures available to energy consumers to handle complaints and settle disputes. The two organisations also proposed best practices.

This report maps out the current status and nature of complaint handling, dispute settlement and consumer awareness in Eastern European and Mediterranean countries. It is accompanied by recommendations applicable to disputes between consumers and their providers. Based on a survey answered by 19 members from MEDREG and ECRB, the report reveals that, whatever the current level of protection, consumer protection is attracting growing interest.

The main findings gleaned from the survey are:

- All respondent countries have energy consumer protection policies in place. In most cases, National Regulatory Authorities (NRAs) are responsible for reviewing complaints and resolving disputes, with clear procedures in place, while in a few cases other entities, such as judicial institutions, handle them.
- In almost all respondent countries, the energy service provider informs customers about relevant information concerning prices, connection and disconnection rules and fees, terms of bill payment, and dispute rules.
- Consumers are informed about their right to complain via contracts, websites and leaflets.
- In most countries, consumers raise a complaint through the supplier's own mechanisms in the first instance, before turning to the regulator. Various means are available to customers to contact the regulator and address their complaints, the most innovative being the use of messaging applications and social media.
- All regulators have access to complaint-related data, which is ensured by law.
- Billing and metering issues are the most common topics of complaint.
- All NRAs must address complaints within specific periods, ensuring that customers receive redress or treatment of their complaint within an adequate timeframe.
- In nearly all cases, including recourse to voluntary Alternative Dispute Resolution (ADR) procedures, the final settlement determined by the regulator or reached by the parties is binding and, in the case of a regulatory decision, subject to fines and enforcement if the service provider does not comply.

As a next step, ECRB and MEDREG will seek to publicly discuss the findings of this report, to bring the protection of customers' rights to the attention of high-level stakeholders and to ensure more accountable responses from the authorities.

MEDREG is co-funded by the European Union. The contents of this document are the sole responsibility of MEDREG and can under no circumstances be regarded as reflecting the position of the European Union.
The grass is spotty; the dirt patches shine through. The field isn’t what you’d expect for an Ultimate Frisbee league, as it is centered in the outfield of a prison softball field. Yet a little over a dozen men, convicts, stand at the ready. Some wear gloves, some are shirtless with all manner of tattoos, and the field is marked with orange pylons commandeered from the prison flag football league. This is Frisbee, prison Frisbee. When people think of federal prison they think of hardened convicts. They think of razor wire and gun towers. They think of stabbings, stompings, and race conflicts. They think of prison guards, their mace and their batons. All of these thoughts are applicable. Just yesterday someone was stomped in the prison chow hall for sitting at the wrong table. Prison is prison, and what happens in prison simply happens. But perhaps when people think about prison they should also think of sports leagues, camaraderie, and healthy competition, albeit competition with teeth. After all, this is prison sports we’re talking about, not an intramural league at a liberal arts college. Most of us are serving sentences in excess of five or even 10 years; some may never go home. As the Frisbee players of FCI Petersburg – a medium security federal prison in Petersburg, Virginia – group up, they start to throw around a few Frisbees. Long games of catch ensue in the pre-game evening light. Games here are often pick-up and last from 6 PM to 8 PM. These times coincide with the “activities moves,” when the reinforced internal prison gates – gates topped with razor wire and security cameras, of course – are opened and prisoners are allowed to move to different locations within the prison. While there are weeks when only one or two games are played, there have literally been complete months when everyone comes out and plays every single night.
Again, this isn’t regular Frisbee, it is prison Frisbee, and the stakes aren’t about wins on a scoreboard – of which we have none – but more about identity and a sense of being. Hey, we’re in prison and we need something to grasp in an effort to find meaning in life, even if for only a few hours a week, and even if only a small part of our identity.
The pharmacokinetics of 2,2',5,5'-tetrachlorobiphenyl and 3,3',4,4'-tetrachlorobiphenyl and its relationship to toxicity. The pharmacokinetics of two toxicologically diverse tetrachlorobiphenyls (TCBs) were measured in mice. After dosing to apparent steady-state conditions, 2,2',5,5'-TCB was found to have a tissue elimination half-life of between 1.64 and 2.90 days. The half-life of 3,3',4,4'-TCB was similar, ranging from 1.07 to 2.60 days. Systemic clearance and volume of distribution estimates were also similar for the two TCB isomers. The 3,3',4,4'-isomer had a substantially greater partitioning from serum into adipose, liver, and thymic tissues. With dosing regimens developed using these measured pharmacokinetic parameters, experiments were undertaken to compare toxic potency of these two TCBs when similar tissue concentrations of the two isomers were achieved in target and storage tissues. These studies demonstrated that thymic atrophy occurs at lower 3,3',4,4'-TCB doses and tissue concentrations than those required to produce hepatotoxicity. These two organ toxicities were produced only by 3,3',4,4'-TCB despite the fact that equivalent or higher tissue concentrations of 2,2',5,5'-TCB were achieved in vivo in all tissues. We conclude that the in vivo difference in the toxic potency of these two TCB isomers does not result from the significant differences in their tissue disposition, elimination, and ultimate bioaccumulation.
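The half-life, clearance, and volume-of-distribution figures reported above are tied together by the standard one-compartment, first-order relations t½ = ln 2 / k and k = CL / Vd. As an illustrative sketch (the clearance and volume numbers below are made up for the example, not the study's measured values), the reported half-lives of roughly 1–3 days correspond to elimination rate constants of about 0.24–0.42 per day:

```python
import math

def rate_constant_from_half_life(t_half_days):
    """First-order elimination rate constant k (1/day) from a half-life, k = ln2 / t1/2."""
    return math.log(2) / t_half_days

def half_life_from_clearance(cl_l_per_day, vd_l):
    """Half-life (days) from systemic clearance CL and volume of distribution Vd."""
    k = cl_l_per_day / vd_l          # k = CL / Vd
    return math.log(2) / k           # t1/2 = ln 2 / k

# Reported tissue half-life range for 2,2',5,5'-TCB: 1.64 to 2.90 days
for t in (1.64, 2.90):
    print(f"t1/2 = {t:.2f} d  ->  k = {rate_constant_from_half_life(t):.3f} /day")

# Hypothetical CL and Vd, chosen only so the result lands inside that range
print(f"CL = 0.5 L/day, Vd = 2 L  ->  t1/2 = {half_life_from_clearance(0.5, 2.0):.2f} days")
```

With these relations one can see why, as the abstract concludes, similar half-lives and clearances for the two isomers imply that their toxicity difference cannot be explained by disposition alone.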
"[ Men Grunting ]" "There!" "All right!" "Unload those barrels over that way!" "Just set it on my shoulders." "That's right." "Good." "All of it." "That'll do it." "By the gods!" "Hmm." "[ Chuckling ]" "Nice chest we got here, boys." "[ Men Laugh ]" "[ Man ] What about the legs, Captain?" "[ Captain ] Hey!" "Step aside, ya mongrels." "The island's deserted, Captain." "Good." "When King Zolas realizes we grabbed... his, uh, family jewels, [ All Chuckling ] he'll have every bounty hunter from here to Hellespont lookin' for us." " Unload the rest of it." "We'll bury it there." "Aye, aye, Captain!" "Let's move it!" "You know I wouldn't question you in front of the men, Captain, but the Charybdean Sea is a known graveyard for ships, and it's our only way outta here." "That's why Zolas would never look for us here." "Don't worry." "If we die, we die rich." "[ Chuckling ]" "[ Shovel Banging ] Captain!" "You better take a look at this." "[ Man ] What in Tartarus?" "[ Man ] Over here." "What is it, Captain?" "No idea." "Dig it up." "[ Man ] Go on." "You heard the captain." "[ Hercules ] Watch the rope!" "It's about to give!" " [ Grunts ]" " Bromius!" "What happened?" "Give him a hand!" "Pull him out of there!" "You all right?" "Fine." "We've stopped the flooding, Captain, but she won't hold for long." "The storm's movin' this way, Cercetes." "We're gonna have to make some repairs." "You think we can do it before the storm hits?" "Let's not push our luck." "We've already lost too many men." "You're right." "We'll put in and wait for clear skies." "Sorry, Hercules, but I'm gonna have to get ya back to Corinth a little later than I hoped to." "Don't worry about it." "You're all right." "I'm just glad we're on dry land." "I just hope our ship survives the storm." "Ooh, spooky." "[ Low Growling ]" "We're not the only ones here." "Yeah." "Somebody's been busy." "Well, well, well." "By the gods!" "I, uh" " I think I found something!" "There's more." "Over here." 
"Oh, look at all this stuff!" "We're rich!" "We're rich!" "[ Boisterous Laughter ]" "They were burying this stuff." "You thinking what I'm thinking?" "Pirates." "So where did they go?" "[ Chuckles ] Everywhere." "And they left in a hurry." "[ Man Narrating ] This is the story of a time long ago, a time of myth and legend, when the ancient gods were petty and cruel, and they plagued mankind with suffering." "Only one man dared to challenge their power" "Hercules." "Hercules possessed a strength the world had never seen, a strength surpassed only by the power of his heart." "He journeyed the earth, battling the minions of his wicked stepmother, Hera, the all-powerful queen of the gods." "But wherever there was evil, wherever an innocent would suffer, there would be..." "Hercules." " [ Screeching ] - [ Roaring ]" "Hail!" "[ Both Laughing ]" "Hey!" "Don't celebrate yet, fellas." "This treasure isn't yours." "What?" "We found it!" "Yeah!" "Finders keepers!" "These chests have Corillian seals." "So?" "They belong to King Zolas." "Well, there's no seal on my bag!" "Yeah, mine neither!" "Shut up!" "Hercules is right." "Hey, listen, tree trunk." "You're not the captain!" "No, he's not!" "But I am." "We return the treasure, and that's that." "Hercules, there's a lot of equipment back there." "We could use it to repair the boat." "Yeah." "But first, let's find out who left it here, or at least why." "We'll split up and meet back here later." "I'll stay here and set up camp." "Paxxon, Bromius, come with me." "Why does Monicles get to hang out and relax?" "Paxxon." "What?" "Now!" "Oh, yeah, lookin' forward to this." "Well, Iolaus." "This is another fine mess you've gotten us into." "Funny." "Very funny." "Close-knit crew, huh?" "Yeah, like a family." "Remind me never to hitch a ride with strangers again." "Actually, that's pretty good advice." "Oh, thanks, Hercules." "What in Tartarus is that?" "[ Hercules ] It's, uh, some sort of cocoon." "What for?" "Giant butterfly?" 
"[ Chuckles ]" "Somehow, I don't think it's gonna turn out to be that friendly." "[ Screaming ] How come you're always right?" "It's a half-god thing." "Yeah." "Disgusting!" "[ Hercules ] Looks like he died defending himself." " From what?" " [ Groans ] It stinks." "[ Moans ]" "[ Retching ]" " What happened to his face?" "Looks like it just melted off." "We found something back there." "There's a cocoon." "What kind of cocoon?" "I'm not sure." "I've never seen anything like it." "Hercules, uh, you know, what came out of that cocoon could have" " It's possible." " Ah, you're kidding, right?" "I guess we know why those pirates left in such a hurry." "Well, we're not gonna be going anywhere with this storm coming on." "Oh, great." "We'll just sit around and wait for our faces to melt off." "If we stay together, we'll be fine." "As soon as the storm passes, we'll get out of here." "[ Thunderclaps ]" "Let's bury him." "What's takin' 'em so long?" "My, my." "There once was a sailor named Monicles... who couldn't be rich 'cause of Hercules." "As soon as he left, [ Hissing ]" "I started my theft." "[ Rustling ]" "The rest will have to wait till later." "Hello!" "[ Screaming ]" "Something's cookin'." "Monicles?" "[ Thunderclap ]" "[ Sighs ] Look what we have here." " I guess his pockets weren't big enough." " That snake!" "So where is he, huh?" "Maybe he tried to take his chances without us." "I don't think so." "He may be greedy, but he's not stupid." "Could have fooled me." "So what are you saying, man?" "Huh?" "He's dead, isn't he?" "He's dead, isn't he?" " He's dead and his whole face is melted off!" " I'm so sick of your whining!" "But, man, he's dead!" "His face is all melted off!" "Grow a spine, you coward!" "Don't push me, you great big oaf!" "Come on" "Hey, hey, hey, hey, hey!" "Cut it out, both of you!" "This isn't helping us find Monicles." "Find him?" "Why bother to find him?" "He's dead!" "We can't be sure of that." "We gotta go and look for him."
"I'm not goin' anywhere." "Suit yourself." "Yell if you run into any trouble." "Bye-bye, Paxxon." "Have fun!" "No, I'm not goin' anywhere." "No, anywhere." "I'm not goin' anywhere." "[ Thunderclap ]" "Hey, wait for me!" "Guys?" "Hello?" "Monicles!" "[ Rustling ] - [ All Gasp ]" "What was that?" "I do believe..." "we have company." "Okay." "Back up slowly." "No, wait!" "My leg!" " Woof." " You all right?" " It's just a flesh wound." " Well, take care of it." "[ Hercules ] Iolaus, let's go!" "Hello." "Ah!" "Nice to see ya again." "How ya doin'?" "[ Groans ]" "Who are you?" "Who are you?" "No, let me guess." "Sloping forehead, dragging knuckles." "I'm thinkin' orangutan." "Maybe gorilla." " [ Moaning ]" " Ah, that must be your little chimp." "I'm laughing inside." "We weren't tryin' to hurt you." "Yeah, right." "Could have fooled me." "This is Iolaus." "I'm Hercules." "Never thought the mighty Hercules would stoop to bounty hunting." "What's wrong?" "Hero business not paying so well these days?" "Not as well as piracy, I take it." "You stole the treasure, huh?" "Yeah, yeah." "Look, whatever King Zolas is paying you, I'll double it." "But we have to get out of here now." "King Zolas didn't send us." "We don't take bribes." "Yeah, especially from pirates." "So you're not after me?" "No." "Well, in that case, I'm Nebula," "Captain of the Leviathan." "And you boys have come to the wrong place at the wrong time." "[ Voices Screaming ]" "This ought to stop the bleeding." "I'm fine." "Let's go." "Take it easy." "Paxxon, give me a hand." "Paxxon?" "Paxxon?" "That coward." "Probably got spooked by his own shadow!" "Boo!" "[ Gasps ] What in Tartarus are you doing?" " You jackass." " Ooh, man, the look on your face." "Well, who's the coward now, huh?" "Paxxon!" "That's enough." "I'm tryin' to make a point here, Captain." "If the point is you're a fool, you've made it." "Now shut up!" "I'm gonna tear your head off." " I'm gonna rip off your arms and beat you with 'em!"
" Nyah, nyah, nyah, nyah, nyah." "Well, first you'll have to catch me!" "Paxxon, that's enough." " [ Laughing ]" "I said that's enough!" "Whoa!" "What was that?" " [ Screeching ] - [ Screams ]" "[ Screaming ]" "[ Screaming ]" "[ Screaming Continues ]" "It took Bromius!" "Oh, man!" "It took Bromius!" "[ Iolaus ] "It" what?" " I" " I don't know!" "[ Strained Voice ] Hercules, help me!" "Hang on, Cercetes." "We'll get you out of here." " You ever seen anything like this before?" " I have." "Out of the way." "I know you don't want him to keep suffering." "Please, anything." "The pain." "You better know what you're doing." "I do." " Oh!" " [ Gasps ]" "Why?" "The same thing happened to my first mate!" "I did everything that I could, but he died slow and painful." "Get it?" "That must be the body we found." "I did your friend a favor." "Now get your hands off me." "With pleasure." "Keep talking." "I landed here with a crew of 20." "We found something in the ground and I told them to dig it up." "Whatever that thing is took them all in less than a day... because I was curious about what it was." " So what happened to your ship?" " The hurricane set it adrift." "It's at the bottom of the sea by now." "Sorry about your crew." "But where I come from, we don't give up on people." "Is that right?" "Well, I admire your idealism, Hercules." "I really do." "But in case you haven't noticed, the rules are a little different here." "And by the way, if I end up like him," "I hope one of you men has the guts to do the same for me." "Hercules." "Look at this." "What do you think it is?" "It's blood." "[ Paxxon ] Blood?" " Cercetes must have wounded it." " And look what happened to him." "Did you see it?" "Well, it-it was too fast." "It moves in the shadows." "We never saw it coming." "Then we've got no choice." "We find it before it finds us." "[ Thunderclap ]" "This way." "In there." "[ Groaning ]" "Something I can help you with?" "Yeah, I was, uh-- I was just lookin' at your tattoos."
"Look." "Don't touch." "Didn't they hurt?" "Only the first time." "What's that?" "Poseidon's trident." "Keeps the wind at my back." "I see you have an Eastern calendar." "Yeah, helps me keep track of the seasons." "You've been to the East." "Well, that's where we both learned to fight, I take it." "How long were you in prison?" "On your neck, that's a Spartan prison marking." "[ Chuckles ] Funny." "I don't remember seeing you there." "You could say I gave some people... directions." "Well, I bet you did." "Something you wanna say to me?" "Only that I don't trust you." "That's your problem, 'cause until we get off this rock, you don't have much of a choice." "When we do get off this rock, you and that treasure you took are going back to King Zolas." "If you can get us out of here alive, Hercules, I'll think about it." "You do that." "Hercules, this is it." "Looks like the trail ends here." "[ Rustling ]" "[ Screeching ]" "Look out!" "That is one big spider." "[ Hissing ]" " Ha!" " [ Screeching ]" "Hercules?" "Hercules, don't leave me here!" "Please!" "[ Laughing ]" "Hercules, hurry up!" "Paxxon." "Hercules!" "Paxxon, hang on!" "Don't move, Paxxon!" "I'll be right there." "[ Screeching ]" "[ Thunderclap ] Uh-uh." "Hercules, I'm outta here!" "No!" "Stay where you are!" "Come on!" "Come on!" "Paxxon!" "Paxxon!" "This way!" "Paxxon!" "Stop running!" "Paxxon!" "Whoa!" "[ Screeching ]" "[ Screaming ]" "Paxxon!" "Paxxon!" "Paxxon!" "[ Thunderclaps Continue ]" "I say we hop on that ship and take our chances." "You wanna sail through a hurricane in a leaky boat?" " You got a better idea?" " Forget the ship, Nebula." "We'd never make it." "At least we'd have a chance." "We don't even know what we're up against here." "I do." "Her name is Arachne." "She was a queen." "Very vain, very cruel, very beautiful." "But her daughter was even more beautiful than she was." "So Arachne threw her own child into the sea." "So the gods put a curse on her, right?" " Exactly."
" They did a good job too." "When the gods curse people, why can't they turn 'em into a gerbil or a hamster?" "Yeah, something we could drop-kick." "[ Chuckles ] It's gonna take a lot more than that to stop her." "Well, as long as we're blundering around here in the dark, we're at a big disadvantage." "Arachne moves in the shadows because she's ashamed of her appearance." "Light is her enemy." "You thinking what I'm thinking?" "Let's light this place up." "[ Insects Buzzing ]" "Wearily, the rose petals fell... when the sun turned its back." "Who are you?" "[ Shuddering ]" "Do you think I'm... beautiful?" "Yes." "[ Sighs ]" "Please, let me go." "[ Shudders ]" "There's nothing more exciting... than the look on a man's face... when he knows he's about to die." "Please, I'll do anything." "Shh." "I want to give you a part of myself-- something you'll keep deep inside." "[ Muffled Groans ]" "You're sure all these tunnels lead through here?" "Yeah, I'm sure." "All right." "Keep those torches handy." "Don't worry." "Nothing gets in here we don't see it." "We'll be okay." "Yeah." "Well, don't start the barbecue without me." "Yeah." "Be safe." "You too." "Good luck." "You too." "[ Chuckling ] What?" "Hey, nothing." "Quite a firm handshake between you two." "Yeah, so?" "So, I think it's great." "Well, I'm glad you approve." "Yeah." "So, how long you two been together?" "We've been partners since we were kids." "Oh, "partners."" "Do you ever think anything you didn't say?" "Life's just too, um, short to mince words, Goldilocks." "You know, Nebula, if you're trying to get a rise out of me, it ain't gonna work." "If I were tryin' to get a rise out of you, [ Gasps ] you'd know it." "Ah." "Now I recognize you." "Hmm?" "You're the woman my mother warned me about." "Mmm." "[ Chuckling ]" "Ah." "Anything?" "Not a sign of her." "This doesn't make sense." "She has to come through here." "Something's not right." "Yeah, like maybe there's a passageway you don't know about." 
"Look, this wasn't my idea." "[ Screeching ]" "You hear it?" "Oh, yeah." "But where is it?" "She's here." "There's nothing here." "Look out!" "Iolaus, behind you!" "[ Screaming ] [ Screeching ]" "Hercules!" "[ Screams ] Hercules!" "Help!" "Hercules!" "Iolaus!" "Hercules!" "He's gone!" "Then I have to find him." "No, it's over." "Iolaus is dead." "You don't know that." "I know what it's like." "My crew was as close to me as family, but he's gone and he's not comin' back." "And we have to get out of here while we still can." "I told you before:" "Where I come from we don't give up on people." "I know he was your friend, but I won't let you get me killed." "Then go." "I'm not askin' for your help." "But I hope you can live with yourself." "[ Thunderclap ]" "[ Insects Buzzing ]" "[ Grunting ]" "Paxxon?" "[ Gasping ]" "By the gods!" "[ Gasps ]" "The gods won't help you now." "Your friend felt no pain." "Neither will you." "Thanks." "That's very comforting." "[ Shudders ]" "Do you think..." "I'm beautiful?" "Yeah." "I like bugs." "So brave, willing to face death." "You'll be a loss to your kind, but a welcome addition to my family." "[ Muffled Groans ]" "Arachne!" "Get a room." "[ Hisses ] Hope I'm not spoiling the mood." "Oh, you are." "I was just gettin' into it." "Who are you?" "He's Hercules." "I can't say I'm pleased to meet you." "Oh, but I am pleased to meet you, Hercules." "Son of Zeus, who cursed me with this form." "You'll pay for your father's crime." "What else is new?" "Hercules, look out!" "[ Screaming, Groans ]" "[ Sighs ] This is one big web site." "Where were we?" "[ Sighing ]" "Ow!" "[ Screams ]" "I'm curious." "How long does it usually take you to shave those legs?" "[ Hissing ]" "Yeehaw!" "Don't bother getting up." "We'll show ourselves out." "What happened to you?" "[ Nebula ] Eew!" "[ Moans ]" "Now, let me show you how to damage a face!" "[ Hercules ] Hey!" "[ Moaning ]" "I wouldn't try that if I were you." 
"[ Groans ]" "[ Screeching ]" "You won't kill your own men." "Nice try, but they're already dead." "True, but they didn't die in vain." "From each death comes new life." "Spreading hatred isn't much of a life." "I prefer to think of it as spreading beauty." "The world will know true beauty once again." "You're fooling yourself, Arachne." "Ah." "Is this where you usually hang out?" "Ha, ha." "Hey." "Trust me." "[ Screaming ] [ Grunts ]" "I prefer to be on top." " [ Screeching ]" " Look out!" "Where did that come from?" "You don't wanna know." "I will reclaim the throne." "My army, born from the bodies of these men, will see to that!" "Sorry." "I'm gonna pull rank." "[ Screeching ]" "[ Hissing ]" "[ Screams ]" "Much better." "[ Chuckles ]" "[ Grunting ]" "[ Laughing ] I'm quite enjoying this, Hercules." "I could go on and on for days." "Sorry, but you'll have to play with yourself." "No!" "[ Screaming ]" "Well done." "[ Muffled Screams ]" "[ Coughing ] I think it likes you." "You know, I think what we need is a really big shoe." " Hercules!" "[ Screeching ]" "Get this thing off of me!" "Iolaus!" " On three!" " Okay." "Three!" "[ Sighs ] [ Chuckles ]" "Eew." "Overall, I'd have to say this has been a very disgusting day." "Yeah." "Hey, nice throw." "Nice assist." "Ah, thanks, buddy." "I hate to interrupt this really lovely moment, but can we go now?" "Yeah, sure." "Why don't you lead the way." "Yeah." "Is it me, or are you gettin' real comfortable with my goods?" "This treasure's going back to King Zolas, just like I told you." "Oh." "Now why'd you have to go and say a thing like that, huh?" "We go back there, he'll hang me." "I said the treasure's going back, not you." "Oh." "Well, guess it's worth a ticket out of that cave." "Don't wanna see any more of that." "I'm glad you changed your mind..." "about a lot of things." "Well, someone had to save your butts." "Right?" "[ Chuckles ]" "Did I, uh, miss something back there?" "All right!" 
"Let's get this bucket shipshape, huh?" "Hercules, trim the mainsail!" "Iolaus, hop up there and secure that mast!" "I'll be waiting, monkey boy." "Uh-huh." "Nah, you missed nothing." "You know, Herc, we should be pirates." "Hmm." "The two of us, swashbuckling!" "Yeah, I'd be good at buckling swashes." "In fact, you can call me Goldenbeard from now on." "Ow!" "[ Moaning ]" "So what'd you see?" "Closed-Captioned By Captions, Inc., Los Angeles"
Q: Where can I find a list of CAMLVariables and ServerVariables? Having some trouble finding a list of these variables. Any help would be greatly appreciated. More specifically, when I see this on the page:
<ParameterBinding Name="UserID" Location="CAMLVariable" DefaultValue="CurrentUserName"/>
I want to know what other CAMLVariables exist. This of course includes ServerVariables as well.
A: Here's the reference to Server Variables from my blog: http://mdasblog.wordpress.com/2007/10/19/data-view-web-part-parameters-based-on-server-variables/
A: Query schema: http://msdn.microsoft.com/en-us/library/ms467521.aspx
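For reference, a hedged sketch of how the two binding styles usually appear in a Data View Web Part. The CAMLVariable defaults commonly seen in SharePoint Designer-generated markup are CurrentUserName and Today; ServerVariable(...) wraps an IIS server variable, of which HTTP_USER_AGENT is just one example (see the query schema link above for the documented list):

```xml
<!-- Illustrative only: typical DataFormWebPart parameter bindings. -->
<ParameterBinding Name="UserID"    Location="CAMLVariable" DefaultValue="CurrentUserName"/>
<ParameterBinding Name="Today"     Location="CAMLVariable" DefaultValue="Today"/>
<ParameterBinding Name="UserAgent" Location="ServerVariable(HTTP_USER_AGENT)" DefaultValue=""/>
```

Once bound, each parameter can be referenced from the web part's XSLT as a stylesheet parameter of the same name.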